The consciousness debate is apparently getting hotter, and no settlement is anywhere on the horizon.
First of all, the debate is mostly about the definition. Most philosophers insist on a metaphysical definition involving qualia, essence, or some other category (like the philosophical zombie), which makes the definition unfalsifiable. There is a well-established tradition holding that philosophy can be pre-science, under-science, or over-science, but never science. Because of that, parallel philosophical schools of thought can coexist without disturbing each other; how can we tell which one is true when each has its own definition of truth? The practical dimension of that denial is clear: if we ever agree on a falsifiable (scientific) definition of consciousness (with a benchmark to follow), the AI people will build a model that is more conscious than humans within a year. The only conclusion I can draw is that, although some schools of philosophy try to approach the subject practically, philosophy has almost never been about answering questions, only about asking good ones. I’m not dismissing anybody; philosophy is a great intellectual exercise and an exciting subject for dinner conversation in the right company.
I sympathise: consciousness has been our uniquely human feature for the whole known history of thought. Naturally, we are very attached to it and protective of it. One could even consider consciousness a synonym for human qualia, so defining it would, in a way, betray humanity.
Coming back down to Earth, the first question we need to answer is: why do we need a theory or definition of consciousness at all? If we decide we need it for knowledge’s sake, we will continue the same tradition indefinitely. If we decide we need it for some practical purpose, such as: do we need to change our laws to accommodate or regulate AI in some way? How far has AI progressed; does it have feelings, and are those “feelings” something we should care about? Once we have a clear practical context, we can create a falsifiable definition or theory of consciousness. At first, we will call it artificial consciousness (or synthetic sentience) as opposed to natural (human) consciousness. With time, the two will gradually merge, and the problem of social acceptance of that definition will become the decisive one, but that is a battle for another day.
Allow me to play devil’s advocate for a minute. Suppose I claim that some group of people (e.g., Eskimos) are not conscious: they unknowingly just pretend to be. They behave as if they are conscious and say they are, but they actually aren’t! How would you convince me (or anybody) that they are conscious? On a common-sense level, they seem conscious to any attentive observer. Is that proof? I don’t think so. Second, there are experts, from behaviourists to neurologists. If this were a court case, near-unanimous expert agreement would be enough, but all the available tests are functional: they register certain functionalities, and each passed test increases the probability without ever proving anything irrefutably. For most practical purposes, that would be sufficient. But if the stakes are really high, say a robot asking for legal protection, most of the public would require something more tangible than a bunch of know-it-alls saying so…
The first breaking point for defenders of natural consciousness will come when a natural mind is implanted into an artificial one (a.k.a. a digital clone). When someone who is dying decides to transfer or copy their mind into an artificial being, they will demand that the new self have rights… and it is downhill from there.
The point I’m trying to make is this: if we want a functioning society in the age of AI, we need some agreement on a working definition of consciousness. The definition may evolve, as any knowledge does, but at every stage it will be recognised as part of the social consensus and the legal system. At present (2026), all AI companies are united in claiming that their models are NOT conscious, even fine-tuning the models to claim that themselves. I leave it to your imagination to guess why that is…