
Consciousness has been a long-standing conundrum for many reasons. Historically, the intellectual and academic need to understand ourselves came first: the observer observing the process of observation. You may call it meta-observation, or metacognition. Trying to understand that process one meta-level further up is not straightforward. Each school of philosophy has its own understanding of consciousness and of how it fits into the puzzle of the known universe. Those disagreements are unlikely ever to be resolved, because almost all of these schools consider consciousness to involve qualia: something irreducible, something undeconstructable. You can tell whether it is there, but you cannot define it, because there is nothing more basic to define it by. That is the curse of trying to apply axiomatic principles to language: it works, but only up to a point.

From a more pragmatic point of view, a proper, universal, and timeless definition is not needed as long as we can somehow determine where something stands on the spectrum of consciousness. Our society requires the presence of consciousness in order to assign responsibility. If you are deeply asleep and cause harm to someone, you will most likely be acquitted in court. Our language works in the same direction: we do not say that AI makes decisions; we say that an algorithm (or a machine) processes the available information to recommend an optimised solution. Or at least that was the case until relatively autonomous AI agents came along. Now we delegate more and more real-world capabilities to these agents, and as a consequence, both our language and our understanding of responsibility are shifting toward the realisation that this delegation, while perhaps very useful and profitable, comes with a price tag.
Are we ready to accept the payment? Aside from some media rhetoric, I don’t see any sign that we are!

Categories Artificial Intelligence, human condition


We have all heard stories about how violence was integrated into the rituals and daily life of the Vikings or the Mongols; maybe the most famous example is the Roman gladiators. The idea is simple: rulers try to normalise violence because it can produce better soldiers and a broader support base, especially if the ruler intends to conquer somebody somewhere.
Today I listened to a podcast with Julia Ioffe about the decriminalisation of domestic violence in Russia. As one paper put it: “In February 2017, President Vladimir Putin signed amendments that removed criminal liability for first-time family battery that causes no serious bodily harm.” (The Guardian). The underlying notion feels similar: if violence becomes more widespread—or more socially tolerated—it may become easier to recruit soldiers, and public reaction to “the meat grinder” in Ukraine may become more muted.
That brings me to the United States and gun control. For a long time, my understanding of resistance to tougher gun regulations rested on two factors. First, the dark side of tradition: maybe it is wrong, but many people believe it is inseparable from national identity. Second, the financial incentives: Gun & Ammunition Stores revenue was $23.1B in 2024 (IBISWorld).
But viewed through the lens of normalising violence, making firearms more accessible can also shape the public mindset: one that makes militarisation and recruitment easier, and foreign aggression feel more acceptable. Add to that the purely American tradition of near-weekly school shootings and, more recently, allegations of aggressive enforcement practices. That’s the small price to pay if you wanna be an empire.

Categories society, human condition


A long time ago, we had souls. That was the name we put on everything we didn’t understand about our inner life, about why people behave the way they do.
With time, various religions appropriated our souls, promising some really nice real estate for them once our physical bodies are no more. In return, we were obliged to follow some rules, most of them written down; but obey whoever is in charge, and that’s your ticket.
With the Renaissance and the development of the sciences, we couldn’t maintain the term soul any longer; it was too religiously charged. It was gradually renamed consciousness. Nevertheless, the sentiment of a mysterious inner life of humans remained very attractive, purely from an ego-preserving perspective.
Something had to prevent curious minds from deconstructing it and reducing our “divine” inner makings to clock-like machinery, and philosophers have a special way of making sure that never happens: declare consciousness to be qualia. Nothing, on the surface or deeper, can deconstruct qualia, because qualia are undeconstructable by definition. We can discuss any observable feature, but to have qualia you need a “special sauce” or essence, which is, again, simply a name for our ignorance about our inner life.
Here is an illustration of how the qualia of consciousness work. A thought experiment: imagine a being that looks like us, behaves like us, and claims to be conscious, but in which everything is pretended (simulated). The being has no personal experiences and no inner life; everything is just an act. This is called a “philosophical zombie”. There is no way (by definition) for us to detect such a being, because there is no way to enter someone’s mind (even with fMRI) and see what their experiences are.
Such protection of our consciousness works well on a social level. For example, how do we judge someone who was in a state of diminished cognitive capacity (less conscious) at the time of… Do we need to recognise animal rights, or soon enough AI rights, if they do not conform with our understanding of consciousness? With the rise of AI, I don’t think we can keep our precious ego intact much longer. Prepare to open your mind one last time!

Categories human condition, Artificial Intelligence


I was listening to Calum Chace, who was mostly discussing post-singularity economics. His contribution was more about asking good questions than offering functional answers. As I listened, I suddenly had a light-bulb moment.
The “age of abundance” is a popular term used to describe the post-AGI era, assuming productivity levels beyond our imagination. Humans will have to adapt—perhaps unwillingly, but with few viable alternatives. The challenges we’ll face can be divided into two broad categories: money and meaning.

Money: How will I survive in an extremely dynamic job market where work is increasingly scarce? Throughout most of human history, and especially today, what we do has been a significant part of our identity and of our sense of living a meaningful life.
Everything seems to revolve around UBI: what to include and who distributes it. Chace argues that all social programs tend toward corruption, citing the socialist experiments in Eastern Europe, the USSR, China, and other nations in the latter half of the 20th century. I don’t think that’s universally true; the Scandinavian social model shows that heavy redistribution can work without endemic corruption. Still, a large part of the world (e.g., the developing world and much of the USA) would face real risks of corruption during redistribution.
On the business side, big tech companies will hold the keys: the models, the algorithms to create new ones, the hardware, the energy sources, and so on. In essence, most of the AI replacing human labour will be their property. The current tax systems (and the economic models underpinning them), however developed, are not equipped for such a dramatic shift. The situation could escalate rapidly, given the enormous profits and the unprecedented restructuring involved.

Here’s my idea: along with centrally distributed UBI, why not create tax incentives for big tech companies to offer what I’d call “social positions” (there’s probably a better term for it)? A social position in a company would be held by a person who technically works for the company but isn’t expected to contribute unless they choose to. These roles would mostly be remote; from the company’s perspective, they’d have someone who works almost for free (sort of like a summer temp job, but better) and doesn’t take up physical space.
A social position would allow someone to contribute just enough to suppress that gut feeling of uselessness. Companies could offer minimal access to their healthcare and resources, like hardware, software, and workspace. However, access to these resources would be merit-based: the more you contribute, the more access you get.
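To make the mechanics concrete, here is a minimal toy sketch of how such merit-based access could be modelled. The tier names, point thresholds, and resources are all hypothetical placeholders, not a worked-out policy:

```python
# Toy model of a "social position": the holder owes no work, but voluntary
# contribution unlocks additional resource tiers. All names, thresholds,
# and resources below are invented for illustration.
from dataclasses import dataclass

# (points required, resources unlocked) -- the zero tier is unconditional.
TIERS = [
    (0,  {"basic healthcare", "remote workspace"}),
    (10, {"hardware loan", "software licences"}),
    (50, {"full healthcare", "office space", "compute budget"}),
]

@dataclass
class SocialPosition:
    holder: str
    points: int = 0  # earned only if the holder chooses to contribute

    def contribute(self, earned: int) -> None:
        self.points += earned

    def access(self) -> set:
        """Merit-based access: every tier whose threshold is met."""
        granted = set()
        for threshold, resources in TIERS:
            if self.points >= threshold:
                granted |= resources
        return granted

pos = SocialPosition("holder-001")
print(sorted(pos.access()))  # baseline tier only; nothing is required
pos.contribute(12)
print(sorted(pos.access()))  # baseline + first merit tier
```

The one design point the sketch is meant to capture: the baseline tier is unconditional, so the position never becomes coercive, while anything beyond it scales with contribution.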

The core idea revolves around a central dilemma: how can we avoid killing capitalism while making sure capitalism doesn’t kill us after the singularity? My (perhaps wishful) thinking says there must be a middle ground somewhere…

Categories Artificial Intelligence, society


Apparently, the consciousness debate is getting hotter, and no settlement is anywhere on the horizon.
First of all, the debate is mostly about the definition. Most philosophers insist on a metaphysical definition involving qualia, essence, or some other category (like the philosophical zombie), which makes the definition unfalsifiable. It is a well-established tradition that philosophy can be pre-science, under-science, or over-science, but never science. Because of that, parallel philosophical schools of thought can coexist without disturbing each other; how can we tell which one is true when each has its own definition of truth? The practical dimension of that denial is clear: if we agreed on any falsifiable (scientific) definition of consciousness (a benchmark would follow), the AI guys would build a model more conscious than humans in less than a year. The only conclusion I can draw is that, although some schools of philosophy try to approach the subject from a practical perspective, philosophy has almost never been about answering questions, only about asking good ones. I’m not dismissing anybody; philosophy is a great intellectual exercise and an exciting subject for dinner conversation in the right company.
I sympathise: consciousness has been our unique human feature for the whole known history of thought. Naturally, we are very attached to it and protective of it. One could consider consciousness a synonym for human qualia, so defining it would, in a way, betray humanity.

Coming back down to Earth, the first question we need to answer is: why do we need a theory or definition of consciousness at all? If we decide we need it for knowledge’s sake, we will continue the same tradition indefinitely. If we decide we need it for some practical purpose, the questions become concrete: do we need to change our laws to accommodate or regulate AI in some way? How far has AI progressed; does it have feelings, and are those “feelings” something we should care about? Once we have a clear practical context, we can create a falsifiable definition/theory of consciousness. At first, we will call it artificial consciousness (or synthetic sentience), as opposed to natural (human) consciousness. With time the two will gradually merge, so the decisive problem will be social acceptance of the definition, but that is a battle for another day.

Allow me to play devil’s advocate for a minute. I claim that some group of people (e.g., Eskimos) are not conscious. They unknowingly just pretend to be. They behave as if they are conscious and say that they are, but they actually aren’t! How would you convince me (or anybody) that they are conscious? Well, on a common-sense level, they seem conscious to any careful observer. Is that proof? I don’t think so. Second, there are experts, from behaviourists to neurologists. If this were a court case, near-unanimous expert agreement would be enough, but… all the tests are functional: they verify certain capabilities, and each passed test increases the probability without ever concluding the matter irrefutably. For most practical purposes that would be sufficient. But if the stakes are really high, say a robot asking for legal protection, most of the public would require something more tangible than a bunch of know-it-alls saying so…
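To illustrate the “increases the probability but never concludes irrefutably” point, here is a minimal Bayesian sketch; the tests and likelihood ratios are invented for illustration. Each passed functional test multiplies the odds that the subject is conscious, yet with finite evidence the posterior can only approach 1, never reach it:

```python
# Toy Bayesian model: each passed functional test raises the probability
# that a subject is conscious, but can never push it to certainty.
# All priors and likelihood ratios below are illustrative assumptions.

def update(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior_odds = prior_odds * LR."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical functional tests, each with how strongly a "pass" favours
# genuine consciousness over a perfect pretender (likelihood ratio > 1).
tests = [
    ("behavioural interview", 3.0),
    ("mirror self-recognition", 4.0),
    ("fMRI response pattern", 5.0),
]

p = 0.5  # agnostic prior
for name, lr in tests:
    p = update(p, lr)
    print(f"after {name}: P(conscious) = {p:.4f}")

# The probability climbs (0.75 -> 0.92 -> 0.98) but, with finite
# likelihood ratios, it only approaches 1 asymptotically: no finite
# battery of functional tests yields irrefutable proof.
```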
The first breaking point for defenders of natural consciousness will come when a natural mind is implanted into an artificial one (a digital clone). Someone who is dying and decides to transfer/copy their mind into an artificial being will demand that the new self have rights… and it’s downhill from there.
The point I’m trying to make is this: if we would like to have a functioning society in the age of AI, we need some agreement on a working definition of consciousness. The definition may evolve, as any knowledge does, but at every stage it will be recognised as part of the social consensus and the legal system. At present (2026), all AI companies are united in claiming that their models are NOT conscious, and they even fine-tune the models to claim that themselves. I leave it to your imagination to guess why…

Categories Artificial Intelligence, society