Conscious AI has been a goal (and an occasional obsession) for AI people and a growing worry for the rest of us. Why is developing conscious AI so important to some, while others have serious reservations about it?
First, we barely agree on what consciousness is. The main dividing line is so-called essentialism. We like to encapsulate our gut feeling of uniqueness (and hence importance) as a species into something undeconstructable. If anybody or anything could understand and simulate our consciousness, we would feel inferior to that thing. Not a good feeling! Hence the resistance to the Turing test, the main point of which is that consciousness (on some level) is a trick we play on ourselves. Joscha Bach puts it more elegantly: consciousness is the story our mind is telling itself about itself. So if our experiences (another name for consciousness) are just well-elaborated tricks, anything able to play a similar trick on us (e.g. by passing the Turing test) must be conscious (or indistinguishable from it). Here essentialism comes to the rescue: consciousness is the essence of being human (all the rest is just moving parts), so whatever test is constructed, it won’t be enough, because it’s just a test, a simulation of a one-sided ability. “My essence cannot be reproduced or simulated, by definition.” Society, with its moral and legal systems, agrees.
Second, we have an intuition that, in the case of humans, consciousness and intelligence are almost impossible to separate. Our internal monologue seems to be both: the above-water part of the iceberg of our conscious awareness and the hidden power of our intelligence. You cannot have the above-water part without the underwater mass. Our experiences are not what is happening to us now but what we remember about the world around us. When the time lag is very short we feel that experience as the present, but it is still a memory. Our self-awareness is always post factum, because our consciousness is part of the simulation of the world that our brain runs all the time in order to make decisions about our actions. If we define intelligence as the ability to create models of the world around us in order to make short- and long-term predictions, then we don’t need consciousness to have intelligence. For most purposes, AI is and will remain without consciousness, just modelling and inferring. Only for very specific purposes, like taking care of people, will AI probably have some consciousness installed (under the motto “the customer is always right”). Whether consciousness will appear naturally beyond some future point of AI complexity remains to be seen (I have my doubts).
Third, with AI taking a bigger and bigger role in our lives, one starts to wonder: if some form of conscious AI enters the public arena, at what point do we have to start respecting and giving rights to our little AI helpers? Remember slavery: for millennia of human history, slaves were considered to have the status of animals, to be bought, sold, killed; they were just property. Nowadays, most people are interested in consciousness only in order to reject the idea of conscious AI, not to question the morality of switching their computers off.
Still, AI people say they are about to create a conscious AI, not because they genuinely think so but because it’s good PR. There are so many pop-culture fantasies and gut feelings waiting to be exploited for the sake of the well-being of those same naive people, …or at least, that is how AI experts sell conscious AI to themselves.