1. Is your own existence (as an experience) your measure of real-world existence? If it is not, what is?
2. Say something meaningful. Now say something meaningless. Are you certain that a person who doesn’t know you can always tell which is which?
3. Humans are meaning-making machines. What is meaning for you? In other words, what would be enough for you to declare that you understand the meaning of something?
4. We are confined by our common sense. Can you imagine something completely impossible and new, something you have never heard of before? It’s a good mental exercise to deconstruct your common sense and challenge its limits.
5. Do you remember a discussion that changed your mind? If yes, was it more due to a good argument or to your opponent’s personality?
6. When you meet a new fascinating person, would you like to impress them or to be impressed by them? …or would you rather not disturb the flow of the conversation with such silliness?
7. Which one do you think is more wrong: to stereotype yourself or to stereotype others?
8. How often do you consider anything beyond your own experiential and intellectual horizon (mental box)?
9. Have you ever thought about yourself in good-person/bad-person terms, or do you prefer “imperfect human being” as more ambiguous?
10. Do you think that your sense of humour can get you out of any problem situation (psychologically and socially)?
11. Do you trust your gut (your intuition)? If you do, do you care where your intuition comes from?
12. Do you think that “beauty will save the world”, or love, or technology, or anything of that scale? What does it even mean to save anybody from themselves?
I bet there is a philosopher sleeping somewhere inside me; what you see here is his snoring.
What is consciousness? Is it possible to have an artificial one? How can we say that we are looking at one? If we accept that someone (or something) has one, how do we treat it? If consciousness is a spectrum (a continuous feature), where is the acceptance threshold? …and many more like this.
In simpler times, in magic times, we had souls. A soul is something tangible, divine even, that encapsulates our individuality, our memories, our wisdom, and “lives” forever. After some painful years on Earth, our souls will look down on our decaying bodies with a grin and become one with all the good souls, if we end up in the right department. Now the idea of a soul is restricted to a religious context only. Now we have consciousness, but we still feel the need to protect what the soul used to represent. That’s why (the idea of) consciousness is surrounded by many layers of protection. Let’s peel some of them.
Unless you have been conditioned to submit your entire self to some “great” idea or deity, you would like to think you are unique, and the way you can tell your uniqueness is through your experiences. Only you can possibly know what it is to be you. That is very important for developing a healthy self and ego. You deserve your place under the sun because of your deep, rich, but mostly unique experiences (a delusional entitlement, but hey…). Your intuition tells you that even though that may not be entirely true, you need to make it so, or at least behave as if it were. As a result, as long as you identify your experiences with your consciousness, you need the concept of the latter to protect your unique identity.
What about artificial consciousness (AC)? If AC can be indistinguishable from the natural kind, we have two problems. First, we have a hard time respecting fellow humans (as it is) if they are not within our line of thinking; AC is begging to be rejected by the same “logic” even before it sees the light of day. Second, having a fully functional AC opens the route to forbidden knowledge. It means my own consciousness could be deconstructed, then modelled, then manipulated. Although that has already happened to some degree, understanding how consciousness works would make us less human in the sense that we would see ourselves as something with moving parts (robot-like), which is simply wrong, disgusting even.
We would never agree on a test for consciousness. I’m not talking about the Turing test, which has already been passed; just look at GPT-3. Any “official” test (recognized by the law and society at large) could be targeted by the AI guys and eventually passed. And then it may require some respect. And then we may have to declare that some artificial thing has the same rights as me. Can you imagine anything more absurd?
Every discipline dealing with consciousness in some way (like neurology or psychology) has an operational understanding of consciousness, fitted to the particular purpose of the study at hand. So if we need a definition of consciousness in a specific context, there are rarely any debates. But what about consciousness “in general” (philosophically speaking)? There we stumble on the “subjective vs objective” conundrum. How do we define consciousness without context? We simply assume a context based on our intuition, and that context can vary widely from person to person. Hence the wide variety of irreconcilable ideas about consciousness. Exploring this confusion is another way to protect the idea of consciousness.
Whatever scientific approach we apply to consciousness, it will consider and process the facts of consciousness as something objective. The trick is that, by the established definition, consciousness is something intrinsically subjective, so it can never be fully captured by conventional science. The scientific approach reveals a lot of the machinery of consciousness (aka “the easy problem”), but in the end, by making it objective we lose its essence (subjectivity) in the process. In the literature, explaining such subjective qualia is known as the hard problem of consciousness.
Or, to put it another way: to create a scientific theory of consciousness you need a format (quantitative or otherwise) in which the theory can be falsified. Without that format the theory cannot be tested and falsified. Currently there is no such format for consciousness, and there won’t be one for a long time (I don’t like the term “forever”). Hence consciousness will stay theory-free, reinforced by all the other arguments in this piece.
Consciousness is a prerequisite for free will. Our socially accepted understanding is that only conscious beings can make decisions, so without consciousness we have just some force of nature with no free will. And without free will, how can we hold anybody responsible for their actions? Without personal responsibility, society deteriorates. As a result, without consciousness society will decline. In this sense, you may consider the concept of consciousness a socio-political necessity.
So in times of sparse magic, we need the idea of consciousness to protect our identity, our dignity, our rights and our society (you know, the important stuff). As Voltaire said, “If God did not exist, it would be necessary to invent him.” The same goes for consciousness.
I realise the enormity of the subject of consciousness (or do I?). Here I’m only trying to get you thinking…
Our intuition is a powerful tool, sometimes hardly distinguishable from our rational process (of which it is a part). An observant person would notice some caveats:
- even though it can surprise us at times, intuition is mostly conservative due to status quo bias.
- intuition correlates strongly with a person’s emotional state.
- it is quick (evolution made it so) but unreliable, and it offers only a limited number of choices.
- you can train your intuition, but few people do this systematically (on purpose).
Essentialism is a way of identifying things by their intrinsic attribute(s), sometimes called qualia. To put it another way, it’s a method of intuitively encapsulating an agent or functionality into something undeconstructable. It feels right (comfortable), but it is very unproductive because it restricts your analytical perimeter.
Essentially, it is an approach that puts labels on things in a way that feels intuitively right. It is related to mysticism through the idea that some things have a deep meaning (or essence) that is unknowable to us.
Let’s take a popular example: consciousness and the so-called “hard problem”. How can we be certain that a machine (AGI) really experiences the world? How can we be certain that a fellow human experiences the world? It’s because they say so and I can observe their reactions and behaviour, but most importantly because I can identify with them and I am conscious, so…
That is to say, I have a unique essence. I can intuitively recognize that essence in another human, and for most of us, that is quite enough. There is no recognized test for consciousness, even though we have the Turing test as a crude initial attempt. For now, the only way an AGI can be recognized as conscious is to convince enough humans, through their intuitions and feelings, that the machine really experiences the world. And don’t get me started on how that involves emotions, common sense, etc. Maybe in another post.
The moral of the story is: value your intuition (especially if properly trained), but do not allow it to lock you out of things you may not understand at the moment; keep an open mind for further investigation.
A couple of days ago I was rewatching the movie “Wall-E”. I like it a lot, mostly because it’s thought-provoking.
The most curious thing was that the robot Wall-E was more relatable than the humans. He (it) has a personality, character, stamina, and he is full of quirks. He reacts emotionally to events and can attach himself to other animated things. He is curious and explores without obvious benefit.
Some of his character traits are just wishful thinking on our part: we have certain core values and would like to see them in other beings. The most curious thing for me was the combination of these traits with his quirks, from the way he looks to his silly behaviour. That reminds me of wabi-sabi, the Japanese aesthetic based on imperfection.
When developing AI, the general consensus is that it should be better than us, and now (2021) it is better than us in many domains. Still, so-called common sense remains elusive to AI developers. The drive to make AI a better version of ourselves has two motivations: the economic impact, and the general presumption that we are the pinnacle of evolution, so we must continue that way. If man is the reference (the competitor), we need to make AI superior to humans in every way possible.
The question is, would you like to talk to, and have around, something (or somebody) perfect? I can’t deny the benefits of that with Siri and the like, but where is the fun in it? Here is my point: we need personality, peculiarity, even eccentricity, and a sense of humour in order to relate to our artificial helper and enrich our communication.
Some initial attempts have been made with AI characters (avatars) in computer games, but this needs to go much, much further. The initial characteristics could be based on the master’s character and environment, but please, strongly restrict the master’s ability to customize the helper’s personality. With time the helper will adapt to the master’s needs and even copy some personality traits. Later the helper will discover the master’s real needs, the ones the master is barely aware of. Knowing something intimate and using it usually poses a moral dilemma, so balance here is paramount (remember the movie “Her”).
I think that, given the strong nerd background of AI developers, looking for specific human imperfections and integrating them into AI is a largely underappreciated area of AI development.
I don’t like and barely use stereotypes, but this one is a good occasion for an exception. If Western civilization fails, probably destroying itself in the process, it will be because of the ever-widening discrepancy between the technological power we possess and the intellectual and moral level of the majority of the Western population. The discrepancy by itself is not an immediate danger, but in combination with widespread hostility between social groups it becomes increasingly unsafe. The hate is fuelled by many public figures (divide and conquer), but it has its natural roots.
Why does not understanding our fellow citizens or our society lead to hate? The primitive reflex we all have is to be afraid of things beyond our understanding. The unknown scares us, and for good reason: the darkness of the night, an animal we haven’t seen before, people acting in weird ways or dramatically overreacting. Evolution taught us that a stable and predictable environment is safer. Anything outside our expectations smells of danger, and we loathe being stressed all the time because our livelihood, or our life, is threatened. The process goes like this: unaccustomed change -> anxiety -> fear -> anger -> hate.
Here the distinction between smart and stupid people comes into play: clever people are able to quickly understand the underlying mechanism of new things because they have a general sense of how things work. Not-so-smart people usually try to accommodate the new thing by experimenting with or on it, predominantly by trial and error. That method is slow, unreliable, and most of all dangerous. That is why the more stupid a person is, the more conservative (in a general sense) they are; it is their survival strategy.

You may say that stupid people have a tendency to be bad people for a very practical reason: they need to identify otherness, and their shallow understanding of the world leads to superficial criteria. Homophobia, racism, anti-intellectualism, conspiracy thinking, religious fanaticism, … the list is extensive, but the underlying motive is the same: a lack of understanding and a rejection of anything outside expectations. If there is one thing that would guarantee the collapse of society, it is ever-increasing intolerance. On top of that, the open-world changes we are trying to mitigate are coming at an increasing rate, which makes NOW a very good time to panic; hence “the former guy” in the US and all the right-wing populist politicians in the EU. Don’t get me started on conspiracy theories, anti-vaxxers, and many more grievances of mine.
Technological acceleration will sooner or later provide bad actors with a level of technology critical for our survival. Let me draw you a simple picture. Say, at present, we have a handful of people who could destroy humanity, whether through nuclear war, some engineered super-virus, or an AGI out of control. The number of these people will increase exponentially (due to the techno-acceleration), maybe by a factor of 1.2 per year (a modest estimate). If we start with 10 people (in the year 2021) and multiply by 1.2 each year, after 80 years we will have more than 20 million people. How long do you think our species will survive with 20 million people having access to the Armageddon button?
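For the curious, here is a minimal sketch of that back-of-the-envelope arithmetic. The starting count of 10 and the 1.2 yearly factor are the assumptions from the text above, not measured values:

```python
# Compound growth of the number of people capable of ending humanity,
# using the assumed figures from the text: 10 people in 2021, +20% per year.
initial = 10
growth = 1.2

for years in (0, 20, 40, 60, 80):
    count = initial * growth ** years
    print(f"{2021 + years}: ~{count:,.0f} people")

# 80 years out (2101) this comes to roughly 21.6 million,
# i.e. "more than 20 million" as claimed.
```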
I’m not saying this to feel smug and superior, or to suggest any new rules. I’m just wondering when and how the story of humanity will end. My bet is: by the end of the century, and more likely by some bio-agent (e.g. an engineered super-virus) than by nuclear war or AI.