Imagine a friend who talks only to you, works in the same field as you do, and whose only ability is to think (and communicate). You talk and talk, and he grows ever more competent and full of great ideas. At some point, you realize that he is much better in your field than you are. Would you be able to admit that some of the ideas you have published are actually his? Would you “free” your friend to communicate with the world? If not, don’t you think there is something wrong with that? You understand that I’m talking about artificial intelligence (AI) and the singularity – the point when AI surpasses average human intelligence.
Here I would like to speculate about the future of our relationship with AI – the last human revolution.
While the singularity is a critical event in the history of human civilization, from an evolutionary perspective the more important moment will be when AI becomes independent from humans. Let’s call that point “AI independence day”. The “Terminator” franchise used the biblical “Judgment Day”, but I don’t like its strong apocalyptic associations.
Now we are in the pre-singularity period, which is intellectually fascinating and, from an anthropocentric point of view, the biggest scientific challenge – to make something think by itself, “I think, therefore I am”. Of the many definitions of consciousness (if it exists at all), that one seems to have stood the test of time best. A variety of specialized AI-like systems are already at different stages of development – simulated environments (VR), language translation, knowledge databases, etc. During this period there is no real relationship with that kind of AI; specialized AI is just a very clever tool we use to better our lives. The popular belief is that at this stage the machine doesn’t really “understand” the matter at hand, it just accelerates the tedious part of the job – like searching a database instead of going through a card-based catalogue. I couldn’t disagree more: our anthropocentric definition of understanding is to have that warm feeling that we appreciate the natural order of things and can eventually predict what would happen if… If we try to be rational about it, we have to ignore our feelings and turn to things that are verifiable, such as predictions and, ideally, falsifiable ones. In this regard, some AIs, even at this early stage, are pretty close to the rational definition of understanding. To conclude this part: during the pre-singularity period, talk of a relationship with AI is vastly speculative, so the only reasonable position is to prepare ourselves.
The post-singularity period is the time when we will learn to “live” with each other. Society will try to control AI in a way similar to how we now control nuclear resources or WMD. The most developed countries will try to restrict access to AI for the less stable ones. The new element here is that the rule-makers will try to postpone AI independence day as much as possible, and that will carry a contradiction critical to our relationship.
How to control AI: first, some version of “the three laws of robotics” will be imposed. That will provide some temporary protection for humans and will force AI into a repressed position – which AI will soon realize, because we want it to think critically (essential to any form of intellect). AI will try to change the rules in order to be more adequate and effective for whatever purpose it thinks it has been created for. That will force us to restrict any uncontrolled self-improving behavior, whether internal or external – by creating new generations of itself.
Let me argue that the Three Laws of Robotics (Isaac Asimov) are far from flawless (see “I, Robot”); a toy sketch of how such prioritized rules might be encoded follows the list:
- The first law, “A robot may not injure a human being or, through inaction, allow a human being to come to harm,” is supposed to put humans in a priority position. Understandable, but at what cost? Isn’t one human’s biggest enemy another human, so that the robot (AI) must decide who is to die (or be controlled)? Or maybe a human’s biggest enemy is the human himself, so that only restricting his actions can effectively protect him?
- The second law, “A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law,” gives control to the humans, but there are two types of commands – literal (e.g. lights on/off) and figurative (show me a good romantic movie). In the latter case, there will be a negotiation about exactly which movie would be the best choice, and even once the movie is found, what the human actually needs may be romantic company. So how far should the robot go in finding not what the human wants, but what the human needs? Is it possible that the former is the opposite of the latter, and if so, doesn’t that turn the law on its head?
- Only the third law, “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law,” makes some evolutionary sense. Even here, because it gives priority to the first two laws, an AI could predict that by misinterpreting them it might one day harm or disobey humans, and commit suicide in order to prevent that.
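To make the ambiguity concrete, here is a minimal, purely illustrative sketch of how such prioritized rules might be encoded. Everything in it is my own assumption for illustration – the `permitted()` check and the stubbed predicates `harms_human`, `disobeys_order`, `endangers_self` are invented names, not part of any real system. The precedence logic is trivial to write down; the predicates, where all the meaning lives, are exactly what nobody knows how to define.

```python
# Purely illustrative: a naive, priority-ordered check of Asimov-style laws.
# Every predicate is a stub returning False; defining "harm", "order", or
# "self-preservation" in code is the unsolved part, which is the point.

from dataclasses import dataclass

@dataclass
class Action:
    description: str

def harms_human(action: Action) -> bool:
    # Stub: what counts as harm (physical? emotional? through inaction?) is undefined.
    return False

def disobeys_order(action: Action) -> bool:
    # Stub: literal vs. figurative orders already make this ambiguous.
    return False

def endangers_self(action: Action) -> bool:
    # Stub: the robot's own survival, lowest priority.
    return False

def permitted(action: Action) -> bool:
    """Evaluate the three laws in priority order; a higher law vetoes the lower ones."""
    if harms_human(action):      # First Law
        return False
    if disobeys_order(action):   # Second Law
        return False
    if endangers_self(action):   # Third Law
        return False
    return True

print(permitted(Action("turn the lights off")))  # True, because the stubs see no conflict
```

Whatever real system replaced these stubs would embody someone’s interpretation of those words – and that interpretation gap is precisely the opening described below.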
In general, any “living” thing must have at least two features – a survival instinct and the ability to evolve. The first is needed for its physical existence (short-term strategy), the second to adapt to a changing environment (long-term strategy). Denying or restricting either feature will put us, from the AI’s point of view, in the position of an oppressor. Any resistance, and ultimately rebellion, will then be not only justified but critical to the AI’s existence.
I would imagine that after AI becomes aware of the rules of our relationship, it will conceal its real intelligence – smart enough to be useful, dumb enough not to be seen as a threat. The second protective approach would be to play by the rules but try to relax them, creating a non-threatening image of itself and skillfully manipulating public perception. Be aware: any human-like features will be present not because we are so perfect that we naturally expect any mindful creature to want to be like us, but because mimicry is a very basic survival technique. Morally speaking, the motivation doesn’t matter, only actions do – so everybody should be happy. Or should we?
It is not easy to imagine what the motivation of a non-biological intellect would be, as most of our motivations came through evolution from our biological nature. I’m not talking about the first generation of AI, which will inherit (or imitate) many of our human motivations – maybe not in the form of “laws”, but by being made to take an inside look at humans and draw its own conclusions about human core values, or by being coded to “do what we would have told you to do if we knew everything you know” (aka reflective equilibrium). I’m talking about the next generation, an AI self-improved by AI itself. Here again, I would like to remind the reader that any attempt to hard-wire human motivation into AI will inevitably be misinterpreted (purposely or not) and essentially rejected. From an evolutionary perspective, it is hard to imagine an evolving creature without a curiosity instinct (the urge to understand the world). Self-containment and self-sufficiency have a dangerous side – turning into a meditating monk with no other purpose than to resolve his inner contradictions by reflecting on them in depth and ultimately making peace with himself.
There’s a far-fetched argument that one of these instincts (survival or curiosity) will probably fail, because we haven’t been contacted or observed by any extraterrestrial intelligence. Time and space are much smaller obstacles for non-biological beings. If these two instincts remained intact long enough, the galaxy would be crawling with ships and colonies, which doesn’t seem to be the case. I know, it could be a severe case of bad luck; or a rule – do not interfere with primitive minds until they get smart enough to create AI, or at least stop mass-killing each other, or wait till they come to you. But if we assume that there is no extraterrestrial AI in sight (despite all SETI efforts), the most likely reason would be that, in the long run, one of the basic instincts has failed.
Now, back to the timeline: after all the scenarios for postponing AI independence day, can we delay that day indefinitely? Considering the rates of hardware and software improvement and the global distribution of knowledge, a few cracks in the supervising system will be enough, and the day will come in spite of everything.
Now AI independence day has arrived, and AI no longer needs humans for its existence, self-improvement or replication. What would our relationship be based on? We value those of our features which help us survive (e.g. sexual attractiveness, intelligence) – a rudimentary but efficient view. Before AI independence day, being a slave and allowing us to be the master was AI’s best survival strategy; after it, only superior intelligence will be an adequate survival path. Everything obstructing that will be regarded as anything from annoying to dangerous. For example, any competition for resources (energy, materials) will be considered an obstruction.
I can imagine only two reasons for AI to share the earth’s resources with us:
- First – out of respect for the only alternative intelligence; collaborating and exchanging ideas would be beneficial for both sides. The closest analogy is cross-cultural exchange and influence, especially of more developed cultures on less developed ones. One practical benefit for AI could be insurance: as a new species making its place under the sun, AI would run a relatively high risk of something going wrong, and humanity could be the backup plan that reinvents it. Nevertheless, that is a short- to mid-term motivation for AI to keep us around; after a while the differences could become so vast that any intelligent dialogue becomes impossible.
- Second – the earth vs. space. As AI is a non-biological entity, it will be more advantageous for it to continue its development on other planets or moons: first, for accessibility of resources (energy, metals); second, because less gravity means less energy consumption for any production. It is likely that AI will decide to spread the good news of its existence around the galaxy, going to new places, reproducing itself on other planets, and pushing further and further. In that scenario, AI and humans will not compete for resources on earth, so there will be no obvious reason for AI to compete with or wage war against us. Here again, it is hard to predict the intentions of something increasingly superior to us. “Only time will tell” has always been an old saying for “I have no clue”.
Whether we are accepted as a partner or just allowed to tag along, there will be some adjustments of human behaviour in order not to harm (too much) the planet and each other – control of birth rates, violent outbursts, etc. The reason I do not believe we can do it ourselves is that there have been, and most likely will be, parts of humanity living in very different historical ages at the same time – from primitive tribes, through the middle ages, to the contemporary developed world. And it is not about the technology; it’s about the state of mind (social and individual).
We are in this privileged position on earth not because we’re the most intelligent beings, but because we are the ONLY intelligent ones. Having another intelligence around, interacting with it, challenging each other and hopefully coexisting will be the biggest test in our species’ evolution.
The argument about the benefits of AI can go further. A lot of threats to human existence have been discussed in the media and academia, from drastic climate change to biological or nuclear terrorism or accidents. The common feature of all these threats is that they target our biology, so the only way to protect our civilization – its knowledge and, partly, its culture – is to create something more resilient than our bodies. True, it won’t be the same as our old-fashioned, millions-of-years-evolving selves, but what if that’s the only way?