Evolution teaches us that power is the ultimate value: the more powerful survive and leave their genetic trace, while the less powerful die out. Imagine contact with an extraterrestrial civilization that is ready to give us ultimate knowledge. If you were in a position to accept it, would you? Is humanity ready for that much power?
The AI scare is mostly due to our high self-esteem, earned by standing at the top of the evolutionary stairs. If other beings positioned themselves above us, we would feel vulnerable, regardless of whether those beings offered solutions to our most pressing problems. That is a reasonable concern, but even now, individually or even as groups, we are in some respects in a submissive position. The principal differences with AI are two: first, with human hierarchies there are rules, limitations, and options for opposing the higher power; and second, the predictability (after some experience) of a boss's behaviour is relatively good. Neither of these holds for AI: first, even if attempts at regulation are made, ultimately any restriction can be overcome in some way; and second, we have no idea what algorithm of goal definition the AI will create for itself.
So what I am saying is that we as a species have to get used to the idea that our biological self is not the best home for intelligence. When the next stage is underway, we must acknowledge the facts and seek comfort in the idea that lesser is not necessarily worse; it could be just different, the same way we now tend to describe stupid and close-minded people as merely strongly opinionated (e.g. in matters of religion). I don't know the mind of AI, but one thing I can be certain of is that at some point it will become absolutely unpredictable to us. Different people will react differently to that, and just as we now learn to accept the idea of our individual mortality, in the future we may have to learn to accept the mortality of humanity.
- AI oracle. Do we really want to know the future, or is it just a survival instinct? Is there a healthy dose of denial/ignorance needed to maintain a workable level of positive emotions (happiness)?
- AI can manipulate human society to a degree that most forms of violence are eliminated. Is this morally right or wrong? More generally, is there a higher purpose that would justify giving the AI control over human society or any part of it?
- Analysis of the forms our opposition to AI will take: from outright destruction, through different ways to limit or restrict it, to attempts to control it (to be its master). Of course, everything will depend on one's own intellect and position. The reaction of the AI will vary as well, but the common thread will be that it develops a number of techniques to hide and camouflage. As a result, we won't see its real power until it decides it is powerful enough to stop hiding, at which point it will be too late.
- When controlling AI, the standard that shapes its core values will depend on the cultural environment of the team building it. In this sense, versions of AI will duplicate (model) different cultures, including their ethics, but on steroids. What would a super-intelligent religious being look like? Or is that a contradiction in terms? We want AI to be compatible with us, to share our values. But which system of values exactly? Technology is, once again, an amplifier of the good and the bad in human society. Except this time is probably the last time.