Our fear of AI is mostly a fear of the unknown. Part of that fear is also instinctive: a reaction to concentrating too much power in one "individual" (human or not). Human history teaches us that societies tend to behave more sustainably than individuals do, which is the main reason the developed countries are democracies.
To make that more obvious, let's run a thought experiment. After careful selection, access to a doomsday device is given to a number of individuals. The question is: above what number of selected people will our planet go boom within a month: 1,000? 1,000,000?
PS In the not-so-distant future, buying an AI and a gene-manipulation machine will be all a person needs to create a pandemic virus: deadly, airborne, and with an incubation period of more than a month.
The impact of AI on humanity can be better understood through an alien-contact analogy. Imagine you are the first person contacted by freshly arrived aliens, and they ask you to decide for them what to do next. They possess technology far superior to ours: general AI, cures for all diseases including death, an unlimited source of energy, climate manipulation, you name it. It is up to you to decide: should they give us the technology? If yes, which ones, and to whom, so that it gets distributed for everyone to use? But be aware that every part of that new technology has a dark side. For example, the ability to cure everything comes with the ability to genetically engineer new viruses, as well as immortality (which may lead to Earth's overpopulation).
As the current picture shows, technological change is advancing (and accelerating) much faster than society is adapting to it. Do we really wish to make that technological and scientific jump?