A while ago, we had gods. I like the Greek ones because they are so archetypal (our archetypes are probably modelled on them). From a practical perspective, it was clear: a person with a problem would call the respective divine department (god) and offer a deal, say, two years of rain and good crops for one sacrificed virgin. The logic was simple: we don’t know how to make rain, but we do know how to make deals, so we stick to what we know, add a healthy dose of wishful thinking (a.k.a. religious thinking), and we seem to have a solution.
Nowadays the divine is slowly but surely going out of fashion, and soon we will have AGI, which we will see much the way the ancient Greeks saw their gods.
AI has been in popular culture for several decades, mostly as a villain, not always with bad intentions, but still… Projecting our insecurities onto anything new and unknown is an old human habit. Everything we don’t understand makes us nervous and angry; that is an evolutionary way to prepare our bodies for any eventuality. I like the movie “Her”: Samantha (the AI) has good intentions, but in the end “she” hurts the protagonist. The moral of the story (for me) is that we rarely know what we want, and when we do, we usually settle for some superficial replacement because “life is short”.
Let’s go back to reality and the current state of AI affairs. AI is already in our lives, from forecasting financial markets to Alexa. Specialization is the name of the game (so-called narrow AI). Even PAs (personal assistants, like Siri) give us some flavour of what AGI could be; you don’t have to be an AI expert to reach their limits. So… specialized AI: beating any human at any game imaginable (AlphaZero and MuZero); protein folding (AlphaFold); you name it, we (DeepMind) will solve it. Other specialized AI marvels: GPT-3 and DALL-E from OpenAI. Specialization is a natural way; some may say (even if a philosopher would disagree) that there is no such thing as general knowledge, that all knowledge is specialized (at least all useful knowledge). General knowledge (if it exists) only creates the pleasant feeling that you have seen the world from a higher perspective (Olympus in the olden days). So the very front edge of AI (arguably AGI) today is not general knowledge but meta-learning: a methodology for how to create knowledge.
How do we see, and how will we see, AI and AGI? The same principle as in ancient Greece applies: you have a problem, so you hope, and fear at the same time, that AI will solve it. Say you have the usual impostor syndrome at your workplace. Your hope is that AI will help you resolve a particular headache you have been struggling with, but you are afraid that you are becoming increasingly useless and that at some point you will join what Y. Harari calls the useless class. In the same way, a sailor prays to the god of the sea for his empathy (or pity) but knows that he is entirely at his mercy. You crave a date and Aphrodite is on permanent vacation? Here is our dating site, with AI experts in profile matching. Among the results you see, there are some really catchy ones, but how far can you rely on the algorithm (and what about your gut feelings)? You need more security? Everything from encryption systems to killer robots is just waiting for you to invest, with the bleak hope that the killer robot won’t harm you after it has exterminated your enemies. The common trend here is that in solving one problem we usually create another, as the means become more and more powerful and dangerous. AI is intended to be the ultimate tool: one that will invent tools for us. Do we care that something smarter than us could have its own intentions?
On top of all this, the way we see AI reflects our general state of mind (e.g. anxiety/paranoia or optimism). The paperclip maximizer is a good example of that. A philosopher feels comfortable in the world of thought and not so much when it comes to utility (usefulness). He creates a scary archetype of a utility-function AI that will turn the known universe into paperclips if it gets the chance (and yes, there are more layers to this).
Do you think that humanity is ready (or ever will be) for AI, the ultimate power toy? …and no, I don’t know; I’m just putting one word after another.