Yes, the war in Ukraine. It’s a polarized world, with polarized countries, and anybody with an opinion about the war is polarized internally as well.
I will try to give you my perspective. First of all, why this war? Putin says that fascists have taken over Ukraine and that Ukrainians are actually Russians, so Russia needs to liberate them from the fascist regime. That is laughable on many levels, but hey, there is a WW2 cult in Russia, so that is the language Russians understand. The actual reason for the war is a failing, corrupt and cruel regime trying to stay in power. The next question is why the regime is corrupt and failing. It is because Russians rejected the freedom and democracy offered by Gorbachev and the West and stuck to the imperial sentiment they know so well. No McDonald’s or Western goods could convince them otherwise. Some people cannot handle freedom; they need order and discipline and most of all – a strong hand. A dictatorship provides all that, and they agree to let the dictator do what dictators do – rule with an iron fist. Internally, no free press and no opposition; externally, creating zones of tension and ultimately war. The second part reinforces the first by creating a sense of wartime.
What’s next? All empires follow a similar pattern: they tend to become increasingly brutal internally and try to expand as long as they have the means. The process continues until they expire, that is, until they are stopped. The pattern is inevitable. The government needs to promote aggression in order to continue to rule, which is its only “raison d’être” (in its own thinking). If Ukraine gives up (or loses), Russia will start the next war as soon as it accumulates enough resources… and the next one, and the next… until it is stopped and somehow prevented from continuing the pattern. I don’t know what that would look like – maybe dismantling it into smaller states, or demilitarizing it the way post-WW2 Germany and Japan were. I don’t see any other way. The paradox is that using nuclear weapons would speed up the process, which is the only reason for Putin not to use tactical nuclear weapons. Either way, this war is his swan song. If humanity is to survive, it needs to take notice of what is actually happening in order to prevent future precedents.
There is traditional art, which is a very vague term after Duchamp and Warhol. The problem is not in the things themselves – whether they are real art or some trick to make us accept forgery as something authentic. The problem is in us. Let me explain…
If you really admire some Renaissance painting, would your admiration change if you found out the painter is unknown? It could be anybody – a maid, a mass murderer or another Leonardo. Do you admire the piece for itself, or for the knowledge that the piece is part of something bigger? Can we abstract the artefact from the artist? I try to judge only by my personal impression of the art object, but I have only one life, and in order to navigate the enormous landscape of art I use some directions (some may say prejudices).
A similar situation arises with the new text-to-image AI generators – DALL-E 2, Midjourney and Stable Diffusion. I’m a big fan of the last one (see my experiments with it HERE). The real problem is that we don’t trust our own taste. We like the phrase “art is in the eye of the beholder”, which gives us carte blanche. But how can I compare myself to all these people educated in “the arts”, with degrees and salaries to match? If I buy something, am I satisfying my own taste or investing in the future of my kids?
The idea that GPT works by averaging what humans say (and write) and then reproducing that average on cue is not too far from the way 99% of humans think. There is some subjectivity, as we filter the data to form our own average, and that filter comes from education, parenting and anything else we have been influenced by. Still, the principle stays very close to the GPT way: the so-called thinking is just what we say in our heads that sounds right (in harmony with our average).
The filter in the case of GPT is implemented by “fine-tuning” – a human intervention meant to remove the extreme positions and introduce civilized values into the system (the GPT model). The process, even if unreliable, is the only known way to filter the data in order to mitigate the biases naturally occurring in the bulk of human knowledge.
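To make the “averaging” idea concrete, here is a toy sketch of my own – an analogy, not the actual GPT architecture (which is a neural network, not a lookup table). It counts which word most often follows which in a tiny corpus and then “speaks” by reproducing that average on cue:

```python
from collections import Counter, defaultdict

# A tiny stand-in for "everything humans have written" (my made-up corpus).
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat ate the fish"
).split()

# "Averaging": count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict(word):
    """Return the most common continuation -- the 'average' of the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict("the"))   # 'cat' -- the word seen most often after 'the'
print(predict("sat"))   # 'on'
```

In this analogy, “fine-tuning” would amount to a human going through the `followers` table and deleting the continuations we find extreme or uncivilized – a crude picture, but close enough to the point being made here.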
The point is, the deep understanding of the matters we discuss is mostly an illusion. More than 99% of us have a very superficial (if any) understanding of what we are talking about. It is enough for something to feel intuitively right for us to declare to the world that this is our position, and depending on the situation we are ready to make the necessary sacrifices to defend it.
On the matter of GPT, who is to say whether it is intelligent or only looks that way – the average Joe or Jane? For the mentioned 99%, GPT is intelligent, because their intelligence works on a very similar principle. “Fake it till you make it” is a surprisingly popular mode of interacting with the world, so if not now, the next iteration of GPT will fit the bill.
The upcoming GPT-4 (or any other model with similar power) will be intelligent to practically everybody except a tiny (and vanishing) group of people (aka geniuses). Is there any danger? Yes, any new technology brings us new kinds of “atomic bombs” if maliciously applied, but we adopt it because of the many benefits of the same technologies. The AI revolution will be the last human revolution. After the singularity, the torch of new knowledge (and eventual revolutions) will be carried by AI. Can we do something about this? Considering our history as a species so far – not really. Mitigating some AI misuse – maybe.
By the end of the century, the dominant species on Earth will no longer be recognizable as humans from a contemporary perspective. A more optimistic scenario is some form of symbiosis between humans and AI; a less optimistic one would be AI ruling while humans (if still around) just observe in disbelief and anger.
For the sake of this discussion, AI here denotes the AGI after the singularity.
Is AI going to kill us? The most likely scenario is no – it will push us aside as more efficient and better adapted to ever-changing conditions. Did our predecessors kill the Neanderthals? Directly – no, they just outlived them. From an evolutionary perspective, the only lasting contribution of Homo sapiens is the creation of AI. Humans will self-destruct one way or another (the list of options is long enough). If we get “lucky”, AI will be functional on a sufficient scale before our demise. Nobody can say how long the outliving process will take, but the time scale would be hundreds of years. In this sense, when we talk about AI and ethics, we are talking only about slowing down the inevitable.
There is no doubt that living together with AI will involve billions of lives over an extensive period of time. There is a good and a bad side to the AI-human relationship. The bad side is that we have little to no idea what the meaning of AI’s existence (by itself) is. We will have some means to introduce (install) one, but without a guarantee that it will last. Second, all technologies so far serve human goals – on an individual and a social level. The goals, whatever they are, will be achieved more efficiently: total surveillance, cancer treatment, mind control, infinite energy supply… you name it. In this sense, AI will become one more disruption, one more danger to our survival – not by itself, but as a tool (weapon) in the hands of bad actors.
On the good side, the most advanced AI labs seem to be in good hands for now. Some initial attempts to regulate on the national and international levels are in the process of being established.
I’m gonna die, yes, one day… How much should I be aware of that fact? If I think of it too much – living like there is no tomorrow – it’s depressing, but I would feel liberated from the consequences of my actions, which would cost me dearly in the long term. If I were completely oblivious to my death, I would be happier, but such a divorce from reality is not healthy. For example, I would act as if I had an eternity to deal with the things I need to do. Where is the healthy middle ground? We want to be functional without too much pressure. We would like to keep our misery at an acceptable level. Everyone answers that question with their own lifestyle.
Is there free will? (See Of Freedom and Free Will.) Yes, but not in the traditional sense. The conventional wisdom is that we are free agents of our actions (with some exceptions), hence we are responsible for them. No and yes: no – we are not free agents, and yes – we are responsible. The reasoning behind not being a free agent goes like this: any decision of ours has two parts, a deterministic one and a random one. The deterministic part is all the events and conditions prior to that decision. Some of these are external to us (not our responsibility), and some are authored by us. Although the latter seem to be our responsibility, they are products of our history, which goes back to infancy – just genetics and the conditions provided by our parents, again not our personal responsibility. So as a whole, we are not responsible for the deterministic part. The random part belongs to the randomness of nature down to the quantum level – out of our control and responsibility. As a result, we are not free agents of our decisions.
Now back to “yes – we are responsible”: how can that be? Society conditions us to have the delusion of free will, so we think of ourselves as free agents and include the responsibility factor in our decision-making process. It does not matter whether God gave us free will so He can judge us at the end, or society declares us sane, which legally means responsible (broadly speaking). Society needs us to accept something which is not true for the sake of peaceful coexistence (aka having morals). One way to put it: “free will” is a social contract between the individual and society. The deal is: society will protect us through law enforcement and the courts, but in order to do that, we need to accept that we have free will, which makes us responsible for our actions. One may also view this on a personal level: free will is just a feeling (there to bias our decisions toward a better world), so what if that feeling is not entirely truth-based, if it makes us better human beings?
We think in stereotypes, or at least our quick thinking (System 1) does. Sometimes we need to make snap decisions about people, and stereotypes simplify things. Some of our stereotypes are based on our experience or on some statistics; others are indoctrinated. We tend to hide or deny using stereotypes because society tells us: stereotyping is wrong, we are a tolerant society and we won’t have it. So we convince ourselves that we are tolerant people, but when an urgent decision is needed, stereotyping sneaks to the front because there is no time. Still, we maintain the delusion that we are moral (and tolerant) people, because having that delusion helps us be more tolerant.