There is traditional art, which is a very vague term after Duchamp and Warhol. The problem is not in the objects themselves, not in whether they are real art or some trick to make us accept a forgery as something authentic. The problem is in us. Let me explain…
If you really admire some Renaissance painting, would your admiration change if you found out the painter is unknown? It could be anybody: a maid, a mass murderer, or another Leonardo. Do you admire the piece for itself, or for the knowledge that it is part of something bigger? Can we abstract the artefact from the artist? I try to judge only by my personal impression of the art object, but I have only one life, and to navigate the enormous landscape of art I rely on some directions (some may say prejudices).
A similar situation arises with the new text-to-image AI generators – DALL-E 2, Midjourney and Stable Diffusion. I’m a big fan of the last one (see my experiments with it HERE). The real problem is that we don’t trust our own taste. We like the phrase “the art is in the eye of the beholder”, which gives us carte blanche. Yet how can I compare myself to all these people educated in “the arts”, with degrees and salaries to match? If I buy something, am I satisfying my own taste or investing in the future of my kids?

Categories Artificial Intelligence

The concept that GPT works by averaging what humans say (and write) and then representing that average on cue is not too far from the way 99% of humans think. There is some subjectivity, as we filter the data to arrive at our own average, and that filter comes from education, parenting, and anything else we have been influenced by. Still, the principle stays very close to the GPT way: the so-called thinking is just whatever we say in our heads and sounds right (in harmony with our average).
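To make the “averaging” intuition concrete, here is a minimal sketch – not how GPT is actually built, just a toy illustration of the idea. It counts which word tends to follow which in a tiny made-up corpus and then, on cue, emits the most “average” continuation. The corpus and the function name are invented for the example.

```python
from collections import defaultdict, Counter

# Tiny corpus standing in for "what humans say" (invented for illustration).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def continue_text(prompt_word, length=4):
    """Continue a prompt by repeatedly emitting the most frequent
    (i.e. most 'average') next word seen in the corpus."""
    word, out = prompt_word, [prompt_word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # -> "the cat sat on the"
```

The output is exactly the blandest consensus of the corpus, which is the point the post is making about most everyday “thinking”.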
The filter in the case of GPT is implemented by “fine-tuning”. That is a human intervention intended to remove extreme positions and introduce civilized values into the system (the GPT model). The process, even if unreliable, is the only known way to filter the data so as to mitigate the biases naturally occurring in the bulk of human knowledge.
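Continuing the toy sketch above, the “filter” can be pictured as a human-curated step that discards flagged continuations before the model speaks. This is a cartoon of the idea, not RLHF or any real fine-tuning procedure; the counts and the blocklist are invented for illustration.

```python
from collections import Counter

# Hypothetical raw next-word counts for some prompt: the model's "average"
# before any filtering (values invented for illustration).
raw_counts = Counter({"reasonable": 40, "boring": 25, "extreme_slur": 20, "rant": 15})

# A human-curated blocklist standing in for the "civilized values" filter.
disallowed = {"extreme_slur", "rant"}

def filtered_choice(counts, blocked):
    """Drop the flagged continuations, then return the most common remaining
    one -- a cartoon of fine-tuning as a filter applied to the average."""
    allowed = Counter({w: c for w, c in counts.items() if w not in blocked})
    return allowed.most_common(1)[0][0] if allowed else None

print(filtered_choice(raw_counts, disallowed))  # -> "reasonable"
```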
The point is, the deep understanding of the matter we are discussing is primarily an illusion. More than 99% of us have a very superficial (if any) understanding of what we are talking about. It is sufficient for something to feel intuitively right for us to declare to the world that this is our position, and depending on the situation we are ready to make the necessary sacrifices to defend it.
On the matter of GPT, who is to say whether it is intelligent or only looks that way – the average Joe or Jane? For the mentioned 99%, GPT is intelligent because their intelligence works on a very similar principle. “Fake it till you make it” is a surprisingly popular mode of interacting with the world, so if not now, the next iteration of GPT will fit the bill.
The upcoming GPT-4 (or any other model of similar power) will be intelligent to practically everybody except for a tiny (and vanishing) group of people (aka geniuses). Is there any danger? Yes, any new technology brings us new kinds of “atomic bombs” if maliciously applied, but we pursue it anyway because of the many benefits of the same technologies. The AI revolution will be the last human revolution. After the singularity, the torch of new knowledge (and eventual revolutions) will be carried by AI. Can we do something about this? Considering our history as a species so far – not really. Mitigating some AI misuse – maybe.
By the end of the century, the dominant species on Earth will no longer be recognizable as human from a contemporary perspective. A more optimistic scenario is some form of symbiosis between humans and AI; a less optimistic one is that AI rules while humans (if still around) just observe in disbelief and anger.

Categories Artificial Intelligence, society

For the sake of this discussion, AI here denotes the AGI after the singularity.

Is AI going to kill us? The most likely scenario is no: it will push us aside, being more efficient and better adapted to ever-changing conditions. Did our predecessors kill the Neanderthals? Directly, no – they just outlived them. From an evolutionary perspective, the only lasting contribution of Homo sapiens is the creation of AI. Humans will self-destruct one way or another (the list of possible ways is long enough). If we get “lucky”, the AI will be functional on a sufficient scale before our demise. Nobody can say how long the outliving process will take, but the time scale would be hundreds of years. In this sense, when we talk about AI and ethics, we are talking just about slowing down the inevitable.

There is no doubt that living together with AI will involve billions of lives over an extensive period of time. There is a good side and a bad side to the AI–human relationship. The bad side is that we have little to no idea what the meaning of AI’s existence (by itself) is. We will have some means to introduce (install) one, but with no warranty that it will last. Second, all technologies so far serve human goals, on an individual and a social level. Those goals, whatever they are, will be achieved more efficiently – total surveillance, cancer treatment, mind control, infinite energy supply… you name it. In this sense, AI will become one more disruption, one more danger to our survival; not by itself, but as a tool (weapon) in the hands of bad actors.
On the good side, the most advanced AI labs seem to be in good hands for now. Some initial attempts at regulation on national and international levels are in the process of being established.

Categories Artificial Intelligence, society

I’m gonna die, yes, one day… How aware of that fact should I be? If I think about it too much, I end up living like there is no tomorrow: it’s depressing, but I would feel liberated from the consequences of my actions, which would cost me dearly in the long term. If I were completely oblivious to my death I would be happier, but such a divorce from reality is not healthy; for example, I would feel I had an eternity to deal with the things I need to do. Where is the healthy middle ground? We want to be functional without too much pressure, and we would like to keep our misery at an acceptable level. Everyone answers that question with their own lifestyle.

Is there free will? (see Of Freedom and Free Will). Yes, but not in the traditional sense. The conventional wisdom is that we are the free agents of our actions (with some exceptions) and hence responsible for them. No and yes: no, we are not free agents, and yes, we are responsible. The reasoning behind not being a free agent goes like this: any decision of ours has two parts, a deterministic one and a random one. The deterministic part is all the events and conditions prior to that decision. Some of these are external to us (not our responsibility); some are authored by us. Although the latter seem to be our responsibility, they are products of our history, which goes back to infancy, and infancy is just genetics plus the conditions provided by our parents – again, not our personal responsibility. So, as a whole, we are not responsible for the deterministic part. The random part belongs to the randomness of nature, down to the quantum level – out of our control and responsibility. As a result, we are not the free agents of our decisions.
Now back to “yes – we are responsible”: how can that be? Society conditions us to hold the delusion of free will, so we think of ourselves as free agents and include the responsibility factor in our decision-making process. It does not matter whether God gave us free will so He can judge us at the end, or society declares us sane, which legally means responsible (broadly speaking).

We think in stereotypes, or at least that is how our quick (System 1) thinking works. Sometimes we need to make snap decisions about people, and stereotypes simplify things. Part of our stereotypes are based on our experiences or some statistics; another part is indoctrinated. We tend to hide or deny using stereotypes because society tells us that stereotyping is wrong: we are a tolerant society and we won’t have it. So we convince ourselves that we are tolerant people, but when it comes to an urgent decision we rationalize the stereotype we use, because there is no time or not enough information. Still, we maintain the delusion that we are moral people, because having that delusion helps us be more tolerant.

You take diazepam and you calm down. You drink a double scotch and… We can alter our psychological state at will. Are these altered states our experiences? It’s hard to say no. We know a bit about biochemical regulation and accept that all induced experiences are ours. It does not matter how far the alteration goes, and there are many other ways to shape our state of mind. Nothing new or surprising so far, is it?

The point is that, as our knowledge advances, whatever gives us our experiences is slowly but certainly being deconstructed. At some point, probably with the help of AI, we will get to the bottom of it. There is no conceivable reason not to. A model which can predict our experiences in any thinkable situation is not as far away as you’d like to think. Is the model of my consciousness conscious itself? Functionally speaking, what’s the difference? Well, mine is made out of meat, and I need to protect its importance, aka my ego. Being a good model, the model “feels” the same way. There is no way out of this unless we declare some mystical qualia, some inexplicable essence of my experiences. As long as I can keep the concept of mythical consciousness in public discourse, I can claim my privileged status, and there is nothing to convince me otherwise.

Let me offer you a thought experiment; one may say it is an advanced version of the Turing test. Imagine that, after significant advances in neurology and computer science, we are able to replace parts of the brain with functional equivalents made of “not-brain tissue”. These brain artificial parts (BAP) could vary widely in size and functionality. The question now is: where is the threshold above which we will consider the person to be legally different? What about replacing most of, or even the whole, brain? Will we need to enforce a declaration of BAP in order to legally decide whether this is the same person or not? Our mind and our consciousness will be physically and functionally deconstructed, understood, and modeled; that is the way of our evolution and adaptation.

Categories human condition, Artificial Intelligence