A definition must be specific, in the sense that no other thing would be described by it. A definition must also be useful, usually by fitting into a larger picture (context). For example, saying that consciousness is the fact that we have experiences is specific enough but completely useless. Useless because there is no larger context: each and every one of us is its own world (subjective experience). On the other end of the spectrum is physicalism/materialism: consciousness is an emergent property of matter, more specifically of our mind. It emerges in the evolutionary development of our brain, neurons, etc., once a certain level of complexity is reached, so that we can include ourselves as an entity in the model of the world that we run and constantly update. This is useful enough, but many people will object that all advanced robots use the same approach to interact with the environment, and nobody would convince the public that any of these robots is conscious. That objection is about public perception, but politics is something much more real than my scribbles here.
- – -
Consciousness, free will, and our human identity
We need the illusion of the first two to make the feeling of the third one real.
- – -
Nick Bostrom | The Vulnerable World Hypothesis!
1. Technologies are not good or bad by themselves. Their intended purpose may be good or bad, but they can always be repurposed. What Bostrom effectively suggests is to stop technological progress, or to put a lid on the pot on his terms and throw away the key.
2. Even if we accept his hypothesis of white/black balls and try to prevent the destruction of civilization, I don’t see any power, now or in the near future, with enough control to do that. On top of that, we need to avoid the 1984 scenario; otherwise, what’s the point?
3. I’m not saying a civilizational Armageddon is not a real threat, but appealing to human rationality for a solution is hopeless (pathetic, even). I don’t know of any large-scale decision in history that has been made after rational deliberation, so that would be the first. I think the only way we have some small chance is to go through some cataclysmic event that changes humanity’s state of mind, hopefully for the better.
4. AGI may offer radical rational power, but for now that is pure speculation.
- – -
Does AI understand the answers it gives us, or is it just imitating/mimicking understanding? There are at least two sides to that:
1. Psychological – the answer must fit sufficiently into our worldview, and more specifically into the particular area of expertise. It must sound convincing enough for us to engage. Optionally, it is nice if the answer is not trivial but challenges us intellectually. Stated this way, the answer received fulfills only psychological needs. If your aim is to feel better about yourself by having the feeling that you understand, you don’t need more than that.
Whether AI understands the matter under discussion is therefore a very subjective question. Users’ levels of ante facto (before the fact) understanding can vary hugely, and hence so does the way they accept the AI answer as “understanding”.
2. Practical (scientific) – you are going to use the answer in some way (explicitly or implicitly) for modeling reality. In this case, any valid method for verification/falsification of the model based on the AI response would serve your particular inference purposes. We would call the AI response “understanding” if the precision of its inference is close to (or better than) that of an expert in the respective field. There is some subjectivity here as well: who is a genuine expert, and can we always double-check with a human and compare inference precision? Still, this criterion is much more suitable for practical and scientific purposes because of its relative objectivity. In the end, how we qualify the AI response, as understanding or not, is irrelevant if it works!


