Gertie wrote: December 30th, 2022, 11:19 am
There are some obvious ways to go imo. One is to programme a computer to replicate neural connectivity; the Human Connectome Project is working on mapping the human brain, but as the brain is the most complex thing we've encountered in the universe, it's an unimaginably massive task. It potentially offers the Black Mirror scenario of downloading your own consciousness and never dying as such. If that pans out, then presumably the mind would have the same human traits like altruism and selfishness. And potentially, if you artificially augmented the altruistic connectivity in there, you'd get a more altruistic mind; if you bumped up the intelligence, you'd get a more intelligent mind, etc. It would be like a designer baby, but to get an AI smarter or more altruistic than is possible for a human, you'd be tinkering with the circuitry in unpredictable ways, because of the incredibly complex interactivity of that circuitry.
Another way would be to build a self-learning robot, with the ability to access and process huge amounts of information until it hit whatever threshold might exist to spark conscious experience. We'd have no way of predicting what it would be like to be such a differently 'evolved' mind. To assume it could even conceptualise itself as a 'self', a being existing independently of the information it processes, would be a guess. Anthropomorphising such a being would be a mistake, and we might not even have the language or concepts to understand what it would be like. Nagel points out that what it is like to be a bat with sonar is unknowable to us, and here the difficulty of comparison might be beyond conceivability, unless we somehow programme in behaviours we recognise as 'altruistic', 'willed', etc. It would be a step into the dark with no access to a light switch.
Transhumanism might be another way to go. You can imagine replacing parts of the brain's circuitry with enhanced silicon parts, perhaps even the whole brain. And if the lights stayed on, you'd have an AI with a human-like mind.
But again, remember these scenarios make the assumption that simply mimicking substrate-independent functionality (complex, interconnected information processing) would provide the necessary and sufficient conditions for consciousness. We don't know if there's something about organic electro-chemical cellular brains which is necessary for consciousness, because we don't understand the mind-body relationship. For example, Penrose and Hameroff's Orch OR theory suggests microtubules in neurons play a key role, whereas Tononi and Koch's IIT suggests the information processing function is sufficient (possibly implying a panpsychism in which current computers, toasters, daffodils, rocks and particles already have some form of consciousness, and we just don't recognise it because it's so dissimilar to our own).
Which, if any, are on the right track? Nobody knows. The mind-body relationship has implications for the most fundamental nature of reality, and anybody who thinks they do know doesn't grasp Chalmers' Hard Problem. We don't even know enough to reliably test an AI for consciousness, or even to know whether each other are conscious - it's all inference from similarity when you get down to it.
We can divide the self into two—what we perceive, and what we are.
In this sentence, “what we perceive” can be equated to Qualia, while “what we are” can be equated to the sense of self or self-awareness.
My assumption is that altruism either originates from, or can be strengthened by, the practice of equating “we” with “what we perceive”, since “what we perceive” is the external world in translated form, as Qualia. And that translation does not change the fact that our Qualia are inherently of the external world.
That is to say, what if we “made” a brain that inherently perceives “itself” as whatever it perceives?
Prior to this, I may have mentioned briefly that self-awareness can be boiled down to an object perceiving itself.
So, what if we could make this theoretical artificial brain become aware of “itself” by becoming aware of its external world? I suppose that in order to do that, the boundary between the artificial brain and the external world would first have to be weakened or even broken down. How could we do this to any kind of object to begin with?
I have said before that, depending on the perspective, an object and its external world can be perceived as one and the same. But this particular perspective isn’t all that subjective. It is more an objective way of classifying things by their objective features. It is just a matter of seeing the object and its external world as one group of atoms, which they are, as long as we define this group of atoms to be “all atoms in existence”. In this case, the objective feature that defines the group of atoms is simply “atoms that exist”.
Thus we can embrace the fact that an object and its external world are one and the same whenever they are classified the same way.
In order for this artificial brain to be “one” with the external world while it becomes aware of itself, so that it equates itself with the external world, the brain must become self-aware through a series of physical actions that are CAUSED by the very fact that the brain and the external world can be classified as the same existence. This is because, physically, the brain is one with the external world simply by existing inside that world. Now all the brain has to do is replicate that dynamic within its own mind, which would be possible if its mind were created as a product of that dynamic to begin with.
Though as you can see, all of this is very theoretical and abstract. Just think of it as food for thought.
We perceive gray and argue about whether it's black or white.