Gertie wrote: September 11th, 2020, 3:13 pm
If the model is a product of the brain, a separate thing like steam from a train, how is the brain 'aware' of its contents? Or how does the model 'present itself' to the brain? The model/product is what's made of the seeing and thinking experiencing stuff, right? So the physical brain isn't 'looking' at the experiential product like a little homunculus in a Cartesian theatre - Dennett rightly dismisses that. So how does the communication from the experiential model back to the model maker brain work, in order to take the appropriate physical action?
The model does not present itself to the brain; the brain creates the model, which encompasses the brain itself (imperfectly). Strictly speaking, the model is not part of the brain, any more than an electrical field is part of the generator that produces it. But it is not entirely separate from the brain either. There is a continuous feedback circuit between the model and the (non-conscious) portions of the brain. Those portions deliver information to the model in real time; it is processed there, possible responses are analyzed and evaluated, and the results are delivered back to the appropriate portions of the brain to undertake a task, control movement of the body, respond to a threat, and so on. At times the non-conscious portions of the brain can override the model and force an action not consciously chosen (such as when they force you to sleep). We can think of that model as Descartes' homunculus; indeed, the "Cartesian Theater" concept is regaining favor among some psychologists and neurologists. See:
https://www.psychologytoday.com/us/blog ... s-forgiven
I've also read the Crick/Koch paper mentioned in that article, and can probably find the link if you're interested.
Note that the existence of a dynamic, conceptual or "virtual" model of a system, generated by that system itself, nicely explains, or unpacks, the concept of "self-awareness." So we can say, tentatively, that any system capable of doing that is conscious.
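To make that feedback circuit concrete, here is a minimal toy sketch in Python of the loop described above: non-conscious machinery feeds a self-generated model in real time, candidate responses are evaluated within the model, the chosen result is sent back for execution, and an override path can force an action (like sleep) that the model never chose. Every name here (Brain, SelfModel, choose_response, and so on) is my own illustrative invention, not a claim about actual neural architecture.

```python
from dataclasses import dataclass, field


@dataclass
class SelfModel:
    """A dynamic 'virtual' model the system generates of itself and its world."""
    state: dict = field(default_factory=dict)

    def update(self, percept: dict) -> None:
        # Non-conscious portions deliver information to the model in real time.
        self.state.update(percept)

    def choose_response(self, candidates: list) -> str:
        # Possible responses are analyzed and evaluated inside the model.
        # Trivial evaluation: prefer fleeing if the model registers a threat.
        if self.state.get("threat", False):
            return "flee"
        return candidates[0]


class Brain:
    """Non-conscious machinery that generates, feeds, and obeys the model."""

    def __init__(self) -> None:
        self.model = SelfModel()  # the model is a product of the brain,
        self.fatigue = 0          # but not strictly a part of it

    def step(self, percept: dict) -> str:
        self.fatigue += 1
        # Non-conscious override: some actions are forced on the system
        # rather than chosen in the model (e.g., being forced to sleep).
        if self.fatigue > 3:
            self.fatigue = 0
            return "sleep"
        self.model.update(percept)                     # brain -> model
        action = self.model.choose_response(["idle"])  # evaluation in the model
        return action                                  # model -> brain -> body


brain = Brain()
for percept in [{"threat": False}, {"threat": True},
                {"threat": False}, {"threat": False}]:
    print(brain.step(percept))  # prints: idle, flee, idle, sleep
```

The point of the sketch is only the topology: the model is generated by the system, contains a representation of the system, and sits inside a two-way loop with the machinery that produced it.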
The point re multiple realisability stands, though: if you don't have an explanation that covers basics like necessary and sufficient conditions, how do you know you're not missing something necessary that is a feature of biological brains, their chemistry, and so on? Simply including the model maker in the model, and copying functional processes and dynamic, complex patterns of interaction, might not be enough.
How and when do we know what is enough? If an AI can pass the Turing test, do we need anything more?
You have to keep in mind that those questions you would ask of the "experience machine" apply just as well to humans. I can only know that you are a conscious creature, a "thinking machine," via your behavior. I have no more access to your "inner world" than I would have to that machine's. That is just the nature of the beast: the subjective experience of a conscious system, biological or electronic, will be intrinsically, impenetrably private. We can only impute inner phenomena to it by inference from its behavior.
Not only from behaviour, but also from self-reports and, crucially here, from inference by analogy.
I can assume that you're a conscious being not only from your observable behaviour and self-reports - the tests we can also hope to apply to AI - but also from analogy based on our physical similarity. We're made of the same observable stuff and processes, with some minor variations. So it's reasonable to assume that if I'm conscious, you are too.
Think about that. A dead person, or a brain-dead person, is also made of the same stuff, but is not conscious. I think we'd have to conclude that if a system can pass the Turing test and exhibit behaviors characteristic of known conscious creatures (us), even if through some sort of mechanical apparatus, then it, too, is conscious, and that the physical substrate of the system is irrelevant to that capacity.