Gertie wrote: ↑September 10th, 2020, 9:47 am
OK. So what does it mean to say neurons, chemicals, etc present that model they've produced to themselves?
I don't think I said (quite) that. I said that brains create a virtual model of the organism of which they are a part (including the brain itself), and of the environment in which the organism finds itself. That model becomes the subjective "me" and the external world as perceived.
The upshot here, important for AI, is that any system which can create a dynamic, virtual model of itself and its environment, constantly updated in real time, and choose its actions based on scenarios run in the model, will be "conscious."
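That architecture - a self/world model updated in real time, with actions chosen by running scenarios in the model rather than in the world - can be sketched as a toy loop. Everything here (the class, the one-variable "model", the distance-to-goal scoring) is a hypothetical illustration of the idea, not a claim about how brains or any real AI system implement it:

```python
class SelfModelingAgent:
    """Toy agent that keeps a virtual model of itself and its world,
    updates it from observations, and picks actions by simulating
    candidate scenarios in the model before acting."""

    def __init__(self, actions):
        self.actions = actions
        self.model = {"position": 0}  # minimal stand-in for a self/world model

    def update_model(self, observation):
        # Keep the virtual model synchronized with incoming data in real time.
        self.model["position"] = observation

    def simulate(self, action):
        # Run a scenario inside the model WITHOUT acting in the world.
        return self.model["position"] + action

    def choose_action(self, goal):
        # Prefer the action whose simulated outcome lands nearest the goal.
        return min(self.actions, key=lambda a: abs(self.simulate(a) - goal))


agent = SelfModelingAgent(actions=[-1, 0, 1])
agent.update_model(observation=3)
print(agent.choose_action(goal=5))  # simulated outcomes 2, 3, 4 -> picks 1
```

The point of the sketch is only the control structure: perception updates the model, and deliberation happens offline inside it. Whether reproducing that structure suffices for experience is exactly what the rest of the exchange disputes.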
Well that would depend on whether that recreates the necessary and sufficient conditions for experiential states to manifest, and while we know brains have them, we don't know what those conditions are. They might be substrate dependent (see for example https://en.wikipedia.org/wiki/Orchestra ... %20neurons. ).
Heh. I've read Penrose's Emperor's New Mind. A thought-provoking book, but the theory is so speculative and so dependent upon controversial quantum-theoretical phenomena that it is not likely to spur much interest any time soon. It can't be ruled out, of course, but the solution is probably much simpler.
Right. And when Dennett says we have to talk about consciousness in functional terms, he's saying he can't explain it any other way. And I think that's because of what Chalmers calls the Hard Problem, which Dennett denies exists. Or "dissolves" - which I suppose it does if you ignore it. How can you hold materialism, an ontological account rooted in matter and the smaller bits of matter it's reducible to, and just ignore the biggest problem it raises regarding experience...
I agree. That "Hard Problem" is real, but the solution is (fairly) simple, and does not require dualism or mysticism. At the same time, some aspects of it will be permanently inexplicable --- even if we invent an AI system that passes the Turing test.
I don't find the functional approach to phenomenal consciousness satisfactory. It might or might not work to produce an experiencing machine, but if it does, it will be by imitating certain functional features of a known experiencing system (brains), not by explaining experience in the way reductionism might. Hence the problem of how to test AI for phenomenal experience - we won't know whether reproducing that model-making function has captured the necessary and sufficient conditions for experiencing. We might only have created a machine which is very good at mimicking experiential states, but is incapable of genuinely understanding and correctly answering questions about feelings, thinking, seeing, etc. We should still definitely try it to see what happens, of course - it's a possible practical way forward.
You have to keep in mind that those questions you would ask of the "experience machine" apply just as well to humans. I can only know that you are a conscious creature, a "thinking machine," via your behavior. I have no more access to your "inner world" than I would have to that machine's. That is just the nature of the beast - the subjective experience of a conscious system, biological or electronic, will be intrinsically, impenetrably private. We can only impute inner phenomena to it by inference from its behavior.