I'm running out of steam on this.
So the fact that we humans create a model of the world which includes a model of our self within it has no apparent bearing on how experience arises. Far less complex experiencing animals probably don't create such a model. It doesn't look like a necessary condition for mental experience. And if it's not, copying the creation of that 'model maker within the model' function won't make any difference to whether an AI can experience.

Well, sure it has a bearing. I think there is pretty widespread agreement among modern philosophers (hardcore naive realists excepted) that the phenomenal world, the world we experience, is a conceptual model of a hypothetical external, "noumenal" world which we can never experience directly. That experienced world is constructed of impressions --- sensations, concepts, feelings, etc. --- that are intangible, subjective, and intrinsically private, but which somehow represent, and are elicited by, states of affairs in that presumed external world (which includes one's --- presumed --- physical body). Hence a creature which can create such a model will be conscious, by definition.
And I disagree that "less complex animals don't create such a model." I think we should assume that any animal with a nervous system complex enough to support one does create such a model. Amoebae? No. Vertebrates and even some insects? Yes --- probably. Honeybees' brains consist of about 1 million neurons --- more than enough to construct at least a rough conceptual model of their environment. And they exhibit behaviors and capabilities that not long ago were thought to be restricted to primates.
Read back, you've missed my original point. I'll repeat it. There's nothing special about a model which includes the model maker that makes it likely to be a necessary condition for experience. There's no reason to think an AI copying that model-maker-within-the-model feature will help enable it to experience.
A question which isn't answered is an open question. A theory which is empirically unconfirmable and unfalsifiable is called a hypothesis; it's necessarily speculative. It's a What If. Do you really want to pretend it isn't?

Yeah, could be. It leaves you with the problem of not knowing if AI is the right type of wire.

Well, that is the central issue here --- how will we ever know, other than by observing the system's behavior? Do you really want a theory that leaves that question permanently open --- that is empirically unconfirmable and unfalsifiable?
Maybe.

Maybe. But to assume the observable behaviour resulting from biological stuff and processes is less likely to be coincidental/superficial than the biological stuff and processes itself would be ****-backwards imo.

Well, that is not what I'm suggesting. I think that biological stuff, of a certain kind and arranged in certain ways, will produce consciousness. But also that non-biological stuff, or non-natural biological stuff, will also produce consciousness, when arranged in analogous ways.
And again, the only means we have, or will ever have (given what we do know about the problem), for deciding whether the biology is critical is by observing the system's behavior. You seem to be holding out for some future "transcendental" insight into this issue. But for now, and for the foreseeable future, behavior is all we have.

Just don't say behavioural tests are reliable.
A Theory of Consciousness which explained the necessary and sufficient conditions, which we could then test for.

Pragmatically perhaps, but that doesn't make it reliable.

What would?
Look at it this way - why do we assume other humans have experiences like us?
- They are physically almost identical, and brain scans show similar responses to similar stimuli, which match similar verbal reports to ours.
- Their observable behaviour is experientially understandable to us, in that we can imagine behaving similarly in similar circs.
It's all about similarity. That's why the hope is that if we create an AI sufficiently similar to a human, it will somehow capture the necessary and sufficient conditions for experience.

As pointed out before, your first similarity there is insufficient, and may be irrelevant.
It might be insufficient and irrelevant - you don't know.
The brain-dead person is also physically similar to us, but not conscious --- a judgment we make based on the lack of conscious behavior.

We make that judgement because experience, as we embodied humans experience it, is obviously dynamic, changing moment to moment. Like a steam train in motion, not like a bee which makes honey then goes off again about its bee business. The brain stops working when we die, all those biological electrochemical processes cease. The point is AIs don't have the same biological electrochemical processes.
And we can correlate brain scan information with perceptual phenomena only if it results in observable behavior. That is the only means we have of knowing --- inferring --- what perceptual phenomena are occurring (in anyone other than ourselves).

And our self-reports. What scans confirm is that some types of specific biological, electrochemical activity correlate with consistent self-reports of specific types of experience by biological humans. We then reasonably assume that certain types of biological electrochemical interactions possess the necessary and sufficient conditions for experience.
Not my point. My point, which I'm repeating over and over now, is that just because observed behaviour is the only available way of testing AI doesn't mean it's reliable. Because we don't know if the AI's substrate will capture the necessary and sufficient conditions.

Not stubbornness. Just because it's the best we can do doesn't mean it's reliable. We might be forced to act as if it's reliable, but we should realise that's what we're doing.

Still holding out for that transcendental insight, eh?
Anyway, I'm done with just repeating this same obvious point.
Why is it so hard to just say you don't know - nobody does?