Re: On the absurd hegemony of science
Posted: September 15th, 2020, 1:15 pm
GE
I think we're getting to repeating-ourselves / agree-to-differ time?
Gertie wrote: ↑
Right. So the fact that we humans create a model of the world which includes a model of our self within it has no apparent bearing on how experience arises. Far less complex experiencing animals probably don't create such a model, so it doesn't look like a necessary condition for mental experience. And if it's not, copying the creation of that 'model maker within the model' function won't make any difference to whether an AI can experience.

GE wrote: ↑ Yesterday, 1:20 pm
I'm not sure what would count as "intrinsically special," or why a system must have some intrinsically special (however understood) property to manifest consciousness.

True, I'm just making the point that there's nothing intrinsically special about a model which includes the model maker, which might lead to experiential states manifesting. Do you think there is?
GE wrote: ↑
I'm inclined to think of consciousness as a natural phenomenon that occurs predictably in complex dynamic systems of a certain type, analogously to the way a magnetic field appears around a wire carrying an electric current. It appears, or can, at a certain point when evolutionary pressures forge ever more complex organisms having ever more sophisticated tools for assuring their survival and propagation. Consciousness is a survival strategy (though how successful it will be in the long run remains to be seen).

Yeah, could be. It leaves you with the problem of not knowing if AI is the right type of wire.
Gertie wrote: ↑
To clarify, I don't dismiss behaviour - it is a major observable clue, and it would be daft to ignore it. You made the point that we have to assume other people have mental experience too, and I'm saying we have an extra clue re other people: they are made of the same stuff, with the same biological/chemical processes. That could be very significant; we don't know.

GE wrote: ↑
Yes, it is a clue, but it may be coincidental and thus superficial.

Maybe. But to assume the observable behaviour resulting from biological stuff and processes is less likely to be coincidental/superficial than the biological stuff and processes itself would be ****-backwards imo.

GE wrote: ↑
The only evidence we will ever have for its importance, or lack of it, is behavior.

Pragmatically perhaps, but that doesn't make it reliable.
Gertie wrote: ↑
Look at it this way - why do we assume other humans have experiences like us?
- They are physically almost identical to us, and brain scans show similar responses to similar stimuli, which match verbal reports similar to ours.
- Their observable behaviour is experientially understandable to us, in that we can imagine behaving similarly in similar circumstances.
It's all about similarity. That's why the hope is that if we create an AI sufficiently similar to a human, it will somehow capture the necessary and sufficient conditions for experience.
But we can already create lots of things which have some behavioural similarities: machines can be programmed to mimic behaviours like avoiding obstacles, playing chess, building cars, 'communicating' with each other as we're doing now. We don't assume they have experience. If we could build a machine so good at mimicking some behaviours that we couldn't tell the difference, how would we know it's crossed some line into experiencing? And why would we believe similarity/mimicry of function and behaviour alone enables it to?
GE wrote: ↑
Many of the technologies we've devised were first observed as natural phenomena --- fire, electricity, flight, many others. We've learned to extract the physical principles involved in those phenomena and apply them artificially. E.g., we learned from birds that heavier-than-air objects may fly, but (at least after Icarus) did not assume feathers and muscles are necessary to enable it.
Good point. The unanswered question is: does that apply beyond physical technologies copying aspects of natural physical functions?
Gertie wrote: ↑
Whereas if we had an actual explanation which included the necessary and sufficient conditions, then we could test for those. We could make a consciousness-o-meter and not have to guess.

GE wrote: ↑
Well, that's the problem --- there can be no such meter, because phenomenal experience is inherently, impenetrably private. Behavior is the only evidence we will ever have, and if the behavior of an AI system is indistinguishable from that of a human, then it would only be stubbornness that deters us from attributing consciousness to it.

Not stubbornness. Just because it's the best we can do doesn't mean it's reliable. We might be forced to act as if it's reliable, but we should realise that's what we're doing.
Gertie wrote: ↑
It's OK to say we don't know.

GE wrote: ↑
Are we willing to say that about other people?

We don't know, but with other people we have the additional physical similarity, which turns the question around: if we're so similar physically, what difference could account for them not being conscious?
Gertie wrote: ↑
I just want a robot servant, is that too much to ask! But we should err on the side of caution: if there's enough evidence to think they have experiential states, they should in principle have commensurate moral consideration, probably including rights. (Just keep the off switch handy.)

GE wrote: ↑
Should we install such switches on humans too, at birth?

Only some. I have a list...