Gertie wrote: ↑December 15th, 2024, 10:28 am
Obviously brains interact with the rest of the body, which in turn interacts with the environment in unimaginably complex ways. We could say that is how consciousness works: you need the entire biological body and everything it interacts with to create a conscious entity. But that gets you nowhere nearer understanding why or how. So some reductionism is a sensible way forward, until you hopefully arrive at the necessary and sufficient conditions for consciousness.

Some reductionism, maybe; any reductionism, not really.
And on what basis, by what criteria, would you choose which reductions could and could not capture the necessary and sufficient conditions for consciousness?
A systemic, holistic approach is always necessary to understand complex systems, especially if you are trying to replicate those systems. The brain is part of the nervous system, so you cannot dismiss the rest of the nervous system to tackle the problem of consciousness. You cannot reduce consciousness to the work of neurons when there are other things going on, and when there are neurons working in processes not related to consciousness.
The thing is, even if we understood every aspect of how a whole physical human body works and how it physically interacts with the environment, Physicalism (currently at least) still wouldn't be able to explain conscious experience. Phenomenal experience wouldn't even be part of that physicalist description (see Levine's Explanatory Gap).
So we have no idea what the necessary and sufficient conditions for conscious experience are, or if rocks and toasters and coral have it. Or how far we can reduce the components or interactions in order to retain the necessary and sufficient conditions for conscious experience.
Gertie wrote: ↑December 15th, 2024, 10:28 am
The focus is on neurons because of the correlation between specific brain states and specific experiential states. Probably the biggest clue we've discovered in trying to understand how consciousness manifests.

But neurons are not only in the brain, they are all over the nervous system. That includes the parasympathetic nervous system, which controls unconscious processes. Experiential states are body states.

"Experiential states are body states" is a monist Physicalist theory of mind, not established fact. It could be right. But it still doesn't tell us whether or not other substrates are capable of conscious experience. What we do know via human reports is that specific neural interactions in humans manifest specific experiential states. Hence neural correlation is probably our biggest clue in narrowing down the necessary and sufficient conditions.
Gertie wrote: ↑December 15th, 2024, 10:28 am
It seems obvious that if we don't know how consciousness is naturally produced, we can't know how to produce it artificially.

Right. It's the fact that we don't know the necessary and sufficient conditions for consciousness which results in us not knowing if AI can be conscious. And we have to rely on similarities to humans to guess whether a system can be conscious. (E.g. chimps probably are, but it's less likely coral is.)
It is often claimed that neural networks are the sufficient and necessary condition, the key model of how the brain works. If we can find a neural network in corals, but, comparing them to ourselves and making our best guess, we conclude that corals are less likely to be conscious, then it follows that neural networks are likely not sufficient to produce consciousness. Generative AI and LLMs are entirely based on simulated neural networks, so we can also be pretty confident that AI does not have the sufficient conditions to be conscious, no matter how sophisticated the simulated neural network.

A degree of sophistication and complexity might also be necessary. Or coral might be conscious. We assume it's not, only because of its physical and behavioural dissimilarity to humans, who we believe are conscious. We believe humans are conscious because we are ourselves, and other humans are much like us physically and behaviourally, and tell us they are too. That's how ignorant we are. That's as far as Physicalism has gotten us, perhaps as far as it can get us (see Chalmers' Hard Problem).

That's why scientists latch on to clues like neural correlation, and the incredibly complex nature of human brains.
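For concreteness, here is a minimal sketch (Python, with toy weights invented purely for illustration) of what a "simulated neural network" amounts to under the hood: weighted sums pushed through squashing functions, i.e. pure arithmetic rather than anything biophysical.

import math

# A "simulated neuron" is just arithmetic: a weighted sum of inputs
# pushed through a squashing (sigmoid) function. Nothing biophysical
# -- no membranes, no neurotransmitters -- survives the simulation.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # squashed into the range (0, 1)

# A "network" is just layers of such units feeding into each other.
def layer(inputs, units):  # units: list of (weights, bias) pairs
    return [neuron(inputs, w, b) for w, b in units]

# Toy example: 3 inputs -> 2 hidden units -> 1 output (weights made up).
hidden = layer([0.5, -0.2, 0.9], [([0.1, 0.4, -0.3], 0.0),
                                  ([-0.6, 0.2, 0.8], 0.1)])
output = layer(hidden, [([1.2, -0.7], -0.2)])
print(output)  # one number between 0 and 1

Scaled up to billions of weights and trained on text, this is essentially what GenAI and LLMs are; whether such arithmetic could ever meet the necessary and sufficient conditions for consciousness is precisely what's in dispute above.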
Gertie wrote: ↑December 15th, 2024, 10:28 am
It might be that any cell equipped in other ways than axons and dendrites could do what neurons functionally do (exchanging neurotransmitters) when interacting in … to manifest conscious experience. It might even be that neurotransmitters aren't necessary, any matter would work, because it's the nature of the interactions themselves which are necessary and sufficient.

You appear to be pointing to neural connections, or any sort of network doing the function of a neural network, but I have already addressed the issue of neural networks. They are not sufficient to produce consciousness, although most likely necessary. It's simply not true that the nature of the interactions between neurons is what gives the necessary and sufficient conditions for consciousness.

See above.
Gertie wrote: ↑December 15th, 2024, 10:28 am
The complexity of the brain hints that complex interactions are relevant, maybe even enough. Whereas neural correlation hints that there's something special about neurons which is necessary, in certain configurations.

The brain of the fruit fly has just been mapped: 140,000 neurons and 50 million connections. I don't know if that's complex enough, but the question is: is the fruit fly conscious? Because if not, there's more to consciousness than neurons. Why then insist on connectionism to explain consciousness? The answer is: because you can then equate mind to computers and keep nurturing the paradigm of the computational theory of mind.
(I told you already that I don't find the computing analogy helpful, and why. Per Physicalism, 'computing information' can only be an abstract metaphor. Unless we live in a universe where 'information' is an existing 'thing in itself', which is not Physicalism as it's generally understood.)
Neural networks comprise matter in motion. We assume that a dead brain isn't conscious, so the nature of the configurations seems to be necessary too. If it's the case that coral and fruit flies aren't conscious, then it's likely that more complexity is required, which might include many of the physical features that much more complex humans have, integrating so many more complex neural subsystems.
Maybe as more complex and mobile species evolved, integrating neural subsystems and prioritising which neural interactions are required from moment to moment might hit some level of intensity/complexity which is the fuel for consciousness.
These things are worth considering and pursuing, as hypotheses.
Whereas simply saying it takes a whole human interacting with her environment to manifest conscious experience is both unhelpful and unlikely imo.
So how do you justify such a 'holistic' position in the face of our ignorance? How do you know AI can't be conscious?
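For a sense of the scale gap behind the fruit-fly figures quoted above, here is a rough back-of-envelope comparison. The human numbers are commonly cited ballpark estimates from the neuroscience literature (roughly 86 billion neurons, on the order of 100 trillion synapses), not figures from this exchange.

fly_neurons = 140_000                  # figures quoted above
fly_connections = 50_000_000

human_neurons = 86_000_000_000         # ~86 billion (ballpark estimate)
human_synapses = 100_000_000_000_000   # ~100 trillion (order of magnitude)

print(f"neuron ratio:     ~{human_neurons / fly_neurons:,.0f}x")       # ~614,286x
print(f"connection ratio: ~{human_synapses / fly_connections:,.0f}x")  # ~2,000,000x

If some complexity threshold does matter, humans sit roughly six orders of magnitude beyond the fly on both counts, which is at least consistent with the "more complexity required" hypothesis.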
Gertie wrote: ↑December 15th, 2024, 10:28 am
Nobody knows. But a way to narrow down the necessary and sufficient conditions for consciousness is to build a machine using a different substrate to mimic the complex configurations of human brains. Try that similarity out. Then try to come up with a way to test if it has the necessary and sufficient conditions (noting computers can already pass the Turing Test).

It would be wacky indeed trying to build a machine to replicate something when you don't understand how it works in the first place.

Really? I could build a mouse-trap without understanding its physics. This isn't a wacky approach.

But that has not been the approach anyway. Ever since Turing, the biophysics of consciousness is irrelevant; what matters is how you can produce something that resembles the observable behavior of conscious (or intelligent) beings. The best shot was any algorithmic process implemented through a machine (from analog to digital). If you pass that (the Turing test), then you are conscious (or intelligent). This theoretical framework, however, has been shown to be flawed. No GenAI or LLM has one bit of consciousness. The hope that so-called "scaling laws" would demonstrate that from bigger computational power consciousness would emerge is starting to fade away. GenAI and LLMs have been hitting "the wall", as was predicted by a few skeptics a couple of years ago. Computation is simply not the path to consciousness.

Maybe. You assert something you can't know. In the face of our ignorance the most reasonable approach is to mimic what we practically can, and see what happens.