Count Lucanor wrote: ↑October 29th, 2024, 6:42 pm
Lagayascienza wrote: ↑October 29th, 2024, 5:54 pm
Yes, I do endorse the computational theory of mind. And that is because I believe it has more going for it than any of the other theories.
At least we can agree on what we fundamentally disagree about. I think the case against the computational theory of mind has been made and, as far as I'm concerned, the issue is settled.
Lagayascienza wrote: ↑October 29th, 2024, 5:54 pm
I think that consciousness and mind will be explained by science as being a result of physiological states and processes.
I don’t know if it will ever be explained, but I’m sure they’ll keep trying, and that’s the only way to go.
Lagayascienza wrote: ↑October 29th, 2024, 5:54 pm
However, I do not "equate" current non-biological computers with biological computers.
Sorry if I didn’t make myself clear. I meant that you’re equating them in both being computers.
Lagayascienza wrote: ↑October 29th, 2024, 5:54 pm
As I said, the two do things differently, and non-biological computers are currently much more limited and are nowhere near being able to produce consciousness and mind. However, the processes the two types of computer perform are analogous: each does things its own way, but each gets the job done. For example, both can perform arithmetic operations effectively, but they do so differently.
As I already explained, they are not the same processes. You first understand mathematical relations and then do the operations with a learned syntax. The computer does not understand anything; it simply executes routines according to the parameters set by a programmer who does understand the mathematical syntax.
Lagayascienza wrote: ↑October 29th, 2024, 5:54 pm
If the computational theory of mind is correct and mind is a result of physiological processes and states, then I think analogous processes and states can be achieved in a non-biological substrate and computation, however it is performed, will eventually be able to produce consciousness and mind. Quantum computing will be a game changer.
If the condition is met. I don’t think it has been met.
Lagayascienza wrote: ↑October 29th, 2024, 5:54 pm
If the computational theory of mind is wrong, then consciousness and mind will remain forever mysterious. It would mean that consciousness and mind are the result of some sort of magic that can only occur in a biological substrate, and that analogous processes in a non-biological substrate won't do the job. But I am a materialist. I don't believe in magic.
No, that’s a false dilemma fallacy. There are many materialists, including myself, who do not endorse the computational theory of mind and yet remain loyal to the concept of brains as physical systems, without any need to resort to dualism. Searle is among those who reject the CTM with a well-argued case against it, and he certainly does not believe in magic either.
I agree with the physicist David Deutsch, who writes: “The very laws of physics imply that artificial intelligence must be possible.” He explains that Artificial General Intelligence (AGI) must be possible because of the universality of computation. “If [a computer] could run for long enough ... and had an unlimited supply of memory, its repertoire would jump from the tiny class of mathematical functions [as in a calculator] to the set of all computations that can possibly be performed by any physical object [including a biological brain]. That’s universality.”
Universality entails that “everything that the laws of physics require a physical object [such as a brain] to do, can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.” (And, perhaps, providing also that it has a sensate body with which to interact with the physical environment in which it is situated.)
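To make the universality claim concrete: a short general-purpose program can emulate any machine that can be specified as a table of rules, limited only by available time and memory. The Python sketch below is my own toy illustration, not Deutsch’s; the example machine (a hypothetical transition table that increments a binary number) simply stands in for whatever machine one cares to describe.

[code]
# A toy illustration of universality: one general-purpose routine that can emulate
# *any* Turing machine given as a transition table, limited only by time and tape.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Emulate a Turing machine: rules maps (state, symbol) -> (write, move, next_state)."""
    cells = dict(enumerate(tape))          # sparse tape that grows as needed
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine (hypothetical): increment a binary number.
# Scan right to the end of the input, then add 1, propagating carries leftwards.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_turing_machine(increment, "1011"))  # prints "1100" (11 + 1 = 12)
[/code]

Deutsch’s point is that this trick scales without limit in principle: whatever a physical object, including a brain, can be described as doing, some program on a general-purpose computer can emulate, given enough time and memory.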
And as Dreyfus says, “if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device”. Whilst we are nowhere near building machines of such complexity, if Deutsch, Dreyfus et al. are right, which I think they are, then artificial neural networks that produce consciousness must be possible.
For a non-biological machine to produce intelligence and behaviour comparable to that seen in humans, I think it would need to be conscious. There are several theories of consciousness, but none of them is anywhere near being the final word on the matter. The case against the computational theory of mind is far from having been made, and non-physical theories of consciousness have nothing at all going for them: they lack any supporting empirical evidence whatsoever and range from the incoherent to the supernatural. As a materialist, I think a physicalist, neural theory of consciousness is the most likely to be true. If it is, the question becomes: can networks of artificial neurons produce consciousness? As explained above, artificial neural networks of the requisite complexity must, in principle, be capable of being built and of producing AGI and consciousness.
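For clarity about what I mean by “networks of artificial neurons”: an artificial neuron is just a unit that takes a weighted sum of its inputs and passes it through a nonlinearity, and a network is many such units wired together in layers. The Python sketch below is only my own illustration of that idea (the weights are random and the network computes nothing useful); it makes no claim about how biological neurons work or about the scale that would be required.

[code]
import math
import random

def sigmoid(x):
    """A standard squashing nonlinearity."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias, squashed by the nonlinearity."""
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def layer(inputs, weight_rows, biases):
    """A layer is just many neurons reading the same inputs in parallel."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

random.seed(0)
# A toy network: 3 inputs -> 4 hidden neurons -> 1 output neuron, with random weights.
hidden_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
hidden_b = [random.uniform(-1, 1) for _ in range(4)]
output_w = [random.uniform(-1, 1) for _ in range(4)]

x = [0.2, -0.7, 1.5]                       # an arbitrary input vector
hidden = layer(x, hidden_w, hidden_b)      # hidden-layer activations
output = neuron(hidden, output_w, 0.0)     # the network's single output
print(output)
[/code]

The point is only that the basic unit is a simple, well-defined physical computation; the open question is what, if anything, emerges when enormous numbers of such units are organised the way neurons are organised in a brain.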
It’s hard to see how those who say that the brain is not a computer could be right. That functioning brains “compute” is beyond question. The very word “computer” was first used to refer to people whose job it was to compute, and they computed with their brains. Those who say the brain is not a computer, and that consciousness in a non-biological substrate is impossible, will never be able to say what consciousness is if it does not emerge from processes and states in brains, nor can they say why it is impossible to produce consciousness in artificial neural networks of the requisite complexity.
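On the sense in which human computers and electronic computers both “compute”: the schoolbook addition algorithm is the same sequence of rule-governed steps whether a clerk carries it out on paper or a machine executes it in code. A minimal Python sketch, purely for illustration (the function name and the example numbers are mine):

[code]
def add_decimal_strings(a: str, b: str) -> str:
    """Add two non-negative integers written as decimal strings, column by column
    with carries, exactly as the schoolbook procedure prescribes."""
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)                  # pad the shorter number with leading zeros
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):   # rightmost column first
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))             # write down the units digit
        carry = total // 10                        # carry the tens digit to the next column
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_decimal_strings("478", "256"))  # prints "734"
[/code]

Whether the rule-follower is a clerk or a circuit, the steps, and the result, are the same; the argument above is about whether that substrate-neutrality extends all the way up to consciousness.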
Even Searle admits that mind emerges from processes in physical brains. And if, as quoted above, the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then we ought to be able to reproduce the behaviour of the nervous system with some physical device. I think that’s right. And I think progress will be made as we identify the actual relationship between the machinery in our heads and consciousness.
There are various objections to the computational theory, but they can be countered. Take, for example, the so-called “Chinese Room” thought experiment, which has attained almost religious cult status among AGI “impossibilists”. One response, the “systems reply”, is that it is the system comprising the man, the room, the cards and so on, and not just the man, that would be doing the understanding. (In any case, even if it were possible to perform the experiment today, it would take millions of years to get an answer to a single simple question.)
There are other responses to Searle's overall argument, which is really just a version of the problem of other minds applied to machines: how can we determine whether they are conscious? Since it is difficult to decide whether other people are "actually" thinking (a difficulty that can lead to solipsism), we should not be surprised that it is difficult to answer the same question about machines.
Searle argues that the experience of consciousness cannot be detected by examining the behaviour of a machine, a human being or any other animal. However, that cannot be right. As Dennett points out, natural selection cannot preserve a feature of an animal that has no effect on the animal's behaviour, so consciousness as Searle understands it could not have been produced by natural selection. Therefore, either natural selection did not produce consciousness, or consciousness does affect behaviour, in which case "strong AI" is possible and consciousness can be detected in artificial neural networks by a suitably designed Turing test: by observing the behaviour, and taking seriously the self-reporting, of the complex artificial neural networks which will eventually be built.
In light of my materialism, and of what I have said above (and at the risk of being accused of posing a false dilemma), I am bound to say that, at present, I must accept either that consciousness is a result of computation or that it is the result of something “spooky”. I don’t believe the latter.
Any plausible account of consciousness will be a materialist, scientific account which shows that consciousness is a result of physiological states and processes. If materialism is true, how else could consciousness be explained except by physiological processes and states? And since I believe consciousness cannot be otherwise explained, I also believe these physical processes and states must eventually be capable of being reproduced in a non-biological substrate.