Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm
I agree with physicist David Deutsch who writes that, “The very laws of physics imply that artificial intelligence must be possible.” He explains that Artificial General Intelligence (AGI) must be possible because of the universality of computation. “If [a computer] could run for long enough ... and had an unlimited supply of memory, its repertoire would jump from the tiny class of mathematical functions [as in a calculator] to the set of all computations that can possibly be performed by any physical object [including a biological brain]. That’s universality.”
But again, that's Deutsch assuming that the mind is computational, and inferring from that assumption that, given sufficient computing power, an artificial mind will emerge. The problem is that the mind is not a computer, and no one has shown that it is.
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm
Universality entails that “everything that the laws of physics require a physical object [such as a brain] to do, can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.” (And, perhaps, providing also that it has a sensate body with which to interact with the physical environment in which it is situated.)
That's an argument Searle already dealt with. It must be noted that he is willing to concede that, in theory, any physical system that goes through a sequence of steps can be simulated on a computer (not only a digital computer, but an analog computer, or even a system of cranks and pulleys with cats and pigeons), as long as the states of the system can be represented in a syntactic structure, which in the case of digital computers is the 1s and 0s. But the fact that something can be represented syntactically, and thus simulated, does not mean that it actually works that way physically. In the words of Searle:
[...] syntax is not intrinsic to physics. The ascription of syntactical properties is always relative to an agent or observer who treats certain physical phenomena as syntactical
If everything is, ultimately, a digital machine that can be replicated with other machines, and if my brain is in that sense a digital machine implementing an algorithm in just the same way that my stomach or the Milky Way galaxy is, then what specifically accounts for intelligence in my brain? Saying that the brain, like everything else, is a machine reducible to computational operations does not by itself establish any fact about how brains operate.
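Searle's distinction between simulating a process and physically instantiating it can be made concrete with a toy sketch (my own illustration, not Searle's; the physical model and parameter values are arbitrary). A program that numerically simulates Newton's law of cooling manipulates bit patterns that we, the observers, interpret as temperatures; nothing in the hardware gets cooler.

```python
# Toy illustration of simulation vs. physical instantiation:
# Euler integration of Newton's law of cooling, dT/dt = -k(T - T_env).
# The program only manipulates symbols (floating-point bit patterns)
# that WE interpret as temperatures; no physical cooling occurs.

def simulate_cooling(t_initial, t_env, k, dt, steps):
    """Return the simulated temperature after `steps` Euler updates."""
    t = t_initial
    for _ in range(steps):
        t += -k * (t - t_env) * dt  # a purely syntactic update rule
    return t

# The simulated "temperature" converges toward the environment value,
# but only under our interpretation of the numbers as temperatures.
final = simulate_cooling(t_initial=100.0, t_env=20.0, k=0.1, dt=0.1, steps=1000)
print(round(final, 3))  # close to 20.0
```

The syntax (the update rule over symbols) is observer-relative in exactly Searle's sense: the same bit flips could just as well be interpreted as prices, voltages, or nothing at all.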
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm
And as Dreyfus says, “if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". Whilst we are nowhere near to building machines of such complexity, if Deutsch, Dreyfus et al are right, which I think they are, then artificial neural networks that produce consciousness must be possible.
Following the laws of physics does not entail following the laws of computation.
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm
It’s hard to see how those who say that the brain is not a computer could be right. That functioning brains “compute” is beyond question. The very word “computer” was first used to refer to people whose job it was to compute. And they computed with their brains. Those who say the brain is not a computer, and that consciousness in a non-biological substrate is impossible, will never be able to say what consciousness is if it does not emerge from processes and states in brains, and nor can they say why it is impossible to produce consciousness in artificial neural networks of the requisite complexity.
All calculators, analog or digital, compute. It is beyond question that they are not functioning brains. Therefore, the best that advocates of the computational theory of mind can argue is that some brain functions require computing (an assumption I would be willing to challenge), but even if that were conceded, it would not explain mind, or consciousness, at all. Notice also that the difference between conscious computing and unconscious computing implies that it is wrong to assume they are exactly the same process in people's brains. The people who used to compute manually were actually using language and visual tools, external to their brains, to do the task consciously, after having understood the meaning of mathematical relations.
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm
Even Searle admits that mind emerges from processes in physical brains: “if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". I think that’s right. And I think progress will be made as we identify the actual relationship between the machinery in our heads and consciousness.
Sure, he admits it, and I admit it too: if we ever find a system that replicates brain functions, it will be a system obeying the laws of physics. Is it possible? Theoretically, yes. Technically achievable? We don't know yet, because no non-computational system of that kind has been researched. All research is done with computational devices under the assumptions of the computational theory of mind. And since computational systems cannot solve it, we are stalemated.
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm
There are various objections to the computational theory. However, these objections can be countered. For example, the so-called “Chinese Room” thought experiment, which has attained almost religious cult status among AGI “impossibilists”, can be countered. One response to the “Chinese Room” has been that it is “the system” comprised of the man, the room, the cards etc, and not just the man, which would be doing the understanding, although, even if it were possible to perform the experiment today, it would take millions of years to get an answer to a single simple question.
A very poor argument, I must say. It does not address the main issue, which is that purely syntactical operations can produce actions resembling those a conscious agent would perform, without any actual agency or consciousness involved.
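That main issue can be shown with a toy rule-follower (a hypothetical sketch of my own, with a made-up rulebook; Searle's thought experiment is far richer): a lookup table that returns fluent replies to symbol strings it does not understand in any sense.

```python
# A minimal "Chinese Room": the program follows purely syntactic rules
# (input pattern -> output pattern) and can mimic a competent answerer
# with zero grasp of what the symbols mean. The rulebook is hypothetical.

RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",            # "How are you?" -> "I'm fine, thanks."
    "天空是什么颜色?": "天空是蓝色的.",    # "What colour is the sky?" -> "The sky is blue."
}

def room(symbols: str) -> str:
    """Return whatever response the rulebook pairs with the input symbols."""
    return RULEBOOK.get(symbols, "对不起, 我不明白.")  # default: "Sorry, I don't understand."

print(room("你好吗?"))  # a fluent reply, produced without any understanding
```

Scaling the table up (or replacing it with statistical pattern-matching) changes the performance, not the point: the mapping from questions to answers remains a syntactic operation with no agency or consciousness involved.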
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm
There are other responses to Searle's overall argument, which is really just a version of the problem of other minds, applied to machines. How can we determine whether they are conscious? Since it is difficult to decide if other people are "actually" thinking (which can lead to solipsism), we should not be surprised that it is difficult to answer the same question about machines.
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm
Searle argues that the experience of consciousness cannot be detected by examining the behavior of a machine, a human being or any other animal. However, that cannot be right because, as Dennett points out, natural selection cannot preserve a feature of an animal that has no effect on the behavior of the animal, and thus consciousness (as Searle understands it) cannot be produced by natural selection. Therefore, either natural selection did not produce consciousness, or "strong AI" is possible and consciousness can be detected in artificial neural networks by a suitably designed Turing test – that is, by observing the behaviour and by taking seriously the self-reporting of complex artificial neural networks which will, eventually, be built.
Again, this is the false dilemma fallacy. No, "artificial neural networks whose consciousness is detectable by a Turing test" is not the only path to explaining consciousness as a biological phenomenon.
Lagayascienza wrote: ↑October 31st, 2024, 10:34 pm
In light of my belief in materialism, and in light of what I have said above (and at the risk of being accused of posing a false dilemma) I am bound to say that, at present, I must accept either that consciousness is a result of computation, or that it is the result of something “spooky”. I don’t believe the latter.
Any plausible account of consciousness will be a materialist, scientific account which will show consciousness is a result of physiological states and processes. If materialism is true, then how else could consciousness be explained except by physiological processes and states? Since I believe consciousness cannot be otherwise explained, I also believe these physical processes and states must eventually be capable of being reproduced in a non-biological substrate.
It is beyond question that any solution to the problem of artificial intelligence will have to be produced with a physical system, within a materialist, scientific approach, but I find untenable the position that it can only be achieved through computational means, taking for granted that the biological mind is a digital computer. So materialism still holds true, even when we reject the current AI program as a candidate for achieving real AI.