Lagayascienza wrote: ↑November 8th, 2024, 1:31 am
Count Lucanor wrote: ↑November 7th, 2024, 10:00 am
Lagayascienza wrote: ↑November 6th, 2024, 6:24 pm
By “compute” I mean fundamental processes such as judging the distance between two objects or performing an arithmetic operation. You say brains cannot do this. They clearly do. And so do computers.
To compute is to perform mathematical and logical operations using formal rules that constitute a syntax. Humans compute; in fact, the first computers were teams of humans performing tedious mathematical calculations. The syntactic rules behind mathematical and logical operations, though, are a human artifice, and I find it very unlikely that they are hardwired into human brains as if we were natural computers. Our intuitions and perceptions about spatial and temporal relations must be of some other nature, not intrinsically mathematical. I don’t think spiders or birds calculate distances with an internal mathematical language, either. OTOH, computers can compute only because humans have transposed their mathematical language (formal logic is a sort of mathematical language) to the machines. That’s why, when the metaphor of the computational mind is used, Searle warns about the homunculus fallacy: as if a little man inside our bodies were consciously doing the math.
I don’t find it hard to believe that organisms are hardwired with at least some of their ability to “compute”. Even an unschooled child's neural network can register differences in quantity and number: “Whaaa! How come he got more pieces of candy than me?!!!” Primates, corvids and other animals can do this, too. Evolution is amazing. And if these abilities were not the result of evolution, then how did they come about?
That is not what it means to compute. And no, children do not know math before school, nor do primates or any other animal before they are taught. They intuitively understand the relations, and only later are they taught the syntax to add, subtract, etc. Machines, OTOH, are unconscious: they can’t understand the relations, or anything at all; they just perform the syntactic procedures programmed by humans.
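To make the syntax-versus-understanding point concrete, here is a toy sketch of my own (not from either poster): a program that "adds" unary numerals purely by string rewriting. The rule is entirely syntactic; the program attaches no meaning to the strokes it shuffles, which is exactly the sense in which a machine "just performs procedures".

```python
def unary_add(a: str, b: str) -> str:
    """'Add' two unary numerals, e.g. '|||' and '||' -> '|||||'.

    The rule is pure symbol manipulation: concatenate the strokes.
    Nothing here represents quantity to the machine; the semantics
    (that '|||||' means five) exists only for the human reader.
    """
    return a + b

print(unary_add("|||", "||"))  # prints |||||
```

Whether such purely syntactic shuffling could ever amount to understanding is, of course, the very question the thread is debating.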
Lagayascienza wrote: ↑November 8th, 2024, 1:31 am
I think that as we develop from infancy we supplement what is hard-wired by evolution with a model of the world built through experience and learning and stored in memory. Accessing this learning, and registering differences from our mental model, are fundamental to the production of intelligence and conscious awareness. There is no need for a homunculus. We just need our hard-wired abilities in logic, our ability to learn, and the ability of our neural network to access memory and register differences from our mental model. I think some combination like this creates intelligence and consciousness. The ability to imagine difference is probably also important. I think that something like this model of intelligence and consciousness will turn out to be correct.
Our cognitive abilities are not computational abilities. The doctrine that they are is what computationalists advance. If you endorse computationalism, as you say you do, you’ll take that for granted. I, along with many others, do not. Why? I think I have explained the reasons in several posts.
Lagayascienza wrote: ↑November 8th, 2024, 1:31 am
If we are to build machines that are intelligent in the way that organisms are intelligent, and perhaps even conscious, then the machines we build will have to do things in a way that is similar to the way in which organisms do them. They will need to be constructed on similar principles. “Digitality” won’t do the trick, IMO. It will have to be something more like the processes that occur in organic neural networks.
I generally agree with that, but the point is: no one is currently taking that path in research. Researchers are focused on the possibilities of digital computers, thinking that algorithms are all that is required.
Lagayascienza wrote: ↑November 8th, 2024, 1:31 am
Even if organic neural networks do not compute digitally but rely on evolutionary hard-wiring, learning, and a memory-based model of the world, something analogous to computation occurs such that a judgement of distance, or an answer to an arithmetic problem, is produced. And there is no reason in principle why this process cannot be reproduced in an artificial substrate once we understand more about how organic neural networks do it. It may even need to be a synthesis of organic and inorganic architecture, but it will be possible.
There’s nothing that demonstrates or even suggests that judging distances is the result of an internal, hardwired, mathematical operation. So I would not call that “computing”.
Lagayascienza wrote: ↑November 6th, 2024, 6:24 pm
I do not say that this alone makes our current computers intelligent or conscious. For that, computers would need to be more like brains. The key to making them more like brains would be first to discover in more detail how brains do what they do, and then to build a machine that does what the brain does. That must be possible in principle, because intelligence and consciousness do not happen by magic. They are the result of processes which occur in physical brains. There is no reason why, in principle, these processes could not occur in non-organic brains.
There is no reason, either, to believe that, in principle, we can make those processes occur in non-organic devices. First we would need to figure out how intelligence actually works in living beings, and then be technically able to replicate it in other entities, a problem that most likely includes replicating the mechanisms of life itself.
Lagayascienza wrote: ↑November 6th, 2024, 6:24 pm
Count Lucanor wrote:The problem right now seems to be that, trapped in the hype of the Turing-based computational metaphor, engineers are looking for the equivalent of flapping wings in the form of algorithms. Machines will not be intelligent that way, although as any technology, they will be instrumental to humans for implementing processes that surpass innate human abilities.
I am not trapped in a Turing-based metaphor. And nor am I obsessed with “digitality”.
But that’s irrelevant to the issue. I clearly referred to engineers in charge of developing technologies.
Lagayascienza wrote: ↑November 6th, 2024, 6:24 pm
The researchers I've been reading are searching for a new metaphor, and that search is now beginning to inform the literature. Anything that does not breach the laws of physics is possible. Intelligence and consciousness in organic neural networks happen in accord with the laws of physics and not by magic. Therefore, building machines with these capabilities, in accord with the laws of physics, must be possible in principle. I think that intelligence and consciousness in a non-organic substrate will differ in some ways from those embodied in natural organisms, but that difference will not make artificial intelligence and consciousness impossible. The “impossibilists” insist, unreasonably IMO, that it is not possible to construct intelligent, conscious machines, just as creationists insist that evolution is impossible. I don’t agree with either of them. Maybe they don't want it to be possible. But that doesn't make it impossible.
I have the feeling that “impossibilists” actually refers to realists, as opposed to utopian idealists. What counts for me are the facts of the matter, and we can only talk with a good level of certainty about what we have in front of us. Of course we can always speculate about future technological developments, but in an open-future scenario no speculation is any better than another. There doesn’t seem to be a “realist speculation” in the same sense that we can predict what will happen to our sun in a billion years.