Lagayascienza wrote: November 18th, 2024, 9:23 pm
Count Lucanor, further to your question about spiders:
If what spiders do were replicated in an artificial substrate, I don’t think AGI would have been achieved in that substrate. For AGI, I think you probably need conscious self-awareness.
Sentience is one thing. Conscious awareness is another. If sentience is the ability to experience feelings and sensations, then all animals must have some degree of sentience or they would be unable to negotiate their world.
However, sentience does not necessarily imply higher cognitive functions such as self-awareness, reasoning, or complex thought processes. If general intelligence (GI) is what humans have, and if our GI requires conscious awareness, reasoning, and complex thought processes, then I’d say that spiders do not have GI. And I think the reason they do not have it is that they lack a neocortex. Therefore, for AGI, I think what goes on in the neocortex of our brains will need to be emulated in an artificial substrate.
That substrate won’t have to be a replica of a neocortex, but it will have to do what a neocortex does. So the question becomes: can the processes that occur in the neocortex be emulated to the requisite degree in an artificial substrate? I think there is reason to believe they can, and so I think AGI is possible.
Back to definitions. Let’s take a look at the different possibilities:
Count Lucanor wrote:A. Intelligence is restricted to cognitive functions made possible by the existence of the human neocortex. No spectrum to consider, all other neocortical or non-neocortical functions are not intelligence.
Intelligence obviously exists on a spectrum from the least to the most intelligent. A nematode is less intelligent than a spider, which is less intelligent than a rat, which is less intelligent than a monkey, which is less intelligent than a human. However, it is the neocortex in mammals, and particularly the relatively huge neocortex in humans, that puts humans at the most intelligent end of the spectrum. The neocortex can do things that brains without a neocortex cannot do.
Count Lucanor wrote:B. Intelligence is restricted to cognitive functions made possible by the existence of the neocortex, but there’s a spectrum across the whole variety of species within mammals. Humans represent the higher end of that spectrum, because of having the largest neocortex.
A neocortex alone would not produce our intelligence. Brains evolved gradually, in a layer-upon-layer fashion, and whatever intelligence an animal has is a whole-brain thing. Or, more accurately, a whole-neural-network thing. Different animals have different neural networks, and these differences are reflected in the spectrum from minimally intelligent to most intelligent. A spider doesn’t have a neocortex, but it has some level of sentience and intelligence: it can learn to a limited extent, negotiate its environment successfully, and behave in ways that enable it to survive.
Count Lucanor wrote:C. Intelligence is restricted to cognitive functions made possible by the existence of the brain, regardless of the existence of a neocortex, but there’s a spectrum across the whole variety of species, that includes insects, reptiles, birds, fish, cephalopods, etc. Humans represent the higher end of that spectrum, because of having the largest neocortex.
Yes. I think this is probably the most accurate account.
Count Lucanor wrote:D. Intelligence is restricted to the intrinsic ability of living organisms to exhibit autonomous behavior to navigate the environment and procure themselves the means of survival, regardless of having a brain or not.
Yes. But this does depend on having some sort of neural network.
Count Lucanor wrote:Intelligence is then associated with agency and sometimes, with sentience. Every species has the organs and functions necessary for that purpose and they are all intelligent in relation to that ability, and it may be argued that there’s a spectrum, but also that it is only variation without lower and higher ends.
Without at least a rudimentary neural network there can be no sentience, agency or intelligence. And agency and sentience are not enough for the level of intelligence we see at the higher end of the spectrum. An amoeba has agency and sentience. It has the ability to move around in pursuit of food, and it displays evidence of associative conditioned behavior (De la Fuente, I.M., Bringas, C., Malaina, I. et al., "Evidence of conditioned behavior in amoebae", Nat Commun 10, 3690 (2019)). Even the neural network in an amputated frog’s leg remains sentient in this minimal sense: it can be stimulated so that the muscles contract, and stimulation can produce conditioned behavior in the leg. However, an amoeba and an amputated frog’s leg do not have consciousness, and they have only a few of the building blocks of intelligence.
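To illustrate how little computational machinery conditioning actually requires, here is a toy sketch in Python of the Rescorla–Wagner learning rule, a standard textbook model of associative learning. To be clear, this is not the model used in the amoeba study; it is only an illustration that a conditioned association can be built up by a very simple update rule.

```python
# Toy Rescorla-Wagner update: the associative strength v between a
# stimulus and a response grows with each pairing, in proportion to
# how "surprising" the outcome still is. Purely illustrative values.

def condition(trials: int, alpha: float = 0.3, reward: float = 1.0) -> list[float]:
    """Return the associative strength after each pairing trial."""
    v = 0.0                        # no association to begin with
    history = []
    for _ in range(trials):
        v += alpha * (reward - v)  # learn in proportion to surprise
        history.append(round(v, 3))
    return history

print(condition(6))
# [0.3, 0.51, 0.657, 0.76, 0.832, 0.882] -- association climbs toward 1.0
```

A system implementing nothing more than this would show conditioned behavior, which is why I say such behavior supplies only a few of the building blocks of intelligence.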
The current large AIs also exhibit some components of intelligence, but that is not enough for AGI. Like simple life forms, a Roomba (an autonomous vacuum cleaner) can exhibit autonomous behavior, navigate the environment, and procure itself the means of survival (electricity for its battery) without having an organic brain. With its simple, artificial neural network it senses objects in its path, modifies its route around such obstructions, senses when its battery is running low, and seeks out the dock where it can replenish its power. But it is limited in what it can do, and it is inflexible: it cannot learn new things. New things would need to be programmed into it by humans. It does not have AGI. To learn on its own it would need a much more sophisticated neural network. However, the newer AIs are capable of learning.
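The Roomba’s whole "mind" can be caricatured as a fixed sense-act loop. Here is a minimal sketch in Python; all the sensor and actuator names are hypothetical stand-ins, not any real robot’s API. The point is that every rule is authored in advance by a human, and the robot never acquires a new one on its own.

```python
# A caricature of a reactive control loop: fixed, human-authored rules.
# All methods on `robot` are hypothetical placeholders for illustration.

LOW_BATTERY = 0.15  # recharge when below 15% (arbitrary threshold)

def control_step(robot) -> None:
    if robot.battery_level() < LOW_BATTERY:
        robot.navigate_to_dock()         # "procure the means of survival"
    elif robot.obstacle_ahead():
        robot.turn_away_from_obstacle()  # modify route around an obstruction
    else:
        robot.move_forward_and_clean()   # default behavior

# The robot's entire behavioral repertoire is just this loop, repeated:
#     while powered_on:
#         control_step(robot)
```

Nothing in that loop ever changes in response to experience, which is exactly the inflexibility I mean.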
Count Lucanor wrote:It is also possible and most likely, however, that people think that even if some of these categories exclude the others when identifying intelligence, the ones excluded are nevertheless the base from which intelligence emerges, in other words, are a necessary stage of development towards true intelligence. Basically then, any living form, whether it is an amoeba, a lizard or a chimpanzee, displays in its behavior a subvenient property of intelligence, even if it is not, strictly speaking, intelligent.
Yes, I think that’s right. Simple organisms have a minimal level of sentience and intelligence. And I think we can carry that over to very simple AIs like the Roomba. Large complex AIs such as the LLMs display a much higher level of artificial intelligence. (Although they do not yet have AGI.)
Count Lucanor wrote:Now, what is it that AI engineers have tried for decades to emulate? What they should try in the future? I believe AI has been so far about trying to achieve intelligence understood as it is described in A (human intelligence), but under two premises: one, that whenever they managed to emulate processes of living forms, they would be already on some stage of development subvenient to intelligence, out of which it eventually would emerge. That would explain the need for extending the computational (algorithmic) metaphor to basically every living process. Isn’t that interesting? Computation is the second assumption. So, it is presumed that if you find out how the spider does what it does, you have unlocked one key to intelligence, but of course, always in terms of computation.
Yes, I think attempts to achieve AGI have so far been unsuccessful, and I think that is because we need a greater understanding of how organic neural networks do what they do. I don’t have a problem with the term “computation”. I believe that computation is what both organic and artificial neural networks do. But the organic neural network in a human is much more powerful than current AI. Artificial neural networks are not yet capable of doing everything the human brain does; in particular, they do not produce consciousness. They can do a limited range of things, sometimes much better than our brains do, but they do not produce the full suite of processes needed for consciousness and AGI. However, they should be able to do this eventually, once we understand the brain and the processes that occur in it in more detail. Research is happening.
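To make concrete what I mean by computation, here is a toy artificial neuron in Python: a weighted sum of inputs passed through a nonlinearity. The weights are arbitrary values chosen purely for illustration, not a trained model. Current AIs stack billions of such units with learned weights; the open question is whether what the brain does differs from this in kind or only in scale and organization.

```python
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """One unit: weighted sum of inputs, squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # output between 0 and 1

# Chaining two such units already forms a tiny feedforward network.
# (Weights and inputs are arbitrary illustrative numbers.)
hidden = neuron([0.5, 0.9], [1.2, -0.7], 0.1)
output = neuron([hidden], [2.0], -1.0)
print(output)
```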
However, I don’t think copying brains in minuscule detail will be necessary for AGI, just as it wasn’t necessary to copy flapping, feathered wings in order to achieve heavier-than-air flight. It took mindless, goalless evolution billions of years to come up with birds’ wings, and even then the wings weren’t the best possible design. I think the same will turn out to be true for neural networks, and that we will one day be able to build neural networks that are super-intelligent.