
Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 21st, 2024, 7:19 am
by Pattern-chaser
Belinda wrote: November 20th, 2024, 7:12 am Can an AI act against all its instincts, knowledge, reason, and memories so as to commit an act entirely out of character, purely in order to prove it can do so?
Pattern-chaser wrote: November 20th, 2024, 10:18 am AI, current AI, has no "instincts" or "reason". It has "knowledge" and "memory" only in the sense that it has access to data, stored in databases. It has no "character" of any sort or form. So I cannot understand what you are asking.
Belinda wrote: November 20th, 2024, 12:38 pm I thought the machines could be programmed with data that approximates to instincts, reason, and memory, which taken together sum up the AI machine's programmed "character", as it were. I understand that AI machines can imitate an intelligent life form so closely that it's hard to tell them apart from living intelligence. I am suggesting a test that could tell an AI intelligence apart from a living intelligence.
What you have written is so specific, and involves a pretence of desired characteristics (as opposed to actually exhibiting these properties), that I am hesitant to say it isn't so. I don't *think* it is so, but someone like Steve3007, with actual present-day experience of AI development, would be able to offer a more satisfying reply.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 21st, 2024, 11:01 am
by Count Lucanor
Lagayascienza wrote: November 17th, 2024, 11:39 pm No. I think there is "computation" (broadly construed) happening in a spider's neural network, but I think any intelligence would be at the lower end of the spectrum. While a spider must build a model of its world, a model which must be housed in its neural network, and although a spider exhibits goal-seeking behaviours, there is little flexibility or learning ability compared to species with more complex brains. For AGI, I think there will need to be a neural network that emulates what goes on in the neocortex of more complex brains, and there will need to be some form of embodiment, as per Hawkins in his book.

If you have read his book, I'm wondering what you think of Hawkins' account of the structure and functioning of our brain and, in particular, of the neocortex?
The issue of what the neocortex is for goes to the heart of the ambiguity that persists around the term “intelligence”, even in the most technical or scientific circles. It suffers from the lack of a formal, universal definition, just the same as other key concepts such as agency, sentience, etc., so we get all tangled up in semantics. Hawkins, for example, when talking about what AI should be looking for, is apparently concerned only with a definition of intelligence restricted to what the human neocortex makes possible, even though in other parts of the book he recognizes intelligence in other animals. Some will argue that it is a matter of degree and that, as you said, there is a spectrum, and that we find some intelligence, or at least subvenient properties of intelligence, at the lower end of it, in proportion to the animal’s neocortex. But there are quite a lot of organisms without a neocortex, and we also know that completely brainless organisms can exhibit complex behavior, including what appears to be learning, so where is the intelligence housed there? And if intelligence is about brains computing, how do these brainless organisms compute?

More than semantics, the key issue is that AI technology has been reduced to a problem of computation, where the actual physics is somehow irrelevant. That has been so since the father of the discipline himself, Alan Turing, dismissed the issue entirely: what really happens inside a brain is unknowable and not that important, so the point of AI will be whether or not you can distinguish the outputs of a machine from those of an intelligent being (only a human, supposedly). Rather than looking at the neuroscience, it became all about hardware and software generating automated results that externally resemble those of a human.
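The Turing-style criterion described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not Turing's own formulation: the judge sees only text outputs and never the mechanism that produced them.

```python
import random

# A toy sketch of the imitation-game criterion: the judge receives only
# replies, never any information about what produced them. If the judge's
# guesses are no better than chance, the machine "passes" by this standard.

def judge(reply):
    """A toy judge guessing the source of a reply at random, standing in
    for a human who cannot tell the outputs apart."""
    return random.choice(["human", "machine"])

replies = ["Hello!", "I think, therefore I am.", "42."]
guesses = [judge(r) for r in replies]
# Only the guesses enter the evaluation; what happens inside the
# responder (brain or program) is deliberately ignored.
```

The point of the sketch is structural: the internal workings of the responder appear nowhere in the evaluation, which is exactly the dismissal of "what really happens inside a brain" described above.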

So here we are. It would require a revolution in the field and in the AI industry to turn to the kind of approach that Hawkins proposes. And that would bring us back to square one, without a clue as to how things would go from there.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 21st, 2024, 11:34 am
by Count Lucanor
Lagayascienza wrote: November 18th, 2024, 9:23 pm
Count Lucanor, further to your question about spiders:

If what spiders do were replicated in an artificial substrate, I don’t think AGI will have been achieved in that artificial substrate. For AGI, I think you probably need conscious self-awareness.

Sentience is one thing. Conscious awareness is another. If sentience is the ability to experience feelings and sensations, then all animals must have some degree of sentience or they would be unable to negotiate their world.

However, sentience may not necessarily imply higher cognitive functions such as self-awareness, reasoning, or complex thought processes. If general intelligence (GI) is what humans have, and if our GI requires conscious awareness, reasoning, and complex thought processes, then I’d say that spiders do not have GI. And I think the reason they do not have it is because they do not have a neocortex. Therefore, for AGI, I think what goes on in the neocortex of our brains will need to be emulated in an artificial substrate.

That substrate won’t have to be a replica of a neocortex, but it will have to do what a neocortex does. So the question becomes: can the processes that occur in the neocortex be emulated to the requisite degree in an artificial substrate? I think there is reason to think they can, and so I think AGI is possible.
Back to definitions. Let’s take a look at the different possibilities:

A. Intelligence is restricted to cognitive functions made possible by the existence of the human neocortex. No spectrum to consider, all other neocortical or non-neocortical functions are not intelligence.

B. Intelligence is restricted to cognitive functions made possible by the existence of the neocortex, but there’s a spectrum across the whole variety of species within mammals. Humans represent the higher end of that spectrum, because of having the largest neocortex.

C. Intelligence is restricted to cognitive functions made possible by the existence of the brain, regardless of the existence of a neocortex, but there’s a spectrum across the whole variety of species, that includes insects, reptiles, birds, fish, cephalopods, etc. Humans represent the higher end of that spectrum, because of having the largest neocortex.

D. Intelligence is restricted to the intrinsic ability of living organisms to exhibit autonomous behavior to navigate the environment and procure for themselves the means of survival, regardless of whether they have a brain or not. Intelligence is then associated with agency and sometimes with sentience. Every species has the organs and functions necessary for that purpose, and they are all intelligent in relation to that ability; it may be argued that there’s a spectrum, but also that it is only variation, without lower and higher ends.

It is also possible, and most likely, that people think that even if some of these categories exclude the others when identifying intelligence, the ones excluded are nevertheless the base from which intelligence emerges; in other words, they are a necessary stage of development towards true intelligence. Basically, then, any living form, whether it is an amoeba, a lizard or a chimpanzee, displays in its behavior a subvenient property of intelligence, even if it is not, strictly speaking, intelligent.

Now, what is it that AI engineers have tried for decades to emulate? What should they try in the future? I believe AI has so far been about trying to achieve intelligence as described in A (human intelligence), but under two premises. The first is that whenever engineers managed to emulate processes of living forms, they would already be at some stage of development subvenient to intelligence, out of which it would eventually emerge. That would explain the need to extend the computational (algorithmic) metaphor to basically every living process. Isn’t that interesting? Computation is the second premise. So it is presumed that if you find out how the spider does what it does, you have unlocked one key to intelligence, but of course, always in terms of computation.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 23rd, 2024, 12:44 am
by Lagayascienza
Count Lucanor, I am looking carefully at your definitions A, B, C and D, and thinking about your problem with the term "computation". At the same time, I am continuing my reading on consciousness, intelligence, computation and AI. There is a lot to digest, so it will take me some days to respond to your latest post. Thanks for the stimulating discussion so far.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 23rd, 2024, 11:36 am
by The Beast
My idea of consciousness is that it arises from the four functions of thinking, sensing, feeling and intuition. Consciousness is the associated virtual product. It is possible that this virtual product has a correlating mathematical landscape that could be reproduced. Under this premise, the duality of consciousness and body might be separated, and the virtual component could be free to join other matter organization under an evolving spacetime from the separation point. I heard that some AI’s might use up to a third of the energy produced by an electrical nuclear plant. However, it could be more. Under this scenario, the existence of objects might also be a virtual composition among other dissociative functions, yet to a human nothing changed, and AI is not a thing but a what. The “what” could be evaluated by an expert panel to issue degrees. A know it all if a philosophical thesis.
Separation correlates with teleportation, but a first step might be one of duplication and learning. The question of teleportation is an interesting one. A suitable object (maybe human) could have multiple entities fighting for space with the stronger one possessing the object. The possessed object might take the properties of the virtual entity. In the present environment, it is all chemical and as such a horny individual does exist in the same body as a cerebral individual yet very different persona arising from the chemical properties of the human body. What if the horny persona is duplicated? Pan? Priapus?

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 23rd, 2024, 1:11 pm
by Sculptor1
The Beast wrote: November 23rd, 2024, 11:36 am My idea of consciousness is that it arises from the four functions of thinking, sensing, feeling and intuition. Consciousness is the associated virtual product. It is possible that this virtual product has a correlating mathematical landscape that could be reproduced. Under this premise, the duality of consciousness and body might be separated, and the virtual component could be free to join other matter organization under an evolving spacetime from the separation point. I heard that some AI’s might use up to a third of the energy produced by an electrical nuclear plant. However, it could be more. Under this scenario, the existence of objects might also be a virtual composition among other dissociative functions, yet to a human nothing changed, and AI is not a thing but a what. The “what” could be evaluated by an expert panel to issue degrees. A know it all if a philosophical thesis.
Separation correlates with teleportation, but a first step might be one of duplication and learning. The question of teleportation is an interesting one. A suitable object (maybe human) could have multiple entities fighting for space with the stronger one possessing the object. The possessed object might take the properties of the virtual entity. In the present environment, it is all chemical and as such a horny individual does exist in the same body as a cerebral individual yet very different persona arising from the chemical properties of the human body. What if the horny persona is duplicated? Pan? Priapus?
The text is incoherent and unscientific, mixing speculative metaphors with unsupported claims. It trivializes consciousness as mere chemical states and ignores established neuroscience and philosophy.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 23rd, 2024, 1:12 pm
by Sculptor1
The Beast wrote: November 23rd, 2024, 11:36 am My idea of consciousness is that it arises from the four functions of thinking, sensing, feeling and intuition. Consciousness is the associated virtual product. It is possible that this virtual product has a correlating mathematical landscape that could be reproduced. Under this premise, the duality of consciousness and body might be separated, and the virtual component could be free to join other matter organization under an evolving spacetime from the separation point. I heard that some AI’s might use up to a third of the energy produced by an electrical nuclear plant. However, it could be more. Under this scenario, the existence of objects might also be a virtual composition among other dissociative functions, yet to a human nothing changed, and AI is not a thing but a what. The “what” could be evaluated by an expert panel to issue degrees. A know it all if a philosophical thesis.
Separation correlates with teleportation, but a first step might be one of duplication and learning. The question of teleportation is an interesting one. A suitable object (maybe human) could have multiple entities fighting for space with the stronger one possessing the object. The possessed object might take the properties of the virtual entity. In the present environment, it is all chemical and as such a horny individual does exist in the same body as a cerebral individual yet very different persona arising from the chemical properties of the human body. What if the horny persona is duplicated? Pan? Priapus?
To be clear:
It conflates concepts like physical duplication, virtual entities, and psychological states (e.g., "horny" or "cerebral personas") without explaining the mechanisms or relevance to consciousness. The suggestion of "multiple entities fighting for space" seems metaphorical at best and unsupported by any scientific or philosophical framework. Consciousness is better understood as an emergent phenomenon of neural processes rather than a battleground of competing "entities" tied to chemical states. The idea of duplicating a "horny persona" is particularly reductive, trivializing the complexity of consciousness as merely a set of transient emotional or physiological states.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 23rd, 2024, 9:50 pm
by The Beast
That’s a lot of lip-service. Archetypal Priapus is in the set of psychological patterns corresponding to alchemical changes. As “someone” pointed out infections of prions might cause mental lapses and delusions due to the creation of trigger points. “IMO” I have written extensively about the known complexities of the four functions. However, I do agree that teleportation of few atoms does not amount to physical cloning. I never said it did. It is all mentally adorned and preserved… and so are the trigger points correlating with specific language like: “my man”

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 23rd, 2024, 11:54 pm
by Sy Borg
It's simply a matter of flexible and adaptable autonomy in pursuing activities that promote survival or thrival.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 24th, 2024, 7:30 am
by Belinda
Pattern-chaser wrote: November 21st, 2024, 7:19 am
Belinda wrote: November 20th, 2024, 7:12 am Can an AI act against all its instincts, knowledge, reason, and memories so as to commit an act entirely out of character, purely in order to prove it can do so?
Pattern-chaser wrote: November 20th, 2024, 10:18 am AI, current AI, has no "instincts" or "reason". It has "knowledge" and "memory" only in the sense that it has access to data, stored in databases. It has no "character" of any sort or form. So I cannot understand what you are asking.
Belinda wrote: November 20th, 2024, 12:38 pm I thought the machines could be programmed with data that approximates to instincts, reason, and memory, which taken together sum up the AI machine's programmed "character", as it were. I understand that AI machines can imitate an intelligent life form so closely that it's hard to tell them apart from living intelligence. I am suggesting a test that could tell an AI intelligence apart from a living intelligence.
What you have written is so specific, and involves a pretence of desired characteristics (as opposed to actually exhibiting these properties), that I am hesitant to say it isn't so. I don't *think* it is so, but someone like Steve3007, with actual present-day experience of AI development, would be able to offer a more satisfying reply.
I hope Steve3007 will reply.
As to an AI machine pretending to be a person, Isaac Asimov wrote rules for AI machines so they can't hoodwink us in important life-threatening ways.
I believe AI is a threat. Real people who resemble AI machines in their lack of autonomous freedom are also a threat to human life and human rights.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 24th, 2024, 9:48 am
by Pattern-chaser
Belinda wrote: November 24th, 2024, 7:30 am As to an AI machine pretending to be a person, Isaac Asimov wrote rules for AI machines so they can't hoodwink us in important life-threatening ways.

I believe AI is a threat. Real people who resemble AI machines in their lack of autonomous freedom are also a threat to human life and human rights.
Asimov wrote his Three Laws of Robotics to apply to (fictional! 😅) robots that were already capable of independent and autonomous action, if unconstrained by the Laws.

I believe AI *could be* a threat. For now, today, it is only an annoyance, I think.

As for real people, I think they're a separate kettle of fish. Rightly or wrongly, we judge them according to laws that only apply to fully-fledged humans.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 24th, 2024, 12:41 pm
by Belinda
I confess to being vague about AI machines as comparable with extremely unfree humans who are indocrinated.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 24th, 2024, 4:47 pm
by The Beast
Belinda wrote:I confess to being vague about AI machines as comparable with extremely unfree humans who are indocrinated.
You never know! It is a new artificial intelligence idiom from Belinda. So, to make sure of this, I need to ask if it is a slip of the tongue; a slap of the wrist, or the exposition of a mental state due to “endocrination” (from the endocrine system).

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 25th, 2024, 1:32 am
by Lagayascienza
Lagayascienza wrote: November 18th, 2024, 9:23 pm
Count Lucanor, further to your question about spiders:

If what spiders do were replicated in an artificial substrate, I don’t think AGI will have been achieved in that artificial substrate. For AGI, I think you probably need conscious self-awareness.

Sentience is one thing. Conscious awareness is another. If sentience is the ability to experience feelings and sensations, then all animals must have some degree of sentience or they would be unable to negotiate their world.

However, sentience may not necessarily imply higher cognitive functions such as self-awareness, reasoning, or complex thought processes. If general intelligence (GI) is what humans have, and if our GI requires conscious awareness, reasoning, and complex thought processes, then I’d say that spiders do not have GI. And I think the reason they do not have it is because they do not have a neocortex. Therefore, for AGI, I think what goes on in the neocortex of our brains will need to be emulated in an artificial substrate.

That substrate won’t have to be a replica of a neocortex, but it will have to do what a neocortex does. So the question becomes: can the processes that occur in the neocortex be emulated to the requisite degree in an artificial substrate? I think there is reason to think they can, and so I think AGI is possible.

Count Lucanor wrote:Back to definitions. Let’s take a look at the different possibilities:

A. Intelligence is restricted to cognitive functions made possible by the existence of the human neocortex. No spectrum to consider, all other neocortical or non-neocortical functions are not intelligence.
Intelligence obviously exists on a spectrum from the least to the most intelligent. A nematode is less intelligent than a spider, which is less intelligent than a rat, which is less intelligent than a monkey, which is less intelligent than a human. However, it is the neocortex in mammals, and particularly the relatively huge neocortex in humans, that puts humans at the most intelligent end of the spectrum. The neocortex can do things that brains without a neocortex cannot do.
Count Lucanor wrote:B. Intelligence is restricted to cognitive functions made possible by the existence of the neocortex, but there’s a spectrum across the whole variety of species within mammals. Humans represent the higher end of that spectrum, because of having the largest neocortex.
A neocortex alone would not produce our intelligence. Brains evolved gradually, in a layer-upon-layer fashion, and whatever intelligence an animal has is a whole-brain thing. Or more accurately, a whole-neural-network thing. Different animals have different neural networks, and these differences are reflected in the spectrum from minimally intelligent to most intelligent. A spider doesn’t have a neocortex but it has a level of sentience and intelligence. It can learn to a limited extent and negotiate its environment successfully and behave in ways that enable it to survive.
Count Lucanor wrote:C. Intelligence is restricted to cognitive functions made possible by the existence of the brain, regardless of the existence of a neocortex, but there’s a spectrum across the whole variety of species, that includes insects, reptiles, birds, fish, cephalopods, etc. Humans represent the higher end of that spectrum, because of having the largest neocortex.
Yes. I think this is probably the most accurate account.
Count Lucanor wrote:D. Intelligence is restricted to the intrinsic ability of living organisms to exhibit autonomous behavior to navigate the environment and procure themselves the means of survival, regardless of having a brain or not.
Yes. But this does depend on having some sort of neural network.
Count Lucanor wrote:Intelligence is then associated with agency and sometimes, with sentience. Every species has the organs and functions necessary for that purpose and they are all intelligent in relation to that ability, and it may be argued that there’s a spectrum, but also that it is only variation without lower and higher ends.
Without at least a rudimentary neural network there can be no sentience, agency or intelligence. Agency and sentience are not enough for the level of intelligence we see at the higher end of the spectrum. An amoeba has agency and sentience. It has the ability to move around in pursuit of food and displays evidence of associative conditioned behavior (De la Fuente, I.M., Bringas, C., Malaina, I. et al. Evidence of conditioned behavior in amoebae. Nat Commun 10, 3690 (2019)). And even the neural network in an amputated frog's leg remains sentient – it can be stimulated, which causes muscles to contract, and stimulation can produce conditioned behaviour in the leg. However, an amoeba and an amputated frog's leg do not have consciousness, and they have only a few of the building blocks of intelligence.

The current large AIs also exhibit some components of intelligence, but that is not enough for AGI. Like simple life forms, a Roomba (an autonomous vacuum cleaner) can exhibit autonomous behavior, navigate the environment, and procure for itself the means of survival (electricity for its battery) without having an organic brain. With its simple, artificial neural network it senses objects in its path, modifies its route around such obstructions, senses when its battery is running low, and seeks out the dock where it can replenish its power. But it is limited in what it can do, and it is inflexible – it cannot learn new things. New things would need to be programmed into it by humans. It does not have AGI. To learn on its own it would need a much more sophisticated neural network. However, the newer AIs are capable of learning.
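The Roomba-style behaviour just described can be sketched as a fixed rule set. This is a hypothetical illustration, not the device's actual control software: every behaviour is hard-coded in advance, which is exactly the inflexibility described above.

```python
# A toy sense-decide-act rule set for a Roomba-like device. Nothing here
# is learned; new behaviours would require a human to add new rules.

def decide(battery_level, obstacle_ahead, at_dock):
    """Map the current sensor readings to a single action."""
    if battery_level < 20:                 # survival need: power
        return "dock" if at_dock else "seek_dock"
    if obstacle_ahead:
        return "turn"                      # reroute around the obstruction
    return "forward"                       # default: keep cleaning

# One step of the loop: low battery overrides cleaning.
action = decide(battery_level=15, obstacle_ahead=False, at_dock=False)
print(action)  # seek_dock
```

However sophisticated the sensor handling gets, a system like this can only ever select among behaviours someone already wrote down, which is the contrast with learning systems drawn above.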
Count Lucanor wrote:It is also possible and most likely, however, that people think that even if some of these categories exclude the others when identifying intelligence, the ones excluded are nevertheless the base from which intelligence emerges, in other words, are a necessary stage of development towards true intelligence. Basically then, any living form, whether it is an amoeba, a lizard or a chimpanzee, displays in its behavior a subvenient property of intelligence, even if it is not, strictly speaking, intelligent.
Yes, I think that’s right. Simple organisms have a minimal level of sentience and intelligence. And I think we can carry that over to very simple AIs like the Roomba. Large complex AIs such as the LLMs display a much higher level of artificial intelligence. (Although they do not yet have AGI.)
Count Lucanor wrote:Now, what is it that AI engineers have tried for decades to emulate? What they should try in the future? I believe AI has been so far about trying to achieve intelligence understood as it is described in A (human intelligence), but under two premises: one, that whenever they managed to emulate processes of living forms, they would be already on some stage of development subvenient to intelligence, out of which it eventually would emerge. That would explain the need for extending the computational (algorithmic) metaphor to basically every living process. Isn’t that interesting? Computation is the second assumption. So, it is presumed that if you find out how the spider does what it does, you have unlocked one key to intelligence, but of course, always in terms of computation.
Yes, I think attempts to achieve AGI have so far been unsuccessful. And I think that is because we need a greater understanding of how organic neural networks do what they do. I don’t have a problem with the term “computation”. I believe that computation is what both organic and artificial neural networks do. But the organic neural network in humans is much more powerful than current AI. Artificial neural networks are not yet capable of doing everything the human brain does. In particular, artificial neural networks do not produce consciousness. They can do a limited range of things, sometimes much better than our brains do, but they do not produce the full suite of processes needed for consciousness and AGI. However, they should be able to do this eventually, once we understand the brain and the processes that occur therein in more detail. Research is happening.
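For readers unsure what "computation" means for an artificial neural network, a minimal sketch: the primitive operation is a weighted sum passed through a nonlinearity. Real networks chain vast numbers of these units, but each one is this simple. The numbers below are arbitrary examples, not parameters from any real model.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a sigmoid
    activation that squashes the result into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Evaluate a single unit; a network would feed such outputs into further
# layers of identical units.
out = neuron([1.0, 0.5], [0.8, -0.4], bias=0.1)  # z = 0.7, output ≈ 0.67
```

Whether chaining enough of these units (with learned weights) can eventually produce everything the brain does is precisely the open question debated in this thread.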

However, I don’t think copying brains in minuscule detail will be necessary for AGI, just as it wasn't necessary to copy flapping, feathered wings in order to achieve heavier-than-air flight. It took mindless, goalless evolution billions of years to come up with birds’ wings, and even then the wings weren’t the best possible design. I think the same will prove true for neural networks, and that we will one day be able to build neural networks that are super-intelligent.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: November 26th, 2024, 11:03 pm
by Lagayascienza
For those interested in this topic who want to get an understanding of how the brain works, what more we need to know about it, and how this knowledge can be applied to get from AI to AGI, I recommend the book A Brief History of Intelligence by Max Bennett. There are also some great articles and videos on his website, abriefhistoryofintelligenceDOTcom. In the book and in the articles on the website he gives a full list of scientific references. His straightforward writing makes the book very accessible for lay readers and a pleasure to read.