Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: December 31st, 2024, 12:42 pm
by Count Lucanor
Lagayascienza
That’s a lot of homework, but I’m not giving up.
I wish for you and everyone else in this forum to have great end-of-year holidays and an excellent 2025.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: December 31st, 2024, 7:23 pm
by Lagayascienza
Thanks, Count Lucanor. Happy New Year to you and all.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: January 7th, 2025, 11:43 am
by Count Lucanor
Lagayascienza
Let’s get back into definitions of consciousness and intelligence. I’m aware now that along the way some other terms will have to be defined, such as life, sentience and, why not, computing. I'm pretty sure we'll also end up needing definitions of words such as reasoning, abstraction, etc. We must also be aware that these definitions are needed just for convenience, to try to grasp our intuitions, because there's simply no way to say exactly how it is, as you would when defining, for example, an electron.
In general, to tackle the problem we are facing, I see a more promising approach in looking at the relationship between the organic configurations of living agents and the types of experiences they are capable of, including conscious states and processes associated with intelligent capabilities, in light of what we actually observe in nature across all kingdoms. That is, I want to look at what the enabling conditions of consciousness are, and then at which types of experiences can be called consciousness and intelligence. To go on and agree on basic concepts, then, we have to assume that, at least until now, having experiences is a faculty reserved for animate, living things. We have to leave out things like rocks, air, water, etc. Speaking theoretically, though, that doesn't mean they couldn't be made by humans to have experiences artificially.
So, first, what is life? It has to be anything that, by its own nature, has an intrinsic need to procure its means of existence, survival and reproduction.
What is sentience? It is the ability of living things to have feelings. But what is a feeling? I would say a self-contained perception of a body state, obtained through sensation, with an assignment of value by the organism, negative or positive, expressed as a desirable or undesirable state. What is sensation, then? It is the primary physical interaction between the environment and the organism, which produces stimuli that allow the organism to respond to those conditions. In sentient organisms, feelings are a way of regulating sensations to help the organism better cope with environmental challenges.
I find it appropriate to differentiate between what could be called levels of organic life, which, from the most basic living forms to the more complex, progressively add the corresponding experiential states. In this I’m trying to incorporate some insights from the philosophical anthropologist Helmuth Plessner. So, first we have organisms without a nervous system, which are in the basic self-regulatory state. It is the state of a self-sustaining, autonomous organism procuring its means of survival. Biochemical, deterministic interactions drive the behavior, which is constrained to reacting to environmental conditions, and that includes non-associative learning. Unicellular organisms and other basic microscopic forms fall within this category, as do plants, fungi, etc. All of these organisms have to sense the world to navigate through it and survive, so I would say they are all capable of sensation, although they are not sentient. No feelings at all. In advance, I can also hint that there is no consciousness, no intelligence, no experience as such. Agency here is not real, just apparent.
Then come organisms with a decentralized nervous system, mostly invertebrates, for which there is a variety of configurations and levels of complexity, from corals, anemones and jellyfish to cephalopods. Some of these lack brains; they can react to stimuli and learn while not being sentient. They do not have cognitive experiences and are close to organisms with basic self-regulatory states. Others have brains, feelings and sensations, and have what I call the experiential state: they can organize the signals from their sensory organs into categorizable patterns of sensations that change the structure and functions of the neural matter comprising the nervous system, in order to drive adaptive responses to the particular conditions through which the organism navigates. Since this is an active process, in which the structure of the organism’s nervous system and its responses are continuously updated, there is real agency, intention, because a response is not a simple reaction but an outward projection from the internal structure of the organism. The experiential state is what I identify with consciousness, awareness, and it conveys the notion that the organism acts in accordance with a recognized pattern of inner/outer boundaries. I must suppose there is indeed a spectrum of consciousness within the types of experiences in these organisms, with some having relatively simple qualitative experiences closer to a predetermined, automatic response, while others are quite complex and qualitatively rich, a true psychology, as is the case with octopuses.
One key aspect of nervous systems is neuroplasticity, their ability to change their physical structure and functions in response to experiential events, and to sustain those changes over time. This is the basis of what we call memory, which exists in all beings with a nervous system.
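The idea that memory is sustained structural change is often illustrated with Hebb's rule ("cells that fire together wire together"). Here is a minimal sketch in Python; the single weight, the activations and the learning rate are purely illustrative assumptions, not a model of any real nervous system:

```python
# Minimal Hebbian learning sketch: a "synaptic" weight grows when
# pre- and post-synaptic units are active together, and the change
# persists -- a toy analogue of memory as sustained structural change.

def hebbian_update(weight, pre, post, rate=0.1):
    """Strengthen the connection in proportion to co-activation."""
    return weight + rate * pre * post

weight = 0.0
# Repeated paired activation (an "experiential event") leaves a trace.
for _ in range(10):
    weight = hebbian_update(weight, pre=1.0, post=1.0)

print(round(weight, 2))  # the connection has strengthened
```

The point of the sketch is only that the change outlives the stimulus: after the loop ends, the weight stays raised, which is the sense in which plasticity underwrites memory.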
Then comes the level of organic life represented by organisms with a central nervous system. There is also a distribution of features here, from the simplest to the most complex, which explains the different experiential states, including those similar to the experiences of organisms with decentralized nervous systems and brains. Perhaps it is just another of nature's solutions, not necessarily better than those of the invertebrates, just one adapted to other organisms that appeared later on the evolutionary map. But in any case, we are also including here organisms with larger brains and other neural structures, like the cortex and neocortex, so it goes from insects to reptiles, to birds and mammals, which have the largest neocortex. At this higher level of experiential state, organisms have an increased ability to sense the world as an outside boundary within which the agent navigates, and to act based on the patterns constructed from sensory inputs and on the changes in the structure of the nervous matter produced by associative learning, which implies memory. Bodily responses, feelings and sensations, along with the perceived environment, are unified in a single experience, an internal model or representation of the world, a set of patterns that forms the content of mental life and feeds into behavioral decisions. This ability increases in capacity and complexity in mammalian brains, due to the existence of the frontal and prefrontal cortex. In the most sophisticated version of this modeling of the world, thanks to the prefrontal cortex, we find the ability to internalize symbols, associate and organize them in categories (abstractions) and reproduce them in social life. Broca's area in the frontal cortex of humans is involved in the production of speech, regulating the intentional construction of meaning (semiosis) from verbal expressions, although other, non-verbal actions of the agent can also produce meaning.
While we've been concerned so far with consciousness, let's not forget that all organisms, from the simplest to the most complex, have systems that capture inputs from the environment through their senses and execute automatic processes via their nervous system, without any awareness of them. Most bodily functions work this way. Not only that: the cognitive functions that do operate with consciousness are preconfigured to select only the relevant inputs, so we apprehend the world through filters, completely unconscious of them. Even though all eyes see, all ears hear and all noses smell, under the same conditions they do not see, hear and smell the same across all groups of living beings. What this seems to point to is a non-direct relationship between neural matter and consciousness, and the strong possibility that this function is an emergent property arising from particular configurations of that matter in neural structures, in conjunction with other tissues.
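The point about preconfigured filters can be caricatured in a few lines of Python: the same environment yields a different percept depending on which channels a given sensory configuration admits. The channels, values and species entries below are illustrative stand-ins, not biological data, though the broad facts (bees see ultraviolet, bats echolocate, humans do neither) are real:

```python
# Sketch of preconfigured sensory filtering: one shared world,
# differently "apprehended" depending on the organism's filters.

ENVIRONMENT = {"ultraviolet": 0.8, "red": 0.4, "ultrasound": 0.9, "speech": 0.6}

SENSORY_FILTER = {
    "bee":   {"ultraviolet", "red"},   # sees UV, deaf to ultrasound
    "bat":   {"ultrasound"},           # echolocates, blind to UV
    "human": {"red", "speech"},        # no UV vision, no ultrasound
}

def perceive(species):
    """Return only the stimuli this organism's configuration admits."""
    allowed = SENSORY_FILTER[species]
    return {k: v for k, v in ENVIRONMENT.items() if k in allowed}

print(perceive("bee"))  # same world, different percept per species
```

None of the three "species" ever encounters the unfiltered ENVIRONMENT, which is the sense in which the filters themselves remain outside awareness.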
So, getting to the key definitions: consciousness will be all that you find in the continuous experiential state of organisms, formed by the association of sensory events structured in distinct patterns, with a perceived separation between inner and outer life, although as such it is not a purely "internal thing", an inner life separate from but in contact with the body. It is the living body of the agent interacting with the environment it has come into contact with. There are different levels, depending on the nervous structures of the organism, and these levels consist of different systems of representation of the world (both the external environment and the active reflexes of the organism's own body), which constitute the structure of mental life. Can we say that insects are conscious? Nobody can tell whether they have a mental life or not, but if they do, it will be at a basic level, which is not that of a cow or a dolphin. Perhaps a bee or an ant has memory, but it will not be of the same quality as that of a dog, an elephant or a monkey, because these incorporate more complex experiential states.
What is general intelligence (GI)? I would propose that it is the ability to build and retrieve concepts that represent future states of the world (including the organism's own states), as a means to predict outcomes and cope with the environment. It implies imagination as a driver of action, which generates new inputs and updates the organism's responses. In other words, GI is the ability of a living, conscious organism to organize its experience in time and space, through mental representations, and to dynamically direct its behavior towards predicted outcomes. Specialized intelligence (SI), I believe, is the application of GI to specific problems in specific environments.
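This "intelligence as prediction" idea can be sketched in a few lines: an agent keeps an internal representation of one feature of its world, revises it with each experience, and acts on the predicted rather than the raw current state. The update rule, the threshold and the food scenario are all illustrative assumptions, not a claim about how any organism actually works:

```python
# Toy sketch of prediction-driven behavior: the agent maintains an
# internal estimate (a stand-in for a mental representation), updates
# it from experience, and directs action by the predicted state.

class PredictiveAgent:
    def __init__(self, rate=0.5):
        self.estimate = 0.0   # internal model of food availability
        self.rate = rate      # how fast experience revises the model

    def observe(self, food_seen):
        # Move the estimate toward what was actually experienced.
        self.estimate += self.rate * (food_seen - self.estimate)

    def act(self):
        # Behavior follows the prediction, not the latest raw input.
        return "forage here" if self.estimate > 0.5 else "move on"

agent = PredictiveAgent()
for obs in [1.0, 1.0, 0.0, 1.0]:  # a short history of experiences
    agent.observe(obs)
print(agent.act())
```

Note that after the mixed history above the agent still forages, because its accumulated representation, not the single most recent observation, drives the decision; that gap between stimulus and response is the minimal sense of "directing behavior towards predicted outcomes".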
Since all living organisms ultimately respond to the same basic natural needs, it is very likely that the same general results are achieved through different means, so the outcomes are not necessarily produced by the same processes. But since conscious phenomena are not directly observable, we make extrapolations from behavioral patterns, which may well not be true.
I'll leave it here for now. Next station: what does AI have to do with all of this?
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: January 8th, 2025, 9:47 am
by Lagayascienza
Count Lucanor wrote:Let’s get back into definitions of consciousness and intelligence. I’m aware now that along the way some other terms will have to be defined, such as life, sentience and, why not, computing. I'm pretty sure we'll also end up needing definitions of words such as reasoning, abstraction, etc. We must also be aware that these definitions are needed just for convenience, to try to grasp our intuitions, because there's simply no way to say exactly how it is, as you would when defining, for example, an electron.
Thanks, Count Lucanor for your detailed post.
I’m with you on the above para. Nothing to disagree with.
Count Lucanor wrote:In general, to tackle the problem we are facing, I see a more promising approach in looking at the relationship between the organic configurations of living agents and the types of experiences they are capable of, including conscious states and processes associated with intelligent capabilities, in light of what we actually observe in nature across all kingdoms. That is, I want to look at what the enabling conditions of consciousness are, and then at which types of experiences can be called consciousness and intelligence. To go on and agree on basic concepts, then, we have to assume that, at least until now, having experiences is a faculty reserved for animate, living things. We have to leave out things like rocks, air, water, etc. Speaking theoretically, though, that doesn't mean they couldn't be made by humans to have experiences artificially.
Ok, yes, I think we have to leave out rocks and other inanimate things. Otherwise we’d have to consider whether the whole universe is conscious, which would lead us to Idealism. Then the issue of AI might never again see the light of day: too many philosophical rabbit holes to get lost in.
Count Lucanor wrote:So, first, what is life? It has to be anything that, by its own nature, has an intrinsic need to procure its means of existence, survival and reproduction.
Yes, procuring its means of existence, survival and reproduction. I think we can work with that definition.
Count Lucanor wrote:What is sentience? It is the ability of living things to have feelings. But what is a feeling? I would say a self-contained perception of a body state, obtained through sensation, with an assignment of value by the organism, negative or positive, expressed as a desirable or undesirable state. What is sensation, then? It is the primary physical interaction between the environment and the organism, which produces stimuli that allow the organism to respond to those conditions. In sentient organisms, feelings are a way of regulating sensations to help the organism better cope with environmental challenges.
I might reword the underlined phrase here to include some sort of sensory array – eyes, ears, and receptors for touch, heat and cold, taste and smell. It is from these receptors that signals go via the PNS to the CNS for processing and valuation. Also, I’m not sure what you mean by “regulating sensations”. Could you unpack that for me?
Count Lucanor wrote:I find it appropriate to differentiate between what could be called levels of organic life, which, from the most basic living forms to the more complex, progressively add the corresponding experiential states. In this I’m trying to incorporate some insights from the philosophical anthropologist Helmuth Plessner. So, first we have organisms without a nervous system, which are in the basic self-regulatory state. It is the state of a self-sustaining, autonomous organism procuring its means of survival. Biochemical, deterministic interactions drive the behavior, which is constrained to reacting to environmental conditions, and that includes non-associative learning. Unicellular organisms and other basic microscopic forms fall within this category, as do plants, fungi, etc. All of these organisms have to sense the world to navigate through it and survive, so I would say they are all capable of sensation, although they are not sentient. No feelings at all. In advance, I can also hint that there is no consciousness, no intelligence, no experience as such. Agency here is not real, just apparent.
Yes, I agree with this para.
I have downloaded and started reading Plessner’s “The Levels of Organic Life and the Human: A Systematic Reconstruction”.
I am completely unfamiliar with Plessner. I see that he was influenced by Husserl’s phenomenology and, from what I have read thus far, his approach is more philosophical than scientific. That’s fine but, at some point, we have to get to grips with modern neuroscience and computer science if we are going to understand intelligence and be able to consider the possibilities for AI and AGI. I don’t think pure philosophy is up to the task.
(Did you manage to take a look at the Maley paper entitled “How (and why) to think that the brain is literally a computer”? It might be useful down the track.)
Count Lucanor wrote:Then come organisms with a decentralized nervous system, mostly invertebrates, for which there is a variety of configurations and levels of complexity, from corals, anemones and jellyfish to cephalopods. Some of these lack brains; they can react to stimuli and learn while not being sentient. They do not have cognitive experiences and are close to organisms with basic self-regulatory states. Others have brains, feelings and sensations, and have what I call the experiential state: they can organize the signals from their sensory organs into categorizable patterns of sensations that change the structure and functions of the neural matter comprising the nervous system, in order to drive adaptive responses to the particular conditions through which the organism navigates. Since this is an active process, in which the structure of the organism’s nervous system and its responses are continuously updated, there is real agency, intention, because a response is not a simple reaction but an outward projection from the internal structure of the organism. The experiential state is what I identify with consciousness, awareness, and it conveys the notion that the organism acts in accordance with a recognized pattern of inner/outer boundaries. I must suppose there is indeed a spectrum of consciousness within the types of experiences in these organisms, with some having relatively simple qualitative experiences closer to a predetermined, automatic response, while others are quite complex and qualitatively rich, a true psychology, as is the case with octopuses.
Again, I find nothing to disagree with here. Octopuses are very intelligent and a favourite animal from my scuba diving days. Their brains are extraordinary: they can complete puzzles, untie knots, open jars and toddler-proof cases, and they are expert escape artists from aquariums. Their intelligence evolved along a completely separate path from human intelligence, and about two-thirds of their neurons are in their arms, not their head. A distributed brain! Fascinating!
Count Lucanor wrote:One key aspect of nervous systems is neuroplasticity, their ability to change their physical structure and functions in response to experiential events, and to sustain those changes over time. This is the basis of what we call memory, which exists in all beings with a nervous system.
Yes. Neuroplasticity seems to be very important. Something has to change at the level of synapses – new connections – in order to lay down memories and to learn.
Count Lucanor wrote:Then comes the level of organic life represented by organisms with a central nervous system. There is also a distribution of features here, from the simplest to the most complex, which explains the different experiential states, including those similar to the experiences of organisms with decentralized nervous systems and brains. Perhaps it is just another of nature's solutions, not necessarily better than those of the invertebrates, just one adapted to other organisms that appeared later on the evolutionary map.
The evolution of bilateral symmetry and directionality, with the brain at the front and the butt at the back, was very important. For one thing, it facilitated steering towards and chasing down food. Max Bennett deals with this at length in his book “A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains”.
Count Lucanor wrote:But in any case, we are also including here organisms with larger brains and other neural structures, like the cortex and neocortex, so it goes from insects to reptiles, to birds and mammals, which have the largest neocortex. At this higher level of experiential state, organisms have an increased ability to sense the world as an outside boundary within which the agent navigates, and to act based on the patterns constructed from sensory inputs and on the changes in the structure of the nervous matter produced by associative learning, which implies memory. Bodily responses, feelings and sensations, along with the perceived environment, are unified in a single experience, an internal model or representation of the world, a set of patterns that forms the content of mental life and feeds into behavioral decisions. This ability increases in capacity and complexity in mammalian brains, due to the existence of the frontal and prefrontal cortex. In the most sophisticated version of this modeling of the world, thanks to the prefrontal cortex, we find the ability to internalize symbols, associate and organize them in categories (abstractions) and reproduce them in social life. Broca's area in the frontal cortex of humans is involved in the production of speech, regulating the intentional construction of meaning (semiosis) from verbal expressions, although other, non-verbal actions of the agent can also produce meaning.
Yes, as neuronal networks become larger and more complex we see a greater range of behaviour, more sophisticated problem-solving and, presumably, a richer inner conscious life.
Count Lucanor wrote:While we've been concerned so far with consciousness, let's not forget that all organisms, from the simplest to the most complex, have systems that capture inputs from the environment through their senses and execute automatic processes via their nervous system, without any awareness of them. Most bodily functions work this way. Not only that: the cognitive functions that do operate with consciousness are preconfigured to select only the relevant inputs, so we apprehend the world through filters, completely unconscious of them. Even though all eyes see, all ears hear and all noses smell, under the same conditions they do not see, hear and smell the same across all groups of living beings. What this seems to point to is a non-direct relationship between neural matter and consciousness, and the strong possibility that this function is an emergent property arising from particular configurations of that matter in neural structures, in conjunction with other tissues.
In natural organisms yes, neural networks need other tissues and organs to support them. However, this may not be necessary in artificial thinking machines that run on electricity without any need for organic food, ATP and metabolism, or a circulatory system and associated organs to keep fluids and O2 flowing and to remove waste.
Count Lucanor wrote:So, getting to the key definitions: consciousness will be all that you find in the continuous experiential state of organisms, formed by the association of sensory events structured in distinct patterns, with a perceived separation between inner and outer life, although as such it is not a purely "internal thing", an inner life separate from but in contact with the body. It is the living body of the agent interacting with the environment it has come into contact with.
Are you saying that an awareness of the body interacting with the environment is a key component of consciousness? If so, then I agree.
Count Lucanor wrote:There are different levels, depending on the nervous structures of the organism, and these levels consist of different systems of representation of the world (both the external environment and the active reflexes of the organism's own body), which constitute the structure of mental life.
Yes, I think representations of the world are crucial. By “structure of mental life” I take it you are referring to these representations and their manipulation by the organism. But I’m unclear what “reflexes” have to do with the structure of mental life.
Count Lucanor wrote:Can we say that insects are conscious? Nobody can tell whether they have a mental life or not, but if they do, it will be at a basic level, which is not that of a cow or a dolphin. Perhaps a bee or an ant has memory, but it will not be of the same quality as that of a dog, an elephant or a monkey, because these incorporate more complex experiential states.
Yes, the higher the level of consciousness the richer, deeper, more varied will be the inner mental life of an animal, I imagine. Bees can memorize where food sources are and communicate this to other worker bees and they recognize a range of pheromones, but I expect a bee’s inner mental life doesn’t get much richer than that.
Count Lucanor wrote:What is general intelligence (GI)? I would propose that it is the ability to build and retrieve concepts that represent future states of the world (including the organism's own states), as a means to predict outcomes and cope with the environment. It implies imagination as a driver of action, which generates new inputs and updates the organism's responses. In other words, GI is the ability of a living, conscious organism to organize its experience in time and space, through mental representations, and to dynamically direct its behavior towards predicted outcomes. Specialized intelligence (SI), I believe, is the application of GI to specific problems in specific environments.
No problem with the definitions in this paragraph. I think we could just say that GI is the ability to organize mental representations in imaginary time and space and to make predictions and direct behaviour towards goals. Both Bennett and Hawkins talk about mental models and prediction. And Maley talks about representations and their manipulation.
Count Lucanor wrote:Since all living organisms ultimately respond to the same basic natural needs, it is very likely that the same general results are achieved through different means, so the outcomes are not necessarily produced by the same processes. But since conscious phenomena are not directly observable, we make extrapolations from behavioral patterns, which may well not be true.
Yes. I think we have to extrapolate and, although there is no guarantee our extrapolations will be true, I think it is reasonable to do so because we cannot experience another being’s consciousness. Not even the consciousness of our conspecifics. But, given our interactions here on the forum, I think it is reasonable for me to infer that you are a conscious and intelligent being. And, hopefully, you will make a similar assessment of me. Further, I think we can do the same with complex non-human animals: I can reasonably infer from their behaviour that my dogs have some sort of conscious inner life and a certain level of intelligence.
Count Lucanor wrote:I'll leave it here for now. Next station: what does AI have to do with all of this?
Ok. Thanks again.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: January 8th, 2025, 6:09 pm
by Sy Borg
Anyone who thinks they can predict what AI will do in decades, centuries or millennia is not paying attention: not to the speed of AI change, to the exponential nature of those changes, or to Earth's history of emergence.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: January 8th, 2025, 8:34 pm
by Lagayascienza
True. I don't think we can predict what AI will develop into. Here on the forum, I don’t think we can do more than speculate. Rather than speculate, what some of us are trying to do is to educate ourselves about consciousness and intelligence so that we can come to some conclusions (or not) about what consciousness and intelligence actually are, and whether they are possible in an artificial substrate. To that end, we are reading the philosophy of mind and the scientific literature from neuroscience and computer science. I wish we had some neuroscientists and computer scientists on the forum to guide us.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: January 9th, 2025, 1:33 am
by Lagayascienza
Thanks, Sy Borg. I was unaware of that. I'll go back and read his contributions.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: January 9th, 2025, 10:13 am
by Pattern-chaser
Sy Borg wrote: ↑December 24th, 2024, 2:19 am
Of course AI is intelligent.
dictionary.com wrote:
intelligent
[ in-tel-i-juhnt ]
Phonetic (Standard)
IPA
adjective
having good understanding or a high mental capacity; quick to comprehend, as persons or animals:
an intelligent student.
Synonyms: bright
Antonyms: stupid
displaying or characterized by quickness of understanding, sound thought, or good judgment:
an intelligent reply.
Synonyms: smart, shrewd, discerning, apt, bright, alert, clever, astute
Antonyms: stupid
having the faculty of reasoning and understanding; possessing intelligence:
intelligent beings in outer space.
Computers. pertaining to the ability to do data processing locally; smart: Compare dumb ( def 8 ).
An intelligent terminal can edit input before transmission to a host computer.
Archaic. having understanding or knowledge (usually followed by of ).
No current AI can meet the above descriptions (except the computer one, but an "intelligent terminal" is at least a million times less 'intelligent' even than my phone, and doesn't apply usefully to AI).
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: January 23rd, 2025, 2:59 pm
by Steve3007
Lagayascienza wrote:I wish we had some neuroscientists and computer scientists on the forum to guide us.
Sy Borg wrote:Steve3007 is a scientist, now doing a masters in AI. He entered the conversation here: https://www.onlinephilosophyclub.com/fo ... &start=195
Thanks for the name-check Sy, but I wouldn't say I'm a scientist. I studied and taught physics previously, and have worked as a software engineer for a long time and have just finished a masters in AI, but I'm not sure what I'm going to do with it now! Use it or lose it I guess. Must now think of AI projects to work on before I forget it all! I keep meaning to get on with some work for the SETI "Breakthrough Listen" project but other things get in the way. I think Pattern-chaser has lots of years of software engineering experience too. And didn't you used to do a bit of programming in the past?
The trouble with any fast moving field like AI is that by the time you've hit on some interesting area of work to look into, other people have beaten you to it.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: January 23rd, 2025, 7:09 pm
by Sy Borg
Steve3007 wrote: ↑January 23rd, 2025, 2:59 pm
Lagayascienza wrote:I wish we had some neuroscientists and computer scientists on the forum to guide us.
Sy Borg wrote:Steve3007 is a scientist, now doing a masters in AI. He entered the conversation here: https://www.onlinephilosophyclub.com/fo ... &start=195
Thanks for the name-check Sy, but I wouldn't say I'm a scientist. I studied and taught physics previously, and have worked as a software engineer for a long time and have just finished a masters in AI, but I'm not sure what I'm going to do with it now! Use it or lose it I guess. Must now think of AI projects to work on before I forget it all! I keep meaning to get on with some work for the SETI "Breakthrough Listen" project but other things get in the way. I think Pattern-chaser has lots of years of software engineering experience too. And didn't you used to do a bit of programming in the past?
The trouble with any fast moving field like AI is that by the time you've hit on some interesting area of work to look into, other people have beaten you to it.
P-C's and my programming experience are not comparable, like comparing an elephant to a flea. My most relevant experience was probably in UAT, but that's all decades ago now.
According to Sabine, ChatGPT, Grok, Meta's Llama (and, I presume, the CCP's DeepSeek) are frontier AI models that are already so far ahead that it's unlikely that any new models will be able to compete. You'd need to start with a whole new paradigm that was inherently more efficient.
Maybe ask Sam, Elon or Mark for a job? :) After all, AI is going to be huge, at least as revolutionary as the internet.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: January 23rd, 2025, 7:14 pm
by Sy Borg
Pattern-chaser wrote: ↑January 9th, 2025, 10:13 am
Sy Borg wrote: ↑December 24th, 2024, 2:19 am
Of course AI is intelligent.
dictionary.com wrote:
intelligent
[ in-tel-i-juhnt ]
Phonetic (Standard)
IPA
adjective
having good understanding or a high mental capacity; quick to comprehend, as persons or animals:
an intelligent student.
Synonyms: bright
Antonyms: stupid
displaying or characterized by quickness of understanding, sound thought, or good judgment:
an intelligent reply.
Synonyms: smart, shrewd, discerning, apt, bright, alert, clever, astute
Antonyms: stupid
having the faculty of reasoning and understanding; possessing intelligence:
intelligent beings in outer space.
Computers. pertaining to the ability to do data processing locally; smart: Compare dumb (def. 8).
An intelligent terminal can edit input before transmission to a host computer.
Archaic. having understanding or knowledge (usually followed by of ).
No current AI can meet the above descriptions (except the computer one, but an "intelligent terminal" is at least a million times less 'intelligent' even than my phone, and doesn't apply usefully to AI).
I think those definitions do not capture the situation. It's said that ants and bees are the most intelligent insects. Do you think any of those definitions apply? If they are not at all intelligent, then what is the difference between them and, say, beetles and fleas? Is it just that they run on more complex natural algorithms? If so, wouldn't that apply to us too, in which case, are we saying that intelligence does not actually exist?
Compare the first ever chatbots with the current leading chatbots - is the difference that the latter are more intelligent?
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: January 24th, 2025, 11:58 am
by Pattern-chaser
Sy Borg wrote: ↑January 23rd, 2025, 7:14 pm
I think those definitions do not capture the situation. It's said that ants and bees are the most intelligent insects. Do you think any of those definitions apply? If they are not at all intelligent, then what is the difference between them and, say, beetles and fleas? Is it just that they run on more complex natural algorithms? If so, wouldn't that apply to us too, in which case, are we saying that intelligence does not actually exist?
Compare the first ever chatbots with the current leading chatbots - is the difference that the latter are more intelligent?
Doesn't this highlight our problem here? We don't *really* know what "intelligence" is.
As for your final question, I would suggest that the recent chatbots are better at *chatting* than the older ones. As for "intelligence", who knows? It depends what intelligence is...
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: January 24th, 2025, 1:24 pm
by Sy Borg
Pattern-chaser wrote: ↑January 24th, 2025, 11:58 am
Sy Borg wrote: ↑January 23rd, 2025, 7:14 pm
I think those definitions do not capture the situation. It's said that ants and bees are the most intelligent insects. Do you think any of those definitions apply? If they are not at all intelligent, then what is the difference between them and, say, beetles and fleas? Is it just that they run on more complex natural algorithms? If so, wouldn't that apply to us too, in which case, are we saying that intelligence does not actually exist?
Compare the first ever chatbots with the current leading chatbots - is the difference that the latter are more intelligent?
Doesn't this highlight our problem here? We don't *really* know what "intelligence" is.
As for your final question, I would suggest that the recent chatbots are better at *chatting* than the older ones. As for "intelligence", who knows? It depends what intelligence is...
I think we do know what intelligence is. We know it when we encounter it. We are resistant to calling machines intelligent because it's a new phenomenon. We don't want to disappear up the backside of post-modernism to the point where nothing can be said about anything.
New chatbots are better at chatting because they are more intelligent. They only have to chat - they don't have to be able to make you a cup of tea and form political beliefs to be intelligent. They can have a specialised intelligence. Likewise, we don't expect bees and ants to be able to engage in discourse about nuclear physics - but they are still intelligent, certainly more intelligent than beetles and fleas.
Consider the dictionary definition of "The ability to acquire, understand, and use knowledge". The "aha!" that naysayers pounce on is ... "AI does not understand". I think it does. The way AI understands complex sentences, errors and all, and responds appropriately cannot be disregarded. In this context, "understanding" does not require internality, only appropriate processing.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: January 24th, 2025, 2:33 pm
by Steve3007
Sy Borg wrote:According to Sabine, ChatGPT, Grok, Meta's Llama (and, I presume, the CCP's DeepSeek) are frontier AI models that are already so far ahead that it's unlikely that any new models will be able to compete. You'd need to start with a whole new paradigm that was inherently more efficient.
Yes, or do something with AI that those models aren't doing. For example, as I understand it, the use of AI in SETI's Breakthrough Listen Project is in sifting through the vast and continually growing quantity of radio and optical telescope data, looking for patterns that look artificial but not terrestrial. Creating ANNs which aren't necessarily as complex as the cutting-edge ones funded by the big corporations, but which have novel/niche applications, seems like an interesting place to go.
I'm hoping, at some point, to continue working on the use of ANNs in fluid dynamics (neural networks learning how fluids move), because that's what my dissertation was about and it has applications in things like climate science. But there, as with everywhere else, if you search through the literature you'll find loads of other people doing the same thing. Which is a good thing, of course, as it's how progress is made. Just difficult, as an individual, to find a little piece of uncharted territory to explore!
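For what it's worth, the kind of "not necessarily as complex" network mentioned above can be surprisingly small. As a toy illustration only (this is nothing like the actual Breakthrough Listen pipeline - the two features, the data, and every name here are invented for the sketch), a single perceptron can already learn to separate "candidate" signals from background noise when the features are well chosen:

```python
# Illustrative sketch only: a single perceptron separating "candidate"
# signals (label 1) from background noise (label 0) using two invented
# features - say, narrowband power and drift-rate steadiness. Real
# SETI pipelines are far more sophisticated; all values here are made up.

def train_perceptron(data, epochs=50, lr=0.1):
    """data: list of ((f1, f2), label) pairs, with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (f1, f2), label in data:
            pred = 1 if (w1 * f1 + w2 * f2 + b) > 0 else 0
            err = label - pred  # -1, 0, or +1
            # Classic perceptron update: nudge weights toward the error.
            w1 += lr * err * f1
            w2 += lr * err * f2
            b += lr * err
    return (w1, w2, b)

def classify(weights, features):
    w1, w2, b = weights
    f1, f2 = features
    return 1 if (w1 * f1 + w2 * f2 + b) > 0 else 0

# Toy, linearly separable data: strong narrowband power plus steady
# drift marks a candidate; weak, noisy features mark background.
toy = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
weights = train_perceptron(toy)
```

Because the toy data is linearly separable, the perceptron is guaranteed to converge on it; the point is just that niche applications don't always need frontier-scale models.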