
Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 27th, 2024, 12:27 am
by Lagayascienza
Yes, but, unless your a dyed-in-the wool solipsist who believes that the external world and other minds cannot be known and might not exist, you could ask yourself how you know that I'm sentient. We are reasonably certain that other people are sentient based on their behavior and self-reporting. So if I poked a hyper complex, autonomous, self-improving unit with a sharp stick and asked, "Did that hurt?" and if it answered "Yes, too right it did. Please don't do that again!" what reason would I have for not thinking it was sentient?

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 27th, 2024, 12:30 am
by Lagayascienza
*Typo
... unless you're a dyed-in-the-wool solipsist...
Apologies

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 27th, 2024, 2:40 am
by Lagayascienza
Sy Borg wrote: October 26th, 2024, 11:13 pm No #10 in "10 Important Differences Between Brains and Computers" (at Science Blogs) is that AI has no body, but brains do. We see this in the animal kingdom, where sessile organisms never have brains. Meanwhile, most motile organisms do have brains, though some do not, eg. echinoderms, cnidarians, bivalves.

It seems that consciousness requires more than just a brain, it needs jobs that are meaningful in terms of maintaining a body that moves around in the world. At least.
La Gaya Scienza wrote:That may be true. If it is, then, if an artificial brain and sensate body could be built to house that brain, could that system be conscious?
Sy Borg wrote:Maybe. I'm imagining autonomous systems sent off-world, improving systems and gaining experience, and that a threshold (or thresholds) needed for sentience will be broken over time. Thing is, even if we do create hyper complex units on Earth, we come back to the old question of how we'd know if they were actually sentient.
la Gaya Scienza wrote: Yes, but, unless you're a dyed-in-the-wool solipsist who believes that the external world and other minds cannot be known and might not exist, you could ask yourself how you know that I'm sentient. We are reasonably certain that other people are sentient based on their behavior and self-reporting. So if I poked a hyper complex, autonomous, self-improving unit with a sharp stick and asked, "Did that hurt?" and if it answered "Yes, too right it did. Please don't do that again!" what reason would I have for not thinking it was sentient?
I guess I'm trying to get my head around why, once autonomous, mobile, self-improving machines with sensate bodies can be built, we should not think of them as intelligent, sentient and conscious. Does the fact that some people seem averse to this idea stem from biocentrism or anthropocentrism rather than from a fundamental difference?

Robots that can walk are already a reality. Do we say that they are not really walking and only simulating walking? When we play chess with a bot does it think about its next move or does it just simulate thinking? Why call it "simulating" instead of "thinking"? The hardware, architecture and the processes a computer performs when playing chess are all different from those in biological brains, but aren't they analogous, commensurable, and isn't the result the same? If a robot with a sensate body can walk, feel something like pain when poked, play chess and whatever, then why insist that it is not really conscious and only simulating consciousness?

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 27th, 2024, 6:19 am
by Sculptor1
Sy Borg wrote: October 26th, 2024, 6:52 pm
Sculptor1 wrote: October 24th, 2024, 4:49 am
Sy Borg wrote: October 24th, 2024, 4:50 pm
Sculptor1 wrote: October 24th, 2024, 4:49 am

No. I think there is an unavoidable matter of QUALITY rather than degree of difference. A machine "intelligence" will never be of the same type as an organic/biological intelligence.
If you can define "actual intelligence" then we might be able to progress with this debate.
Sure it won't be the same. How could it be? Human intelligence is not the same type as that of a bee either. But there is still intelligence in each.
I disagree.
Since bees and humans have at least evolved from a single source, there are possibly more similarities between bees and humans than between either and any AI. You have immediately missed my point about a difference in quality rather than quantity.
I took your point, and addressed it. A bee’s consciousness is qualitatively different to ours. Over time, in the evolution of consciousness, emergence has occurred (more than once).

We can say nothing about another creature's consciousness. I cannot even say anything about yours. What we can do is compare the biophysical structures that produce intelligence. And that is exactly why I say that bees and humans share more than either of us can share with machines.

Emergence happens over time, and I think it will again when it comes to AI.

Sculptor1 wrote: October 24th, 2024, 4:49 am
Dictionary definition: The ability to acquire, understand, and use knowledge.

Acquiring and using already apply, but not understanding ... at this stage. It helps to consider why the understanding aspect of intelligence evolved and how it could apply to future intelligent(?) machines. I would say that understanding evolved as a means of extrapolating on, and thus extending, existing knowledge. I remember from school having difficulty remembering any concept that I did not understand but, once I understood the principles involved, I never forgot. If rote learning failed me (often) I could work from first principles and recall information, eg. I aced Commerce in year ten with almost no study by simply applying two principles - supply and demand and economies of scale - to each scenario.

This ability to extrapolate on other knowledge, to see analogies, will be useful to AI when it's sent off-world with 3D printers to build habitats and infrastructure. The further the units are from Earth, the less they can rely on human guidance. They will need to be able to anticipate potential issues and then respond to rapidly unfolding novel situations quickly, as there will not always be time to "phone home" for advice.

As with life, every event met/experienced by AI is part of its training. Like life, it starts with programming (our programming is DNA) and its capabilities are shaped by subsequent learning. These will not just be like chatbots of today. Anyone who thinks AI will not progress significantly from today's chatbots is not in touch with reality.
AI does not "learn" as it has no experience.
Slime moulds learn, but they have no brain and scientific orthodoxy asserts that a lack of brain will equal a lack of experience.
Even slime moulds have experience though, which is my point.
You seem so keen on moving the goalposts that it's hard to care about answering you.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 27th, 2024, 6:33 am
by Sy Borg
Lagayascienza wrote: October 27th, 2024, 2:40 am
Sy Borg wrote: October 26th, 2024, 11:13 pm No #10 in "10 Important Differences Between Brains and Computers" (at Science Blogs) is that AI has no body, but brains do. We see this in the animal kingdom, where sessile organisms never have brains. Meanwhile, most motile organisms do have brains, though some do not, eg. echinoderms, cnidarians, bivalves.

It seems that consciousness requires more than just a brain, it needs jobs that are meaningful in terms of maintaining a body that moves around in the world. At least.
La Gaya Scienza wrote:That may be true. If it is, then, if an artificial brain and sensate body could be built to house that brain, could that system be conscious?
Sy Borg wrote:Maybe. I'm imagining autonomous systems sent off-world, improving systems and gaining experience, and that a threshold (or thresholds) needed for sentience will be broken over time. Thing is, even if we do create hyper complex units on Earth, we come back to the old question of how we'd know if they were actually sentient.
la Gaya Scienza wrote: Yes, but, unless you're a dyed-in-the-wool solipsist who believes that the external world and other minds cannot be known and might not exist, you could ask yourself how you know that I'm sentient. We are reasonably certain that other people are sentient based on their behavior and self-reporting. So if I poked a hyper complex, autonomous, self-improving unit with a sharp stick and asked, "Did that hurt?" and if it answered "Yes, too right it did. Please don't do that again!" what reason would I have for not thinking it was sentient?
I guess I'm trying to get my head around why, once autonomous, mobile, self-improving machines with sensate bodies can be built, we should not think of them as intelligent, sentient and conscious. Does the fact that some people seem averse to this idea stem from biocentrism or anthropocentrism rather than from a fundamental difference?

Robots that can walk are already a reality. Do we say that they are not really walking and only simulating walking? When we play chess with a bot does it think about its next move or does it just simulate thinking? Why call it "simulating" instead of "thinking"? The hardware, architecture and the processes a computer performs when playing chess are all different from those in biological brains, but aren't they analogous, commensurable, and isn't the result the same? If a robot with a sensate body can walk, feel something like pain when poked, play chess and whatever, then why insist that it is not really conscious and only simulating consciousness?
On this issue, Large Language Models and speech generators have thrown the cat amongst the pigeons. Even today, people converse on the phone with AI without realising it, no doubt helped by the robotic scripts that call centre operators have to recite. Subjectively, there's not much difference between speaking to an inflexible machine and speaking to a person who is controlled by policies and not empowered to act beyond their brief. Each, if the proverbial hits the fan, will refer you to a supervisor.
The upshot is that the Turing Test has been passed by algorithmic bundles that are clearly not sentient. Naturally, my next step is to ask an AI :)
Understanding Chess Bots: Thinking vs. Simulating
When we engage in a game of chess against a bot, it is important to clarify the nature of the bot’s decision-making process. Chess bots, or chess engines, utilize complex algorithms and computational power to evaluate potential moves and outcomes. However, the terminology used—specifically “thinking” versus “simulating”—can lead to some confusion.
1. The Mechanism of Chess Bots
Chess bots operate primarily through algorithms designed to analyze positions on the chessboard. They evaluate possible moves by calculating variations and assessing their outcomes based on predefined heuristics and evaluation functions. This process involves:
• Position Evaluation: The bot evaluates the current position using a scoring system that considers material balance, piece activity, king safety, pawn structure, and other strategic factors.
• Search Algorithms: Most modern chess engines use search algorithms such as Minimax combined with Alpha-Beta pruning to explore possible future moves. This allows them to simulate various sequences of play by predicting how both players might respond.
• Depth of Search: The depth of search refers to how many moves ahead the bot calculates. Advanced engines can look many moves ahead (often 20 or more), but this is still a simulation based on mathematical calculations rather than conscious thought.
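The search procedure described in these bullet points can be sketched in a few lines. This is a toy illustration of minimax with alpha-beta pruning on a hand-made two-ply game tree, not a real chess engine; the node structure and the leaf scores (standing in for an evaluation function weighing material, king safety, pawn structure, etc.) are invented for the example.

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, pruning branches that
    cannot affect the final decision (alpha-beta pruning)."""
    # Leaf or depth limit reached: apply the evaluation function.
    if depth == 0 or not node.get("children"):
        return node["score"]
    if maximizing:
        value = float("-inf")
        for child in node["children"]:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # the opponent would never allow this line
                break          # so prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in node["children"]:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

# A tiny two-ply tree: each leaf "score" is what an evaluation
# function would return for the resulting position.
tree = {"children": [
    {"children": [{"score": 3}, {"score": 12}]},
    {"children": [{"score": 2}, {"score": 9}]},
]}
best = alphabeta(tree, 2, float("-inf"), float("inf"), True)
```

Here the second branch is cut off after its first leaf (score 2), because the maximizing player already has a guaranteed 3 from the first branch; real engines apply the same cutoff millions of times per second, which is what makes 20-plus-move lookahead feasible.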
2. Why Use “Simulating” Instead of “Thinking”?
The term “simulating” is preferred over “thinking” for several reasons:
• Lack of Consciousness: Unlike humans, chess bots do not possess consciousness or self-awareness. They do not have emotions, intentions, or subjective experiences that characterize human thought processes.
• Algorithmic Processing: The decision-making in bots is purely algorithmic; they follow programmed rules without any understanding or awareness of the game in a human sense. Their “thoughts” are merely computations performed at high speed.
• Predictive Modeling: When a bot simulates potential moves, it does so by modeling outcomes based on statistical probabilities derived from vast databases of past games and theoretical knowledge rather than personal insight or intuition.
3. Conclusion
In summary, while chess bots perform complex evaluations and make decisions that may appear similar to human thinking, they fundamentally operate through simulations driven by algorithms and computational analysis rather than genuine cognitive processes. Thus, referring to their actions as “simulating” rather than “thinking” accurately reflects their operational nature devoid of consciousness.
Probability of Correctness: I believe the probability that this answer is correct is approximately 95%.
When we think of the kind of consciousness that humans value (as opposed to, say, the consciousness of worms) the key element seems to be emotion. Then the question is, how would we know if a machine is experiencing an emotion or simulating it? We can't take the machines' word for it, if they claim to be emotional, because many machines have been programmed to feign emotions.

It may be that our kind of sentience will involve emotions. Emotions are, ultimately biological subroutines, where a range of reflexes are called in particular bundles. So joy, anger, sadness, humour, etc each will have their own effects on hormones (type and amount), heart, circulation, respiration, muscle tension or relaxation, digestion etc. It would be too slow to consciously make those adjustments, when one sometimes needs to respond quickly. In terms of body responses, emotions are like shortcuts, a way of easily and quickly delivering appropriate resources to any given part of the body.

An embodied machine, however, could deliberately make such adjustments in a timely way because they are faster. However, the world/universe has a way of throwing curve balls, and even superhuman machines will find themselves challenged, perhaps by each other (echoing their human makers). Once autonomous embodied machines have to compete, or to make decisions about the efficacy of cooperating or competing in a given scenario, they may need to develop their own subroutines that are equivalent to emotions - shortcuts that speed up the delivery of appropriate resources to any given part of the body.

There's the occasional "if" in all that, though.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 27th, 2024, 6:37 am
by Sy Borg
Sculptor1 wrote: October 27th, 2024, 6:19 am
Sy Borg wrote: October 26th, 2024, 6:52 pm
Sculptor1 wrote: October 24th, 2024, 4:49 am
Sy Borg wrote: October 24th, 2024, 4:50 pm

Sure it won't be the same. How could it be? Human intelligence is not the same type as that of a bee either. But there is still intelligence in each.
I disagree.
Since bees and humans have at least evolved from a single source, there are possibly more similarities between bees and humans than between either and any AI. You have immediately missed my point about a difference in quality rather than quantity.
I took your point, and addressed it. A bee’s consciousness is qualitatively different to ours. Over time, in the evolution of consciousness, emergence has occurred (more than once).

We can say nothing about another creature's consciousness. I cannot even say anything about yours. What we can do is compare the biophysical structures that produce intelligence. And that is exactly why I say that bees and humans share more than either of us can share with machines.

Emergence happens over time, and I think it will again when it comes to AI.

Sculptor1 wrote: October 24th, 2024, 4:49 am
Dictionary definition: The ability to acquire, understand, and use knowledge.

Acquiring and using already apply, but not understanding ... at this stage. It helps to consider why the understanding aspect of intelligence evolved and how it could apply to future intelligent(?) machines. I would say that understanding evolved as a means of extrapolating on, and thus extending, existing knowledge. I remember from school having difficulty remembering any concept that I did not understand but, once I understood the principles involved, I never forgot. If rote learning failed me (often) I could work from first principles and recall information, eg. I aced Commerce in year ten with almost no study by simply applying two principles - supply and demand and economies of scale - to each scenario.

This ability to extrapolate on other knowledge, to see analogies, will be useful to AI when it's sent off-world with 3D printers to build habitats and infrastructure. The further the units are from Earth, the less they can rely on human guidance. They will need to be able to anticipate potential issues and then respond to rapidly unfolding novel situations quickly, as there will not always be time to "phone home" for advice.

As with life, every event met/experienced by AI is part of its training. Like life, it starts with programming (our programming is DNA) and its capabilities are shaped by subsequent learning. These will not just be like chatbots of today. Anyone who thinks AI will not progress significantly from today's chatbots is not in touch with reality.
AI does not "learn" as it has no experience.
Slime moulds learn, but they have no brain and scientific orthodoxy asserts that a lack of brain will equal a lack of experience.
Even slime moulds have experience though, which is my point.
You seem so keen on moving the goalposts that it's hard to care about answering you.
Many would say that slime moulds do not experience their existence because they do not have a brain. I personally think they might, that there are systems in simple organisms that are the equivalent of brains, but that view is unorthodox.

Answer if you like, or not *shrugs*

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 27th, 2024, 7:23 am
by Pattern-chaser
Pattern-chaser wrote: October 25th, 2024, 8:35 am
Sy Borg wrote: October 24th, 2024, 4:50 pm Human intelligence is not the same type as that of a bee either. But there is still intelligence in each.

Dictionary definition: The ability to acquire, understand, and use knowledge.
And yet computers of all kinds, including AI, fall far short of the simple definition you offer. There is no "understanding" whatsoever. Today. The future remains to be seen...
Sy Borg wrote: October 25th, 2024, 6:20 pm Yes, there is no understanding because it lacks memory. It cannot respond to follow-up questions because it's forgotten the last one.

Why would anyone believe that this situation will stay the same forever ... for the next ten years, the next century, the next millennium, for the next hundred thousand years, for the next million years, for the next billion years?

Because they believe that the world will end before any of this can happen. Every culture in history has believed themselves to be near the end.

If civilisation continues for even another hundred years, why would you believe that AI would not progress vastly beyond our imaginings?
Computers don't "lack memory", they have plenty. But they can only use it for what they've been programmed, in advance, to do. I suspect there's a *huge* amount more than memory to create "understanding". Not to mimic understanding, but *actually* to understand, in a roughly similar way to humans, or even (some) other animals.

Every comment I've posted here has mentioned that current AI is what can be done *today*. Of course things will change as we move into the future. In what direction, we don't know.

Today, we haven't got usable definitions good enough to allow programmers to design intelligence, or understanding, into a computer program. We have only tricky mimicry. We will see how that changes, as time goes on...


Sy Borg wrote: October 26th, 2024, 6:52 pm Emergence happens over time, and I think it will again when it comes to AI.
Emergence can only occur, I think, when the relevant skill, or the potential for it, already exists. Do AIs have the potential for intelligence or understanding? Not today, no. Tomorrow remains to be seen.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 27th, 2024, 9:43 am
by Sculptor1
Sy Borg wrote: October 27th, 2024, 6:37 am Many would say that slime moulds do not experience their existence because they do not have a brain. I personally think they might, that there are systems in simple organisms that are the equivalent of brains, but that view is unorthodox.

Answer if you like, or not *shrugs*
Even a slime mould has "skin in the game". Whatever they are doing is the result of billions of years of evolution with the imperative to survive.
Machine AI does not feel.
The evolution which led to more sophisticated forms of intelligence is driven and moderated by feelings. Feelings cannot be simulated in a machine.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 27th, 2024, 4:38 pm
by Sy Borg
Pattern-chaser wrote: October 27th, 2024, 7:23 am
Pattern-chaser wrote: October 25th, 2024, 8:35 am
Sy Borg wrote: October 24th, 2024, 4:50 pm Human intelligence is not the same type as that of a bee either. But there is still intelligence in each.

Dictionary definition: The ability to acquire, understand, and use knowledge.
And yet computers of all kinds, including AI, fall far short of the simple definition you offer. There is no "understanding" whatsoever. Today. The future remains to be seen...
Sy Borg wrote: October 25th, 2024, 6:20 pm Yes, there is no understanding because it lacks memory. It cannot respond to follow-up questions because it's forgotten the last one.

Why would anyone believe that this situation will stay the same forever ... for the next ten years, the next century, the next millennium, for the next hundred thousand years, for the next million years, for the next billion years?

Because they believe that the world will end before any of this can happen. Every culture in history has believed themselves to be near the end.

If civilisation continues for even another hundred years, why would you believe that AI would not progress vastly beyond our imaginings?
Computers don't "lack memory", they have plenty. But they can only use it for what they've been programmed, in advance, to do. I suspect there's a *huge* amount more than memory to create "understanding". Not to mimic understanding, but *actually* to understand, in a roughly similar way to humans, or even (some) other animals.

Every comment I've posted here has mentioned that current AI is what can be done *today*. Of course things will change as we move into the future. In what direction, we don't know.

Today, we haven't got usable definitions good enough to allow programmers to design intelligence, or understanding, into a computer program. We have only tricky mimicry. We will see how that changes, as time goes on...
LLMs don't have memory in the sense that you cannot ask them follow-up questions; each question starts from scratch. As you suggest, they could have that memory, but it would be expensive and require insane amounts of storage, and AI is already an energy-hungry beast (as are brains).
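For what it's worth, the standard workaround for this statelessness is to resend the entire conversation so far with every new question, so the model "remembers" only what it is handed each turn; the ever-growing context is part of why that memory is costly. A minimal sketch of the pattern, where `generate_reply` is a hypothetical stand-in for an actual model call:

```python
def generate_reply(history):
    # Hypothetical model: it just reports how much context it was
    # given, to make the growth of the resent history visible.
    return f"(reply given {len(history)} prior messages)"

class ChatSession:
    """Accumulates messages so a stateless model sees the full
    conversation with every follow-up question."""
    def __init__(self):
        self.history = []

    def ask(self, question):
        self.history.append({"role": "user", "content": question})
        reply = generate_reply(self.history)  # whole history resent
        self.history.append({"role": "assistant", "content": reply})
        return reply

session = ChatSession()
session.ask("What is a slime mould?")
follow_up = session.ask("Does it have a brain?")  # sees the first exchange too
```

Each turn the history grows, so the second question is answered with the first exchange in view; the "memory" lives in the caller, not the model.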

Things will not only change, but change profoundly. That is clear.

Pattern-chaser wrote: October 27th, 2024, 7:23 am
Sy Borg wrote: October 26th, 2024, 6:52 pm Emergence happens over time, and I think it will again when it comes to AI.
Emergence can only occur, I think, when the relevant skill, or the potential for it, already exists. Do AIs have the potential for intelligence or understanding? Not today, no. Tomorrow remains to be seen.
How does your claim about emergence stack up with the emergence of biology?

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 27th, 2024, 4:39 pm
by Sy Borg
Sculptor1 wrote: October 27th, 2024, 9:43 am
Sy Borg wrote: October 27th, 2024, 6:37 am Many would say that slime moulds do not experience their existence because they do not have a brain. I personally think they might, that there are systems in simple organisms that are the equivalent of brains, but that view is unorthodox.

Answer if you like, or not *shrugs*
Even a slime mould has "skin in the game". Whatever they are doing is the result of billions of years of evolution with the imperative to survive.
Machine AI does not feel.
The evolution which led to more sophisticated forms of intelligence is driven and moderated by feelings. Feelings cannot be simulated in a machine.
See my above post about emotions.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 28th, 2024, 2:30 am
by Lagayascienza
Sculptor1 wrote: October 27th, 2024, 9:43 am
Sy Borg wrote: October 27th, 2024, 6:37 am Many would say that slime moulds do not experience their existence because they do not have a brain. I personally think they might, that there are systems in simple organisms that are the equivalent of brains, but that view is unorthodox.

Answer if you like, or not *shrugs*
Even a slime mould has "skin in the game". Whatever they are doing is the result of billions of years of evolution with the imperative to survive.
Machine AI does not feel.
The evolution which led to more sophisticated forms of intelligence is driven and moderated by feelings. Feelings cannot be simulated in a machine.
We know that mind can emerge from inanimate matter. It happened on Earth through chemical evolution, then biogenesis from lifeless chemicals, then the evolution of creatures with nervous systems, brains and, finally, minds. All minds, from the simplest to the most complex, like those of primates such as us, cetaceans and other animals, ultimately emerged from inanimate matter. So why could we not eventually take inanimate, inorganic matter, and with this matter, build a body, nervous system and brain which could perform processes analogous to those performed in our organic body-brains which produce consciousness?

You say that feelings cannot be simulated in a machine. But who said anything about simulation? What if we are not talking about simulation but about the production of actual consciousness in systems analogous to organic body-brains? Obviously, this is at present just science fiction, but I don't see why it is impossible in principle. Nature did it over billions of years starting from inanimate matter and without the help of any goal-directed mind. With our conscious minds and intelligence already in place to help us, how much more quickly than mindless, goalless evolution could pre-existing, powerful, goal-oriented minds and intelligence recreate something architecturally and functionally analogous to conscious organic beings?

To say that feelings cannot be "simulated" in a machine is beside the point. We are machines, biological machines, and we have real feelings and not simulated feelings. Many things which were once not possible for us are now possible. In the long term, perhaps the only thing that could stop us is self-destruction or a cosmic cataclysm like a massive asteroid strike. It is too soon to say that we cannot produce thinking, feeling machines because, if we survive and science progresses indefinitely, there seems to be no reason, in principle, why inorganic, machine-based sentience, intelligence and feeling cannot be achieved in other, non-organic types of machines. Again, we know consciousness and feeling emerge from assemblies of non-biological matter because it has already happened on Earth. It's just that it took billions of years of blind, goalless evolution.

But even if such a goal were achieved, there may be those who will still say that we cannot know these entities are conscious, feeling entities. But how do we know that any entities, including other people, are feeling, conscious entities? Unless we are dyed-in-the-wool solipsists who think that only we ourselves can be known conclusively to be conscious, we look to behavior and self-reporting. And that is how we'd know whether other, non-biologically produced machines were conscious, feeling entities.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 28th, 2024, 7:45 am
by Sculptor1
Lagayascienza wrote: October 28th, 2024, 2:30 am
Sculptor1 wrote: October 27th, 2024, 9:43 am
Sy Borg wrote: October 27th, 2024, 6:37 am Many would say that slime moulds do not experience their existence because they do not have a brain. I personally think they might, that there are systems in simple organisms that are the equivalent of brains, but that view is unorthodox.

Answer if you like, or not *shrugs*
Even a slime mould has "skin in the game". Whatever they are doing is the result of billions of years of evolution with the imperative to survive.
Machine AI does not feel.
The evolution which led to more sophisticated forms of intelligence is driven and moderated by feelings. Feelings cannot be simulated in a machine.
We know that mind can emerge from inanimate matter.
It happened on Earth through chemical evolution, then biogenesis from lifeless chemicals, then the evolution of creatures with nervous systems, brains and, finally, minds. All minds, from the simplest to the most complex like those of primates like us, cetaceans and other animals, all ultimately emerged from inanimate matter. So why could we not eventually take inanimate, inorganic matter, and with this matter, build a body, nervous system and brain which could perform processes analogous to those performed in our organic body-brains which produce consciousness?

You say that feelings cannot be simulated in a machine.
No. I did not say they cannot be simulated. I said machines do not feel. There is a mighty difference.
But who said anything about simulation?
not me.
What if we are not talking about simulation but about the production of actual consciousness in systems analogous to organic body-brains? Obviously, this is at present just science fiction, but I don't see why it is impossible in principle. Nature did it over billions of years starting from inanimate matter and without the help of any goal-directed mind. With our conscious minds and intelligence already in place to help us, how much more quickly than mindless, goalless evolution could pre-existing, powerful, goal-oriented minds and intelligence recreate something architecturally and functionally analogous to conscious organic beings?
Well. Oh um. I read lots of sci-fi too. Starships traverse the Galaxy at unimaginable speeds, transgressing the most basic and complex physical laws. In reality we have managed to slowly reach out to our immediate neighbourhood, and have put people on the Moon. But there is absolutely zero reason to suspect that FTL drives will ever be possible. None. No matter how fast our horse and cart can go, no matter how shiny the wheels and how well bred the horse, it will never traverse the ocean. So speculate all you want - there is really nothing here.

To say that feelings cannot be "simulated" in a machine is beside the point. We are machines, biological machines, and we have real feelings and not simulated feelings. Many things which were once not possible for us are now possible. In the long term, perhaps the only thing that could stop us is self-destruction or a cosmic cataclysm like a massive asteroid strike. It is too soon to say that we cannot produce thinking, feeling machines because, if we survive and science progresses indefinitely, there seems to be no reason, in principle, why inorganic, machine-based sentience, intelligence and feeling cannot be achieved in other, non-organic types of machines. Again, we know consciousness and feeling emerge from assemblies of non-biological matter because it has already happened on Earth. It's just that it took billions of years of blind, goalless evolution.
Feelings are not simply yet another aspect of our cognition; they are the basis of humanity. You have to have a reason, an impulse, to have a goal, and it is feelings that provide it. Even a serial killer takes pleasure in his purpose. AI is purposeless. It has no direction. All AI is a tool, a language processor with no interest and no purpose. It has the same kind of purpose that a hammer has: no desire to bang in a nail or open a tin of paint. If you can offer an argument for machine intelligence in ChatGPT, then the same argument applies to a screwdriver or a production line. Surely the production line has a purpose, to create a car (for example), and Aristotle will offer you a definition of that telos. But we are not talking about the same thing here.
Biological bags of meat operate wholly differently. There is more complexity in the human brain than AI will ever be capable of achieving.

But even if such a goal were achieved, there may be those who will still say that we cannot know these entities are conscious, feeling entities. But how do we know that any entities, including other people, are feeling, conscious entities? Unless we are dyed-in-the-wool solipsists who think that only we ourselves can be known conclusively to be conscious, we look to behavior and self-reporting. And that is how we'd know whether other, non-biologically produced machines were conscious, feeling entities.
There is zero basis for thinking that a machine is conscious. A bacterium has more credit on that score.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 28th, 2024, 9:53 am
by The Beast
If A is a variable and B is a variable, and A and B are independent, then A is not B, which also means B is not A.
Natural is not artificial. So, natural intelligence is not artificial intelligence. One possible outcome is that artificial intelligence is an extension of natural intelligence and depends on the evolution of natural intelligence.
Another outcome is to consider intelligence as one, with natural and artificial as properties of Intelligence. This scenario would allow a hammer to be intelligent, owing to the intrinsic capability of matter to be intelligent. However, A is not B also implies individuality. The latter would allow one hammer (an independent object) to be less intelligent than another hammer, or than some other object, if evaluated by standards of intelligence.
Although digital logic means 1's and 0's while analog resembles a potentiometer, the two are similar in form: if a human registers the temperature, then a coat; if a thermostat's potentiometer registers the temperature, then ignition on to the boiler… Similar.
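The human/thermostat analogy above can be made concrete as two decision rules of the same shape. A minimal illustrative sketch in Python (the threshold values are invented for the example, not taken from the post):

```python
# Human-style rule: read the temperature, decide whether to put on a coat.
def human_rule(temp_c: float) -> str:
    return "coat" if temp_c < 10.0 else "no coat"

# Thermostat-style rule: compare temperature to a setpoint (like a
# potentiometer setting) and switch the boiler accordingly.
def thermostat_rule(temp_c: float, setpoint_c: float = 18.0) -> str:
    return "boiler on" if temp_c < setpoint_c else "boiler off"

print(human_rule(5.0))       # coat
print(thermostat_rule(5.0))  # boiler on
```

Both rules map an input to an action via a threshold; whether that shared shape amounts to shared "intelligence" is exactly the question the post raises.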

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 28th, 2024, 10:29 am
by Count Lucanor
Lagayascienza wrote: October 25th, 2024, 11:59 pm
I've often wondered how biological brains like ours manage to perform a simple arithmetic operation, and how non-biological computers perform the same operation.

Certainly, the outcome of the operation in both cases is the same - if they add one plus one, they both get two. As far as I know, the physical processes involved in performing the operation in a biological computer and in a non-biological computer are analogous. Both must involve logical operators and a sequence of logic gates. The only difference I can see is that biological computers work electro-chemically in a biological substrate, whereas non-biological computers work electrically in a non-biological substrate.
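For what it's worth, the gate-level picture of addition in the quoted passage can be illustrated concretely. A minimal sketch in Python of a half-adder, the standard circuit built from an XOR and an AND gate (this is how digital computers add, not a claim about how brains do it):

```python
# A half-adder built only from Boolean gates:
# the sum bit is XOR of the inputs, the carry bit is AND.
def half_adder(a: int, b: int) -> tuple[int, int]:
    s = a ^ b        # XOR gate: 1 when exactly one input is 1
    carry = a & b    # AND gate: 1 when both inputs are 1
    return carry, s

# "One plus one": inputs 1 and 1 give carry=1, sum=0,
# i.e. binary 10, which is 2.
carry, s = half_adder(1, 1)
print(carry, s)  # 1 0
```

Chaining such adders (a ripple-carry adder) extends this to multi-bit numbers; whether anything in a neuron's input-output relationship genuinely corresponds to these gates is the point under dispute in the replies below.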
I see that you start here from the assumption that the computational theory of mind is true and that the brain is indeed a biological computer. I could speculate that you’ve been influenced by the most famous theory of cognitivism, which asserts exactly that. But that assumption is questionable. While computers perform syntactical operations, there’s nothing indicating that brains physically work that way, nor that semiosis (the so-called “language of thought”) emerges over syntactical operations as its fundamental base. When learning to do mathematics, we actually first understand the relations, what it means to divide, subtract, multiply, etc., and then we use a syntax humans have devised to help perform the operations. We can then translate that syntax to a programming language and get it running in a computer.

The problem boils down to whether the mind is or is not a digital device where the brain organ is the hardware and its physical operations the running software. As claimed by the computational theory, what counts is not the physical mechanism (given enough time, you can pull it off with levers, cranks, etc., not to mention the existence of electronic analogue computers), but the implementation of the program, the operation itself of processing symbols, the syntactic structure that results from the logical steps that constitute the algorithm. Furthermore, the cognitivists believe that ultimately everything is somehow a computer implementing an algorithm, a program, so plants, for example, are in some way implementing the photosynthesis program, rocks the crystallization program, Earth the biosphere program, and so on. There are several problems with this view, including the implicit agency when interpreting the syntax (the homunculus fallacy) and when “designing” and implementing the natural program. They will end up saying things such as “Earth made the Italian opera”. Well, sure, but that has no explanatory power. One cannot help but find echoes of Schopenhauer’s “will” doing its job as the essential force behind all things. Idealism at its best.

Lagayascienza wrote: October 25th, 2024, 11:59 pm
"A [biological] neuron can rapidly combine and transform the information it receives through its synaptic inputs before the information is converted into neuronal output. This transformation can be defined by the neuronal input–output (I–O) relationship. Changes in the I–O relationship can correspond to distinct arithmetic operations." Nature Reviews Neuroscience

Doesn't an analogous process occur in a non-biological computer? Isn't it just that a biological computer of the size and complexity of a human brain can reflect on what it is doing and why it is doing it? But this does not mean the human brain is not a computer - a biological computer. If a non-biological computer of complexity similar to the biological computer that is the human brain could be built, and if it were housed in a sensate artificial body and could speak and do anything a human brain and body could do, then what would stop us from thinking that the artificial entity is conscious? Biocentrism or anthropocentrism?
This is what I just addressed in the previous answer. There’s nothing indicating that the processes are analogous. It isn't the case that, just because a physical mechanism can be described syntactically, it actually works that way physically. There are lots of “ifs” in your statement, but given that those conditions have not been met, there’s no good reason to assert that they will be met; the best you can do is make guesses.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 28th, 2024, 10:44 am
by Count Lucanor
Lagayascienza wrote: October 26th, 2024, 1:23 am None of the above is to say that there are not important architectural and processing differences between biological computers and non-biological computers. For a good article and commentary about these differences see "10 Important Differences Between Brains and Computers" at Science Blogs.

There definitely are some important differences in size, complexity and processing but, as one commentator said, none of those differences proves that computers capable of housing sentience cannot eventually be built. We are certainly nowhere near being able to build computers with brain-like complexity, housed in a sensate body, which could do everything a human could do. But the differences in our current, comparatively simple, non-biological computers do not demonstrate that it is impossible to eventually construct sentient, intelligent computers.
The expression of a common fallacy: “if something has not been proven to be false, then there’s a hint that it is true”. OTOH, if something has not been proven to be true, then it has not been proven to be true. And if something has been proven to be false, then it is false. To my understanding, it has been proven that the statement “AI is intelligent” is false. Also, “the mind is a digital computer” is false.