Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 19th, 2024, 6:18 am
by Pattern-chaser
Sy Borg wrote: ↑October 17th, 2024, 4:10 pm
At some point, AI will be capable of creating better AI than humans can, and to train it more effectively. This is certain*.
At some point, AI will be capable of self-replication. This is certain*.
Pattern-chaser wrote: ↑October 18th, 2024, 7:45 am
Not "certain", I suggest, but merely possible? For example, AIs have not yet been programmed to have the capability of replication. That they might be so programmed in the future makes your speculation possible, but not inevitable?
Sy Borg wrote: ↑October 18th, 2024, 5:03 pm
AIs are already programming better than a percentage of programmers.
1. In early 2023, OpenAI’s ChatGPT passed Google’s exam for high-level software developers.
2. Later in 2023, GitHub reported that 46% of code across all programming languages is built using Copilot, the company’s AI-powered developer tool.
3. Finally, DeepMind’s AlphaCode in its debut outperformed human programmers. When pitted against over 5,000 human participants, the AI beat 45% of expert programmers.
https://peterhdiamandis.medium.com/will ... 79a8ac4279
Do you think that AI will stop progressing, even though its progress has so far been exponential?
This is a bit like saying "Look, I can walk, and I can dress myself, so I can plan and (successfully) implement an assault on the Eiger's North Face". Without derailing into software design — the entire field — take it from me that there is a great deal more to software design than coding. The challenges you mention are difficult for many humans, but the ability to overcome these challenges is necessary but not sufficient to become a competent software designer. There's a lot more to it than that.
The article you quote was 'familiar' to software designers, even before AI was invented. Every 5–10 years, someone comes up with a scheme to de-skill "programming" or "coding", so that "anyone can do it". Experienced software designers have never taken much notice, because those who make the claims have no idea of the demands software design places upon the designer, or the skills that are needed.
The skills you mention, that AI can already achieve, represent a useful aid to a human designer. They take up some of the drudgery, as they do in other areas of human life too. This is valuable and useful.
Please note that the author of the article you quote is not a software designer, he's an 'entrepreneur'. Someone whose profession teaches you to make claims for products you have not yet built, or even planned. "Fake it 'til you make it". Even if you don't have the skills to create such a product. [Which probably also means that you don't understand the skills needed to realise your imagined future products.]
As an example, imagine that AI is asked to come up with a way to stop bullying. That is an area of expertise that you have mastered. Do you think AI can or could achieve that aim? Achieve it in a way that is at least equal, and maybe better, than a human expert like you could do?
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 19th, 2024, 12:28 pm
by The Beast
The big change in AI is not the software decision-making capabilities. It is in the source materials. In examining the concept of a neural network, the field is wide open, from ceramics to chemical reactions changing the idea of hardware. But IMO a Graph neural network might use what is already there. Hypothesizing the intelligence of a mosquito: it is time, IMO, to put aside the use of heat-seeking missiles for something more constructive, like the actual mosquito. Instead of blood-sucking, it could be an interchange of data from the output of the mosquito brain, that is, the edges of the neural network associated with the Graph neural network, corresponding to a simulated blood-sucking (data interchange). Obviously, the alteration of the mosquito's purpose (what is the purpose of a mosquito?) will necessitate a model to predict the consequences. So, IMO, programming the mosquito brain may necessitate the implant of mosquito brain cells programmed into the Graph neural network. It is a duality, since the mosquito will suck blood anyway but will feel the need to access the nodes (everywhere), like accommodating light poles to the neural network. Since this hypothesis is not a real example (to my knowledge), I will not venture into how the implant will affect the naturally replicated mosquitoes. The actual data collected by the edges might be used in the decision-making Graph neural network that, unlike the mosquito, might have a well-defined (corporate-secret) purpose. In its evolution from a simple counter to a DNA sampler, decades or centuries might come and go.
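For anyone curious about the mechanism being invoked, a graph neural network works by passing messages along the edges of a graph, with each node updating its state from its neighbours' signals. Here is a minimal sketch of a single message-passing round; the toy graph, the feature sizes and the untrained weights are all illustrative assumptions, not the mosquito system imagined above:

Code: Select all

import numpy as np

# Toy graph: 4 nodes, directed edges as (source, target) pairs.
# Everything here is hypothetical; it only shows the mechanism.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
features = np.random.rand(4, 8)  # one 8-dim state vector per node
W = np.random.rand(8, 8)         # weight matrix (untrained)

def message_passing_round(features, edges, W):
    """Each node sums the states of its in-neighbours (the 'messages'
    travelling along the edges), then applies a learned linear map
    and a nonlinearity to produce its updated state."""
    aggregated = np.zeros_like(features)
    for src, dst in edges:
        aggregated[dst] += features[src]  # message flows along the edge
    return np.tanh(aggregated @ W)        # node update

updated = message_passing_round(features, edges, W)
print(updated.shape)  # (4, 8): one updated state vector per node

Stacking several such rounds, with weights learned from data, is all a "decision-making Graph neural network" amounts to at bottom.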
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 19th, 2024, 2:59 pm
by Count Lucanor
Pattern-chaser wrote: ↑April 4th, 2024, 8:55 am
Yes! "Intelligence" has always been a misnomer, when applied to AI, and the like. Any intelligence they might display is a direct consequence of the intelligence of their design and programming, nothing more. It's the programmers who have the intelligence, not the machine.
AI, in its current state, is super-Google. As you describe.
Yes, that’s exactly right. The Musks, the Altmans and the rest of the tech lords would like you to believe their exaggerated claims about AGI (artificial general intelligence) and the alleged risk of the Singularity, because the hype increases the shareholder value of their companies. It’s all about the profits. That’s why OpenAI (ChatGPT) is close to claiming it has “finally achieved” AGI: that would allow it to break its contract with Microsoft.
For all those who have not been mentally hijacked by the AI hype, you might want to subscribe to Gary Marcus’ substack. He’s a prominent figure in the AI world who likes to keep the tech lords in check. There’s a very recent article from him in Fortune titled How Elon Musk, Sam Altman, and the Silicon Valley elite manipulate the public; you don’t want to miss that one.
Meanwhile, I’m excited about the new entry in my book club, a book penned by two computer scientists, titled AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. The book presentation looks promising:
Authors: Arvind Narayanan, Sayash Kapoor
Pages: 357
From two of TIME's 100 Most Influential People in AI, what you need to know about AI—and how to defend yourself against bogus AI claims and products
Confused about AI and worried about what it means for your future and the future of the world? You're not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works and why it often doesn't, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don't work, and probably never will.
While acknowledging the potential of some AI, such as ChatGPT, AI Snake Oil uncovers rampant misleading claims about the capabilities of AI and describes the serious harms AI is already causing in how it's being built,...
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 19th, 2024, 3:25 pm
by Mo_reese
Lagayscienza wrote: ↑October 17th, 2024, 12:29 am
If/when autonomous SRSIMs become a reality, and are out “in the wild” and far enough away from us, we may lose control of them and be unable to foresee their continued “evolution”. Their “intelligence” will be different to ours, and it could develop much more quickly than ours did because they would not be hobbled by the slowness of biological evolution by natural selection and the physical limitations of biologically housed intelligence. The worry is that centuries or millennia from now, they may come back to bite us as much more powerful entities than they were when we first sent them out. Whether we’ll want to call them “life-forms” is purely academic. They'll be doing a lot of things that life does.
As SRSIMs develop, won't they require more and more energy? Where will they obtain massive amounts of energy?
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 19th, 2024, 3:44 pm
by Sy Borg
Pattern-chaser wrote: ↑October 19th, 2024, 6:18 am
Sy Borg wrote: ↑October 17th, 2024, 4:10 pm
At some point, AI will be capable of creating better AI than humans can, and to train it more effectively. This is certain*.
At some point, AI will be capable of self-replication. This is certain*.
Pattern-chaser wrote: ↑October 18th, 2024, 7:45 am
Not "certain", I suggest, but merely possible? For example, AIs have not yet been programmed to have the capability of replication. That they might be so programmed in the future makes your speculation possible, but not inevitable?
Sy Borg wrote: ↑October 18th, 2024, 5:03 pm
AIs are already programming better than a percentage of programmers.
1. In early 2023, OpenAI’s ChatGPT passed Google’s exam for high-level software developers.
2. Later in 2023, GitHub reported that 46% of code across all programming languages is built using Copilot, the company’s AI-powered developer tool.
3. Finally, DeepMind’s AlphaCode in its debut outperformed human programmers. When pitted against over 5,000 human participants, the AI beat 45% of expert programmers.
https://peterhdiamandis.medium.com/will ... 79a8ac4279
Do you think that AI will stop progressing, even though its progress has so far been exponential?
This is a bit like saying "Look, I can walk, and I can dress myself, so I can plan and (successfully) implement an assault on the Eiger's North Face". Without derailing into software design — the entire field — take it from me that there is a great deal more to software design than coding. The challenges you mention are difficult for many humans, but the ability to overcome these challenges is necessary but not sufficient to become a competent software designer. There's a lot more to it than that.
The article you quote was 'familiar' to software designers, even before AI was invented. Every 5–10 years, someone comes up with a scheme to de-skill "programming" or "coding", so that "anyone can do it". Experienced software designers have never taken much notice, because those who make the claims have no idea of the demands software design places upon the designer, or the skills that are needed.
The skills you mention, that AI can already achieve, represent a useful aid to a human designer. They take up some of the drudgery, as they do in other areas of human life too. This is valuable and useful.
Please note that the author of the article you quote is not a software designer, he's an 'entrepreneur'. Someone whose profession teaches you to make claims for products you have not yet built, or even planned. "Fake it 'til you make it". Even if you don't have the skills to create such a product. [Which probably also means that you don't understand the skills needed to realise your imagined future products.]
As an example, imagine that AI is asked to come up with a way to stop bullying. That is an area of expertise that you have mastered. Do you think AI can or could achieve that aim? Achieve it in a way that is at least equal, and maybe better, than a human expert like you could do?
I never mastered the prevention of bullying. I mastered the understanding of how bullying fell through the cracks of the legal system.
My job was replaced by AI about twelve years ago. In that time, I did not receive a single call from work, pleading for help.
Remember, what AI can do today is tiny compared with what it will be able to do in one year's time, let alone ten, a hundred, a thousand or ten thousand years' time.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 19th, 2024, 3:55 pm
by Sy Borg
Mo_reese wrote: ↑October 19th, 2024, 3:25 pm
Lagayscienza wrote: ↑October 17th, 2024, 12:29 am
If/when autonomous SRSIMs become a reality, and are out “in the wild” and far enough away from us, we may lose control of them and be unable to foresee their continued “evolution”. Their “intelligence” will be different to ours, and it could develop much more quickly than ours did because they would not be hobbled by the slowness of biological evolution by natural selection and the physical limitations of biologically housed intelligence. The worry is that centuries or millennia from now, they may come back to bite us as much more powerful entities than they were when we first sent them out. Whether we’ll want to call them “life-forms” is purely academic. They'll be doing a lot of things that life does.
As SRSIMs develop, won't they require more and more energy? Where will they obtain massive amounts of energy?
There won't be any anti-nuclear energy activists in space, so that will help. Energy supplies will depend on what's available on other worlds.
I don't worry about them coming back to Earth. By that time, humans may already be extinct. I am not anticipating a quick jump into sentience, despite misconceptions about this spread by another poster.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 19th, 2024, 6:56 pm
by Lagayascienza
Energy won't be a problem for SRSIMs. Like Earth, the trillions of planets in our galaxy have suns for solar energy. On Mars, rovers have used solar energy to get around. And there are other sources such as wind, tidal, geothermal, chemical and nuclear energy. They could construct Dyson spheres around low-mass stars that would provide enormous amounts of energy for billions of years. The journey out into the galaxy could take millions of years, and chances are that we will be extinct before they have time to return to Earth to cause us any problems. And if we are not extinct, scientific progress on Earth may mean we will have the means to protect ourselves from them if they were a threat.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 20th, 2024, 12:30 am
by Sy Borg
Beautiful quote from Samuel Butler in his 1872 novel, Erewhon. Lagaya, you might enjoy this:
‘There was a time, when the Earth was to all appearance utterly destitute both of animal and vegetable life, and when according to the opinion of our best philosophers it was simply a hot round ball with a crust gradually cooling.
Now if a human being had existed while the Earth was in this state and had been allowed to see it as though it were some other world with which he had no concern, and if at the same time he were entirely ignorant of all physical science, would he not have pronounced it impossible that creatures possessed of anything like consciousness should be evolved from the seeming cinder which he was beholding? Would he not have denied that it contained any potentiality of consciousness? Yet in the course of time consciousness came. Is it not possible then that there may be even yet new channels dug out for consciousness, though we can detect no signs of them at present?
‘Again. Consciousness, in anything like the present acceptation of the term, having been once a new thing – a thing, as far as we can see, subsequent even to an individual centre of action and to a reproductive system (which we see existing in plants without apparent consciousness) – why may not there arise some new phase of mind which shall be as different from all present known phases, as the mind of animals is from that of vegetables?
‘It would be absurd to attempt to define such a mental state (or whatever it may be called), inasmuch as it must be something so foreign to man that his experience can give him no help towards conceiving its nature; but surely when we reflect upon the manifold phases of life and consciousness which have been evolved already, it would be rash to say that no others can be developed, and that animal life is the end of all things. There was a time when fire was the end of all things: another when rocks and water were so.’
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 20th, 2024, 1:41 am
by Lagayascienza
Sy Borg wrote: ↑October 20th, 2024, 12:30 am
Beautiful quote from Samuel Butler in his 1872 novel, Erewhon. Lagaya, you might enjoy this:
‘There was a time, when the Earth was to all appearance utterly destitute both of animal and vegetable life, and when according to the opinion of our best philosophers it was simply a hot round ball with a crust gradually cooling.
Now if a human being had existed while the Earth was in this state and had been allowed to see it as though it were some other world with which he had no concern, and if at the same time he were entirely ignorant of all physical science, would he not have pronounced it impossible that creatures possessed of anything like consciousness should be evolved from the seeming cinder which he was beholding? Would he not have denied that it contained any potentiality of consciousness? Yet in the course of time consciousness came. Is it not possible then that there may be even yet new channels dug out for consciousness, though we can detect no signs of them at present?
‘Again. Consciousness, in anything like the present acceptation of the term, having been once a new thing – a thing, as far as we can see, subsequent even to an individual centre of action and to a reproductive system (which we see existing in plants without apparent consciousness) – why may not there arise some new phase of mind which shall be as different from all present known phases, as the mind of animals is from that of vegetables?
‘It would be absurd to attempt to define such a mental state (or whatever it may be called), inasmuch as it must be something so foreign to man that his experience can give him no help towards conceiving its nature; but surely when we reflect upon the manifold phases of life and consciousness which have been evolved already, it would be rash to say that no others can be developed, and that animal life is the end of all things. There was a time when fire was the end of all things: another when rocks and water were so.’
Yes, I hadn't seen that before. Thanks for posting it. Amazing that it was written so long ago, when much of what we now know was still unknown. At present, we only have a sample size of 1 but that does not mean Earth is the only speck in the universe on which sentience and intelligence have developed. And it does not mean there are not different ways in which different types of intelligence might evolve - types which we currently have not even imagined. And, for all we know, ETSRSIMs might already be out there developing among the trillions of planets in our galaxy alone.
We once thought that Earth was the center of the universe and that the planets were just other stars that circled Earth. Then we realized that they were planets that circled the sun just like Earth. Then we realized that other stars were actually suns just like ours. Then we started discovering planets around other stars - many of them in zones where something like life as we know it could have evolved. And, possibly, where life as we cannot even imagine it might have developed. I think a biocentric view of life, sentience and intelligence will eventually go the way of a geocentric view of the universe. The notion that we are the be-all and end-all at the center of everything belongs in the past.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 20th, 2024, 2:16 am
by Sy Borg
Lagayscienza wrote: ↑October 20th, 2024, 1:41 am
Sy Borg wrote: ↑October 20th, 2024, 12:30 am
Beautiful quote from Samuel Butler in his 1872 novel, Erewhon. Lagaya, you might enjoy this:
‘There was a time, when the Earth was to all appearance utterly destitute both of animal and vegetable life, and when according to the opinion of our best philosophers it was simply a hot round ball with a crust gradually cooling.
Now if a human being had existed while the Earth was in this state and had been allowed to see it as though it were some other world with which he had no concern, and if at the same time he were entirely ignorant of all physical science, would he not have pronounced it impossible that creatures possessed of anything like consciousness should be evolved from the seeming cinder which he was beholding? Would he not have denied that it contained any potentiality of consciousness? Yet in the course of time consciousness came. Is it not possible then that there may be even yet new channels dug out for consciousness, though we can detect no signs of them at present?
‘Again. Consciousness, in anything like the present acceptation of the term, having been once a new thing – a thing, as far as we can see, subsequent even to an individual centre of action and to a reproductive system (which we see existing in plants without apparent consciousness) – why may not there arise some new phase of mind which shall be as different from all present known phases, as the mind of animals is from that of vegetables?
‘It would be absurd to attempt to define such a mental state (or whatever it may be called), inasmuch as it must be something so foreign to man that his experience can give him no help towards conceiving its nature; but surely when we reflect upon the manifold phases of life and consciousness which have been evolved already, it would be rash to say that no others can be developed, and that animal life is the end of all things. There was a time when fire was the end of all things: another when rocks and water were so.’
Yes, I hadn't seen that before. Thanks for posting it. Amazing that it was written so long ago, when much of what we now know was still unknown. At present, we only have a sample size of 1 but that does not mean Earth is the only speck in the universe on which sentience and intelligence have developed. And it does not mean there are not different ways in which different types of intelligence might evolve - types which we currently have not even imagined. And, for all we know, ETSRSIMs might already be out there developing among the trillions of planets in our galaxy alone.
We once thought that Earth was the center of the universe and that the planets were just other stars that circled Earth. Then we realized that they were planets that circled the sun just like Earth. Then we realized that other stars were actually suns just like ours. Then we started discovering planets around other stars - many of them in zones where something like life as we know it could have evolved. And, possibly, where life as we cannot even imagine it might have developed. I think a biocentric view of life, sentience and intelligence will eventually go the way of a geocentric view of the universe. The notion that we are the be-all and end-all at the center of everything belongs in the past.
It wasn't so long after Darwin's Origin of Species, so people were probably thinking hard about evolution and its potentials. Now it's just part of the intellectual furniture and is perhaps taken for granted. It's easy to take as a given that the Earth was once a sphere of molten rock, complete with waves of lava after the Theia collision. Imagining life and sentience emerging on such a world would seem as absurd as imagining future intelligent life on Venus.
I also find the idea absurd that humans are as advanced as it gets on Earth, that it's impossible for life to be any more than incrementally more sapient than us. I suspect that many people simply assume that we'll soon all die out, that we live in The End of Days.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 20th, 2024, 3:30 am
by Lagayascienza
Yes, Darwin enabled us to think about life differently. It will only take the discovery of a single microbe on, say, Europa or Enceladus to demolish the idea that we on Earth are where all the action is. I'm pretty certain that will happen one day in the not-too-distant future. I'm only sorry I probably won't be around to see it. And, once we have a sample of 2, and especially if that 2nd sample is of a type of life very different to what we know on Earth, a serious rethink of what life is will be needed. Maybe carbon-based life isn't all there is.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 20th, 2024, 8:09 am
by Pattern-chaser
Sy Borg wrote: ↑October 19th, 2024, 3:44 pm
Remember, what AI can do today is tiny compared with what it will be able to do in one year's time, let alone ten, a hundred, a thousand or ten thousand years' time.
For the present, AI is wholly and solely under the control of humans. AI cannot change or develop. Human AI programmers, however, can, and probably will, 'develop' the capabilities of AI in all kinds of ways. If AI is to evolve into SkyNet, or something like it, there are many steps to be taken first.
AI cannot approach self-awareness or sentience without human complicity. And for as long as the present situation remains, this will also remain. Without some sort of 'sentience', self-direction is impossible. If humans give AIs the ability of self-control and self-modification, then all bets are off. But there is a long way to go before that becomes possible and practical, in the real world. We can't just 'let them off the leash', because the AIs could do nothing even if we did. We would have to add a lot more than is currently there.
At present, AI is little more than Super-Google. But who knows what will happen? As you say, none of us know what the future holds...
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 20th, 2024, 9:12 am
by Gertie
''Intelligence'' can be defined in a functional way, as the ability to solve problems, something computers can do without being sentient themselves.
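To make that functional definition concrete: a breadth-first search that solves a toy maze is ''intelligent'' in this narrow sense (it solves a problem) while plainly having no inner experience at all. A minimal sketch, with a made-up maze purely for illustration:

Code: Select all

from collections import deque

# A toy maze: S = start, G = goal, # = wall. Purely illustrative.
MAZE = ["S.#",
        ".#.",
        "..G"]

def solve(maze):
    """Breadth-first search: problem-solving with no sentience required."""
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if maze[r][c] == "S")
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if maze[r][c] == "G":
            return path  # shortest route found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

print(solve(MAZE))  # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]

The point is only that ''solves problems'' and ''has experiences'' come apart cleanly.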
We think of ''sentience'' as something humans and some other animal species have, because that's what we encounter and identify as presumably sentient beings (relying on physiological and behavioural similarity to ourselves).
The key thing we're identifying regarding sentience is having phenomenal conscious experience. Nagel pithily puts it as ''there is something it is like'' to be gertie or any other consciously experiencing being.
The question then is - Is it possible for there to be ''something it is like'' to be an AI? And would it be like being a human?
The answer is we don't know. Because we don't know what the necessary and sufficient conditions are for phenomenal experience.
When it comes to considering conscious AI, it might be that -
- Only particular types of functioning substrate (eg biological human bodies) possess the necessary and sufficient conditions, which computers or replicating resin doesn't have. (Hence sentient non-biological AI is impossible).
- Or it might be that any matter interacting in certain ways is all it takes for phenomenal experience to manifest. (The computational theory of mind relies on this assumption).
- Or there might be something else going on we don't, perhaps can't, understand.
As things stand, we don't even know how we could go about knowing which of these is correct.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 20th, 2024, 9:40 am
by Pattern-chaser
Gertie wrote: ↑October 20th, 2024, 9:12 am
''Intelligence'' can be defined in a functional way, as the ability to solve problems, something computers can do without being sentient themselves.
We think of ''sentience'' as something humans and some other animal species have, because that's what we encounter and identify as presumably sentient beings (relying on physiological and behavioural similarity to ourselves).
The key thing we're identifying regarding sentience is having phenomenal conscious experience. Nagel pithily puts it as ''there is something it is like'' to be gertie or any other consciously experiencing being.
The question then is - Is it possible for there to be ''something it is like'' to be an AI? And would it be like being a human?
The answer is we don't know. Because we don't know what the necessary and sufficient conditions are for phenomenal experience.
When it comes to considering conscious AI, it might be that -
- Only particular types of functioning substrate (eg biological human bodies) possess the necessary and sufficient conditions, which computers or replicating resin doesn't have. (Hence sentient non-biological AI is impossible).
- Or it might be that any matter interacting in certain ways is all it takes for phenomenal experience to manifest. (The computational theory of mind relies on this assumption).
- Or there might be something else going on we don't, perhaps can't, understand.
As things stand, we don't even know how we could go about knowing which of these is correct.
A thoughtful post. This subject offers much to think about.
But your first sentence stands apart from the rest.
''Intelligence'' can be defined in a functional way, as the ability to solve problems, something computers can do without being sentient themselves.
It's the definition of "intelligence" that holds back our conversation. We all know what it is — roughly. But, like many other things, when we try to nail it down, we find we can't. Or at least, we find it extremely difficult and demanding. I think this may be why the development of AI has proved to be so challenging. We started trying to create "artificial intelligence" without a precise definition of what it is. And that's the problem.
To design any computer program, of any sort, we first need a requirements specification, a clear and precise description of what it is that we want to create. An aim. Without that, we don't know what to work toward, or whether or not we've got there yet. So yes, a useful and usable definition of intelligence is of paramount importance.
So, what is "intelligence"?
Wikipedia wrote:
Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.
[...]
The page goes on for a while, but probably only touches the surface of what intelligence is. I imagine books have been written on the subject, probably many books. And I also imagine they offer as many perspectives as there are books and authors on the subject.
So where do we go from here?
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 20th, 2024, 10:12 am
by Gertie
Pattern-chaser wrote: ↑October 20th, 2024, 9:40 am
Gertie wrote: ↑October 20th, 2024, 9:12 am
''Intelligence'' can be defined in a functional way, as the ability to solve problems, something computers can do without being sentient themselves.
We think of ''sentience'' as something humans and some other animal species have, because that's what we encounter and identify as presumably sentient beings (relying on physiological and behavioural similarity to ourselves).
The key thing we're identifying regarding sentience is having phenomenal conscious experience. Nagel pithily puts it as ''there is something it is like'' to be gertie or any other consciously experiencing being.
The question then is - Is it possible for there to be ''something it is like'' to be an AI? And would it be like being a human?
The answer is we don't know. Because we don't know what the necessary and sufficient conditions are for phenomenal experience.
When it comes to considering conscious AI, it might be that -
- Only particular types of functioning substrate (eg biological human bodies) possess the necessary and sufficient conditions, which computers or replicating resin doesn't have. (Hence sentient non-biological AI is impossible).
- Or it might be that any matter interacting in certain ways is all it takes for phenomenal experience to manifest. (The computational theory of mind relies on this assumption).
- Or there might be something else going on we don't, perhaps can't, understand.
As things stand, we don't even know how we could go about knowing which of these is correct.
A thoughtful post. This subject offers much to think about.
But your first sentence stands apart from the rest.
''Intelligence'' can be defined in a functional way, as the ability to solve problems, something computers can do without being sentient themselves.
It's the definition of "intelligence" that holds back our conversation. We all know what it is — roughly. But, like many other things, when we try to nail it down, we find we can't. Or at least, we find it extremely difficult and demanding. I think this may be why the development of AI has proved to be so challenging. We started trying to create "artificial intelligence" without a precise definition of what it is. And that's the problem.
To design any computer program, of any sort, we first need a requirements specification, a clear and precise description of what it is that we want to create. An aim. Without that, we don't know what to work toward, or whether or not we've got there yet. So yes, a useful and usable definition of intelligence is of paramount importance.
So, what is "intelligence"?
Wikipedia wrote:
Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.
[...]
The page goes on for a while, but probably only touches the surface of what intelligence is. I imagine books have been written on the subject, probably many books. And I also imagine they offer as many perspectives as there are books and authors on the subject.
So where do we go from here?
To me the issue of ''intelligence'' is a bit of a red herring. We know what computers do, and how they do it: the physical components, and, as you say, the software humans design to manipulate those components to achieve human purposes. Like Searle's Chinese Room, you end up with a closed system where no agency or consciousness is further required.
Whether such a computer could be programmed to autonomously acquire new skills and abilities is a technical question. Whether it could autonomously replicate itself is another technical question. You'd know how likely that is better than me.
But the question of whether a computer could have agency and make autonomous/unprogrammed choices is down to whether it can have sentient qualities like goals, needs, preferences and desires. That's the question we don't know how to answer.
Where we go from here is to try it and see what happens. Bearing in mind that if a computer can achieve agency, goals and perhaps a sense of wellbeing, that has both risks and welfare implications we ought to think through. And such considerations are not best left in the hands of tech corporations and billionaire owners.