#468992
Sy Borg wrote: October 17th, 2024, 4:10 pm At some point, AI will be capable of creating better AI than humans can, and of training it more effectively. This is certain*.

At some point, AI will be capable of self-replication. This is certain*.
Pattern-chaser wrote: October 18th, 2024, 7:45 am Not "certain", I suggest, but merely possible? For example, AIs have not yet been programmed to have the capability of replication. That they might be so programmed in the future makes your speculation possible, but not inevitable?
Sy Borg wrote: October 18th, 2024, 5:03 pm AIs are already programming better than a percentage of programmers.
1. In early 2023, OpenAI’s ChatGPT passed Google’s exam for high-level software developers.

2. Later in 2023, GitHub reported that 46% of code across all programming languages is built using Copilot, the company’s AI-powered developer tool.

3. Finally, DeepMind’s AlphaCode in its debut outperformed human programmers. When pitted against over 5,000 human participants, the AI beat 45% of expert programmers.
https://peterhdiamandis.medium.com/will ... 79a8ac4279

Do you think that AI will stop progressing, even though its progress has so far been exponential?
This is a bit like saying "Look, I can walk, and I can dress myself, so I can plan and (successfully) implement an assault on the Eiger's North Face". Without derailing into software design — the entire field — take it from me that there is a great deal more to software design than coding. The challenges you mention are difficult for many humans, but the ability to overcome these challenges is necessary but not sufficient to become a competent software designer. There's a lot more to it than that.

The article you quote was 'familiar' to software designers, even before AI was invented. Every 5–10 years, someone comes up with a scheme to de-skill "programming" or "coding", so that "anyone can do it". Experienced software designers have never taken much notice, because those who make the claims have no idea of the demands software design places upon the designer, or the skills that are needed.

The skills you mention, which AI can already achieve, represent a useful aid to a human designer. They take over some of the drudgery, as they do in other areas of human life too. This is valuable and useful.

Please note that the author of the article you quote is not a software designer; he's an 'entrepreneur', someone whose profession teaches you to make claims for products you have not yet built, or even planned ("Fake it 'til you make it"), even if you don't have the skills to create such a product. [Which probably also means that you don't understand the skills needed to realise your imagined future products.]

As an example, imagine that AI is asked to come up with a way to stop bullying. That is an area of expertise that you have mastered. Do you think AI can or could achieve that aim? Achieve it in a way that is at least equal to, and maybe better than, what a human expert like you could do?
Last edited by Pattern-chaser on October 19th, 2024, 6:22 am, edited 1 time in total.
Favorite Philosopher: Cratylus Location: England
#469000
The big change in AI is not in software decision-making capabilities; it is in the source materials. In examining the concept of a neural network, the field is wide open, from ceramics to chemical reactions, changing the idea of hardware. But IMO a graph neural network might use what is already there.

Hypothesizing about the intelligence of a mosquito: it is time, IMO, to put aside the use of heat-seeking missiles for something more constructive, like the actual mosquito. Instead of blood-sucking, there could be an interchange of data from the output of the mosquito brain, which would form the edges of the graph neural network, corresponding to a simulated blood-sucking (a data interchange). Obviously, altering the mosquito's purpose (what is the purpose of a mosquito?) will necessitate a model to predict the consequences. So, IMO, programming the mosquito brain may necessitate implanting mosquito brain cells programmed into the graph neural network. It is a duality: the mosquito will suck blood anyway, but will feel the need to access the nodes (everywhere), like accommodating light poles to the neural network.

Since this hypothesis is not a real example (to my knowledge), I will not venture into how the implant would affect the naturally replicated mosquitoes. The actual data collected by the edges might be used in a decision-making graph neural network that, unlike the mosquito, might have a well-defined (corporate-secret) purpose. In its evolution from a simple counter to a DNA sampler, decades or centuries might come and go.
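For concreteness, this is the mechanism a graph neural network actually relies on: each node carries a feature vector, and one round of "message passing" updates it from its neighbours along the edges. The sketch below (plain Python with NumPy; the graph, sizes and weights are all arbitrary stand-ins) illustrates only that mechanism, not the mosquito scheme above.

import numpy as np

rng = np.random.default_rng(0)

# A small undirected graph: the adjacency matrix records which nodes
# are joined by edges (the "edges collecting data" in the post above).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(5, 4))   # one 4-number feature vector per node
W = rng.normal(size=(4, 4))   # learned weights (random stand-ins here)

# Normalise by node degree so well-connected nodes don't dominate.
A_norm = A / A.sum(axis=1, keepdims=True)

# One round of message passing: each node averages its neighbours'
# features, applies the weight matrix, then a nonlinearity.
X_next = np.tanh(A_norm @ X @ W)
print(X_next.shape)   # (5, 4): an updated feature vector per node

Stacking several such rounds and learning W from data is essentially all a graph neural network is; whether the edges are wires, chemical channels or implanted cells is the separate hardware question the post raises.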
#469002
Pattern-chaser wrote: April 4th, 2024, 8:55 am
Yes! "Intelligence" has always been a misnomer, when applied to AI, and the like. Any intelligence they might display is a direct consequence of the intelligence of their design and programming, nothing more. It's the programmers who have the intelligence, not the machine.

AI, in its current state, is super-Google. As you describe.
Yes, that’s exactly right. The Musks, the Altmans and the rest of the tech lords would like you to believe their exaggerated claims about AGI (artificial general intelligence) and the alleged risk of the Singularity, because the hype increases the shareholder value of their companies. It’s all about the profits. That’s why OpenAI (ChatGPT) is close to declaring it has “finally achieved” AGI: doing so would allow it to break its contract with Microsoft.

For all those who have not been mentally hijacked by the AI hype, you might want to subscribe to Gary Marcus’s Substack. He’s a prominent figure in the AI world who likes to keep the tech lords in check. There’s a very recent article from him in Fortune titled How Elon Musk, Sam Altman, and the Silicon Valley elite manipulate the public; you don’t want to miss that one.

Meanwhile, I’m excited about the new entry in my book club, a book penned by two computer scientists, titled AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. The book presentation looks promising:
Author: Arvind Narayanan, Sayash Kapoor
Pages: 357
From two of TIME's 100 Most Influential People in AI, what you need to know about AI—and how to defend yourself against bogus AI claims and products
Confused about AI and worried about what it means for your future and the future of the world? You're not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works and why it often doesn't, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don't work, and probably never will.
While acknowledging the potential of some AI, such as ChatGPT, AI Snake Oil uncovers rampant misleading claims about the capabilities of AI and describes the serious harms AI is already causing in how it's being built,...
Favorite Philosopher: Umberto Eco Location: Panama
#469003
Lagayscienza wrote: October 17th, 2024, 12:29 am
If/when autonomous SRSIMs become a reality, and are out “in the wild” and far enough away from us, we may lose control of them and be unable to foresee their continued “evolution”. Their “intelligence” will be different to ours, and it could develop much more quickly than ours did because they would not be hobbled by the slowness of biological evolution by natural selection and the physical limitations of biologically housed intelligence. The worry is that centuries or millennia from now, they may come back to bite us as much more powerful entities than they were when we first sent them out. Whether we’ll want to call them “life-forms” is purely academic. They'll be doing a lot of things that life does.
As SRSIMs develop, won't they require more and more energy? Where will they obtain massive amounts of energy?
Signature Addition: "Ad hominem attacks will destroy a good forum."
#469004
Pattern-chaser wrote: October 19th, 2024, 6:18 am
Sy Borg wrote: October 17th, 2024, 4:10 pm At some point, AI will be capable of creating better AI than humans can, and of training it more effectively. This is certain*.

At some point, AI will be capable of self-replication. This is certain*.
Pattern-chaser wrote: October 18th, 2024, 7:45 am Not "certain", I suggest, but merely possible? For example, AIs have not yet been programmed to have the capability of replication. That they might be so programmed in the future makes your speculation possible, but not inevitable?
Sy Borg wrote: October 18th, 2024, 5:03 pm AIs are already programming better than a percentage of programmers.
1. In early 2023, OpenAI’s ChatGPT passed Google’s exam for high-level software developers.

2. Later in 2023, GitHub reported that 46% of code across all programming languages is built using Copilot, the company’s AI-powered developer tool.

3. Finally, DeepMind’s AlphaCode in its debut outperformed human programmers. When pitted against over 5,000 human participants, the AI beat 45% of expert programmers.
https://peterhdiamandis.medium.com/will ... 79a8ac4279

Do you think that AI will stop progressing, even though its progress has so far been exponential?
This is a bit like saying "Look, I can walk, and I can dress myself, so I can plan and (successfully) implement an assault on the Eiger's North Face". Without derailing into software design — the entire field — take it from me that there is a great deal more to software design than coding. The challenges you mention are difficult for many humans, but the ability to overcome these challenges is necessary but not sufficient to become a competent software designer. There's a lot more to it than that.

The article you quote was 'familiar' to software designers, even before AI was invented. Every 5–10 years, someone comes up with a scheme to de-skill "programming" or "coding", so that "anyone can do it". Experienced software designers have never taken much notice, because those who make the claims have no idea of the demands software design places upon the designer, or the skills that are needed.

The skills you mention, which AI can already achieve, represent a useful aid to a human designer. They take over some of the drudgery, as they do in other areas of human life too. This is valuable and useful.

Please note that the author of the article you quote is not a software designer; he's an 'entrepreneur', someone whose profession teaches you to make claims for products you have not yet built, or even planned ("Fake it 'til you make it"), even if you don't have the skills to create such a product. [Which probably also means that you don't understand the skills needed to realise your imagined future products.]

As an example, imagine that AI is asked to come up with a way to stop bullying. That is an area of expertise that you have mastered. Do you think AI can or could achieve that aim? Achieve it in a way that is at least equal to, and maybe better than, what a human expert like you could do?
I never mastered the prevention of bullying. I mastered the understanding of how bullying fell through the cracks of the legal system.

My job was replaced by AI about twelve years ago. In that time, I did not receive a single call from work, pleading for help.

Remember, what AI can do today is tiny compared with what it will be able to do in one year's time, let alone ten, a hundred, a thousand or ten thousand years' time.
#469005
Mo_reese wrote: October 19th, 2024, 3:25 pm
Lagayscienza wrote: October 17th, 2024, 12:29 am
If/when autonomous SRSIMs become a reality, and are out “in the wild” and far enough away from us, we may lose control of them and be unable to foresee their continued “evolution”. Their “intelligence” will be different to ours, and it could develop much more quickly than ours did because they would not be hobbled by the slowness of biological evolution by natural selection and the physical limitations of biologically housed intelligence. The worry is that centuries or millennia from now, they may come back to bite us as much more powerful entities than they were when we first sent them out. Whether we’ll want to call them “life-forms” is purely academic. They'll be doing a lot of things that life does.
As SRSIMs develop, won't they require more and more energy? Where will they obtain massive amounts of energy?
There won't be any anti-nuclear energy activists in space, so that will help. Energy supplies will depend on what's available on other worlds.

I don't worry about them coming back to Earth. By that time, humans may already be extinct. I am not anticipating a quick jump into sentience, despite the misconceptions another poster has spread about this.
#469011
Energy won't be a problem for SRSIMs. Like Earth, the trillions of planets in our galaxy have suns for solar energy. On Mars, the rovers use solar energy to get around. And there are other sources such as wind, tidal, geothermal, chemical and nuclear energy. They could construct Dyson spheres around low-mass stars that would provide enormous amounts of energy for billions of years. The journey out into the galaxy could take millions of years, and chances are that we will be extinct before they have time to return to Earth to cause us any problems. And if we are not extinct, scientific progress on Earth may mean we will have the means to protect ourselves from them if they were a threat.
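As a rough sanity check on the Dyson idea, a few lines of arithmetic (a sketch only: the luminosity and consumption figures are rough round numbers, and the 1% capture fraction is an arbitrary assumption):

L_SUN = 3.8e26             # the Sun's power output, watts
red_dwarf = 1e-3 * L_SUN   # a dim red dwarf, about 0.1% of the Sun's output
capture_fraction = 0.01    # suppose the swarm intercepts just 1% of it

harvested = red_dwarf * capture_fraction   # watts available to the SRSIMs
humanity_now = 2e13                        # rough current human power use, watts

print(f"harvested: {harvested:.1e} W")                              # ~3.8e21 W
print(f"times current human use: {harvested / humanity_now:.0e}")   # ~2e8

Even on those stingy assumptions, the swarm collects hundreds of millions of times humanity's present consumption, and low-mass stars burn for far longer than the Sun will.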
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469013
Beautiful quote from Samuel Butler in his 1872 novel, Erewhon. Lagaya, you might enjoy this:

‘There was a time, when the Earth was to all appearance utterly destitute both of animal and vegetable life, and when according to the opinion of our best philosophers it was simply a hot round ball with a crust gradually cooling.

Now if a human being had existed while the Earth was in this state and had been allowed to see it as though it were some other world with which he had no concern, and if at the same time he were entirely ignorant of all physical science, would he not have pronounced it impossible that creatures possessed of anything like consciousness should be evolved from the seeming cinder which he was beholding? Would he not have denied that it contained any potentiality of consciousness? Yet in the course of time consciousness came. Is it not possible then that there may be even yet new channels dug out for consciousness, though we can detect no signs of them at present?

‘Again. Consciousness, in anything like the present acceptation of the term, having been once a new thing – a thing, as far as we can see, subsequent even to an individual centre of action and to a reproductive system (which we see existing in plants without apparent consciousness) – why may not there arise some new phase of mind which shall be as different from all present known phases, as the mind of animals is from that of vegetables?

‘It would be absurd to attempt to define such a mental state (or whatever it may be called), inasmuch as it must be something so foreign to man that his experience can give him no help towards conceiving its nature; but surely when we reflect upon the manifold phases of life and consciousness which have been evolved already, it would be rash to say that no others can be developed, and that animal life is the end of all things. There was a time when fire was the end of all things: another when rocks and water were so.’
#469015
Sy Borg wrote: October 20th, 2024, 12:30 am Beautiful quote from Samuel Butler in his 1872 novel, Erewhon. Lagaya, you might enjoy this:

‘There was a time, when the Earth was to all appearance utterly destitute both of animal and vegetable life, and when according to the opinion of our best philosophers it was simply a hot round ball with a crust gradually cooling.

Now if a human being had existed while the Earth was in this state and had been allowed to see it as though it were some other world with which he had no concern, and if at the same time he were entirely ignorant of all physical science, would he not have pronounced it impossible that creatures possessed of anything like consciousness should be evolved from the seeming cinder which he was beholding? Would he not have denied that it contained any potentiality of consciousness? Yet in the course of time consciousness came. Is it not possible then that there may be even yet new channels dug out for consciousness, though we can detect no signs of them at present?

‘Again. Consciousness, in anything like the present acceptation of the term, having been once a new thing – a thing, as far as we can see, subsequent even to an individual centre of action and to a reproductive system (which we see existing in plants without apparent consciousness) – why may not there arise some new phase of mind which shall be as different from all present known phases, as the mind of animals is from that of vegetables?

‘It would be absurd to attempt to define such a mental state (or whatever it may be called), inasmuch as it must be something so foreign to man that his experience can give him no help towards conceiving its nature; but surely when we reflect upon the manifold phases of life and consciousness which have been evolved already, it would be rash to say that no others can be developed, and that animal life is the end of all things. There was a time when fire was the end of all things: another when rocks and water were so.’
Yes, I hadn't seen that before. Thanks for posting it. Amazing that it was written so long ago, when much of what we now know was still unknown. At present, we only have a sample size of 1 but that does not mean Earth is the only speck in the universe on which sentience and intelligence have developed. And it does not mean there are not different ways in which different types of intelligence might evolve - types which we currently have not even imagined. And, for all we know, ETSRSIMs might already be out there developing among the trillions of planets in our galaxy alone.

We once thought that Earth was the center of the universe and that the planets were just other stars that circled Earth. Then we realized that they were planets that circled the sun just like Earth. Then we realized that other stars were actually suns just like ours. Then we started discovering planets around other stars - many of them in zones where something like life as we know it could have evolved. And, possibly, where life as we cannot even imagine it might have developed. I think a biocentric view of life, sentience and intelligence will eventually go the way of a geocentric view of the universe. The notion that we are the be-all and end-all at the center of everything belongs in the past.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469017
Lagayscienza wrote: October 20th, 2024, 1:41 am
Sy Borg wrote: October 20th, 2024, 12:30 am Beautiful quote from Samuel Butler in his 1872 novel, Erewhon. Lagaya, you might enjoy this:

‘There was a time, when the Earth was to all appearance utterly destitute both of animal and vegetable life, and when according to the opinion of our best philosophers it was simply a hot round ball with a crust gradually cooling.

Now if a human being had existed while the Earth was in this state and had been allowed to see it as though it were some other world with which he had no concern, and if at the same time he were entirely ignorant of all physical science, would he not have pronounced it impossible that creatures possessed of anything like consciousness should be evolved from the seeming cinder which he was beholding? Would he not have denied that it contained any potentiality of consciousness? Yet in the course of time consciousness came. Is it not possible then that there may be even yet new channels dug out for consciousness, though we can detect no signs of them at present?

‘Again. Consciousness, in anything like the present acceptation of the term, having been once a new thing – a thing, as far as we can see, subsequent even to an individual centre of action and to a reproductive system (which we see existing in plants without apparent consciousness) – why may not there arise some new phase of mind which shall be as different from all present known phases, as the mind of animals is from that of vegetables?

‘It would be absurd to attempt to define such a mental state (or whatever it may be called), inasmuch as it must be something so foreign to man that his experience can give him no help towards conceiving its nature; but surely when we reflect upon the manifold phases of life and consciousness which have been evolved already, it would be rash to say that no others can be developed, and that animal life is the end of all things. There was a time when fire was the end of all things: another when rocks and water were so.’
Yes, I hadn't seen that before. Thanks for posting it. Amazing that it was written so long ago, when much of what we now know was still unknown. At present, we only have a sample size of 1 but that does not mean Earth is the only speck in the universe on which sentience and intelligence have developed. And it does not mean there are not different ways in which different types of intelligence might evolve - types which we currently have not even imagined. And, for all we know, ETSRSIMs might already be out there developing among the trillions of planets in our galaxy alone.

We once thought that Earth was the center of the universe and that the planets were just other stars that circled Earth. Then we realized that they were planets that circled the sun just like Earth. Then we realized that other stars were actually suns just like ours. Then we started discovering planets around other stars - many of them in zones where something like life as we know it could have evolved. And, possibly, where life as we cannot even imagine it might have developed. I think a biocentric view of life, sentience and intelligence will eventually go the way of a geocentric view of the universe. The notion that we are the be-all and end-all at the center of everything belongs in the past.
It wasn't so long after Darwin's Origin of Species, so people were probably thinking hard about evolution and its potentials. Now it's just part of the intellectual furniture and is perhaps taken for granted. It's easy to take as a given that the Earth was once a sphere of molten rock, complete with waves of lava after the Theia collision. Imagining life and sentience emerging on such a world would seem as absurd as imagining future intelligent life on Venus.

I also find the idea absurd that humans are as advanced as it gets on Earth, that it's impossible for life to be any more than incrementally more sapient than us. I suspect that many people simply assume that we'll soon all die out, that we live in The End of Days.
#469020
Yes, Darwin enabled us to think about life differently. It will only take the discovery of a single microbe on, say, Europa or Enceladus to demolish the idea that we on Earth are where all the action is. I'm pretty certain that will happen one day in the not-too-distant future. I'm only sorry I probably won't be around to see it. And, once we have a sample of 2, and especially if that 2nd sample is of a type of life very different to what we know on Earth, a serious rethink of what life is will be needed. Maybe carbon-based life isn't all there is.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469023
Sy Borg wrote: October 19th, 2024, 3:44 pm Remember, what AI can do today is tiny compared with what it will be able to do in one year's time, let alone ten, a hundred, a thousand or ten thousand years' time.
For the present, AI is wholly and solely under the control of humans. AI cannot change or develop. Human AI programmers, however, can, and probably will, 'develop' the capabilities of AI in all kinds of ways. If AI is to evolve into SkyNet, or something like it, there are many steps to be taken first.

AI cannot approach self-awareness or sentience without human complicity. And for as long as the present situation remains, this will also remain. Without some sort of 'sentience', self-direction is impossible. If humans give AIs the abilities of self-control and self-modification, then all bets are off. But there is a long way to go before that becomes possible and practical in the real world. We can't just 'let them off the leash', because the AIs could do nothing even if we did. We would have to add a lot more than is currently there.

At present, AI is little more than Super-Google. But who knows what will happen? As you say, none of us know what the future holds...
Favorite Philosopher: Cratylus Location: England
#469028
''Intelligence'' can be defined in a functional way, as the ability to solve problems: something computers can do without being sentient themselves.

We  think of ''sentience'' as something humans and some other animal species have, because that's what we encounter and identify as presumably sentient beings (relying on physiological and behavioural similarity to ourselves). 

The key thing we're identifying regarding sentience is having phenomenal conscious experience.  Nagel pithily puts it as ''there is something it is like'' to be gertie or any other consciously experiencing being. 

The question then is - Is it possible for there to be ''something it is like'' to be an AI? And would it be like being a human?


The answer is we don't know.  Because we don't know what the necessary and sufficient conditions are for phenomenal experience. 


When it comes to considering conscious AI, it might be that -

- Only particular types of functioning substrate (eg biological human bodies) possess the necessary and sufficient conditions, which computers or replicating resin don't have. (Hence conscious non-biological AI is impossible.)

- Or it might be that any matter interacting in certain ways is all it takes for phenomenal experience to manifest. (The computational theory of mind relies on this assumption). 

- Or there might be something else going on we don't, perhaps can't, understand.

As things stand, we don't even know how we could go about knowing which of these is correct. 
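To make the functional definition in my first sentence concrete: the little program below (plain Python; the maze is made up) solves a problem, and so counts as ''intelligent'' in the functional sense, yet nothing in it is plausibly sentient.

from collections import deque

# A tiny maze: S = start, G = goal, # = wall. Entirely arbitrary.
MAZE = ["S.#",
        ".##",
        "..G"]

def solve(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if maze[r][c] == "S")
    queue = deque([(start, [start])])   # frontier: (cell, path so far)
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if maze[r][c] == "G":
            return path                 # breadth-first search finds a shortest route
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None                         # no route exists

print(solve(MAZE))   # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]

It reliably solves its problem, but there is plainly nothing it is like to be this program; that gap is exactly the one between functional intelligence and sentience.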
#469030
Gertie wrote: October 20th, 2024, 9:12 am ''Intelligence'' can be defined in a functional way, as the ability to solve problems: something computers can do without being sentient themselves.

We  think of ''sentience'' as something humans and some other animal species have, because that's what we encounter and identify as presumably sentient beings (relying on physiological and behavioural similarity to ourselves). 

The key thing we're identifying regarding sentience is having phenomenal conscious experience.  Nagel pithily puts it as ''there is something it is like'' to be gertie or any other consciously experiencing being. 

The question then is - Is it possible for there to be ''something it is like'' to be an AI? And would it be like being a human?


The answer is we don't know.  Because we don't know what the necessary and sufficient conditions are for phenomenal experience. 


When it comes to considering conscious AI, it might be that -

- Only particular types of functioning substrate (eg biological human bodies) possess the necessary and sufficient conditions, which computers or replicating resin don't have. (Hence conscious non-biological AI is impossible.)

- Or it might be that any matter interacting in certain ways is all it takes for phenomenal experience to manifest. (The computational theory of mind relies on this assumption). 

- Or there might be something else going on we don't, perhaps can't, understand.

As things stand, we don't even know how we could go about knowing which of these is correct. 
A thoughtful post. This subject offers much to think about.

But your first sentence stands apart from the rest.

''Intelligence'' can be defined in a functional way, as the ability to solve problems: something computers can do without being sentient themselves.

It's the definition of "intelligence" that holds back our conversation. We all know what it is — roughly. But, like many other things, when we try to nail it down, we find we can't. Or at least, we find it extremely difficult and demanding. I think this may be why the development of AI has proved to be so challenging. We started trying to create "artificial intelligence" without a precise definition of what it is. And that's the problem.

To design any computer program, of any sort, we first need a requirements specification, a clear and precise description of what it is that we want to create. An aim. Without that, we don't know what to work toward, or whether or not we've got there yet. So yes, a useful and usable definition of intelligence is of paramount importance.

So, what is "intelligence"?
Wikipedia wrote: Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

[...]
The page goes on for a while, but probably only touches the surface of what intelligence is. I imagine books have been written on the subject, probably many books. And I also imagine they offer as many perspectives as there are books and authors on the subject.

So where do we go from here?
Favorite Philosopher: Cratylus Location: England
#469034
Pattern-chaser wrote: October 20th, 2024, 9:40 am
Gertie wrote: October 20th, 2024, 9:12 am ''Intelligence'' can be defined in a functional way, as the ability to solve problems: something computers can do without being sentient themselves.

We  think of ''sentience'' as something humans and some other animal species have, because that's what we encounter and identify as presumably sentient beings (relying on physiological and behavioural similarity to ourselves). 

The key thing we're identifying regarding sentience is having phenomenal conscious experience.  Nagel pithily puts it as ''there is something it is like'' to be gertie or any other consciously experiencing being. 

The question then is - Is it possible for there to be ''something it is like'' to be an AI? And would it be like being a human?


The answer is we don't know.  Because we don't know what the necessary and sufficient conditions are for phenomenal experience. 


When it comes to considering conscious AI, it might be that -

- Only particular types of functioning substrate (eg biological human bodies) possess the necessary and sufficient conditions, which computers or replicating resin don't have. (Hence conscious non-biological AI is impossible.)

- Or it might be that any matter interacting in certain ways is all it takes for phenomenal experience to manifest. (The computational theory of mind relies on this assumption). 

- Or there might be something else going on we don't, perhaps can't, understand.

As things stand, we don't even know how we could go about knowing which of these is correct. 
A thoughtful post. This subject offers much to think about.

But your first sentence stands apart from the rest.

''Intelligence'' can be defined in a functional way, as the ability to solve problems: something computers can do without being sentient themselves.

It's the definition of "intelligence" that holds back our conversation. We all know what it is — roughly. But, like many other things, when we try to nail it down, we find we can't. Or at least, we find it extremely difficult and demanding. I think this may be why the development of AI has proved to be so challenging. We started trying to create "artificial intelligence" without a precise definition of what it is. And that's the problem.

To design any computer program, of any sort, we first need a requirements specification, a clear and precise description of what it is that we want to create. An aim. Without that, we don't know what to work toward, or whether or not we've got there yet. So yes, a useful and usable definition of intelligence is of paramount importance.

So, what is "intelligence"?
Wikipedia wrote: Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

[...]
The page goes on for a while, but probably only touches the surface of what intelligence is. I imagine books have been written on the subject, probably many books. And I also imagine they offer as many perspectives as there are books and authors on the subject.

So where do we go from here?
To me the issue of ''intelligence'' is a bit of a red herring. We know what computers do and how they do it: the physical components and, as you say, the software humans design to manipulate those components to achieve human purposes. Like Searle's Chinese Room, you end up with a closed system where no agency or consciousness is further required.

Whether such a computer could be programmed to autonomously acquire new skills and abilities is a technical question. Whether it could autonomously replicate itself is another. You'd know how likely that is better than me.

But the question of whether a computer could have agency and make autonomous/unprogrammed choices is down to whether it can have sentient qualities like goals, needs, preferences and desires.  That's the question we don't know how to answer.

Where we go from here is to try it and see what happens. Bearing in mind that if a computer can achieve agency, goals and perhaps a sense of wellbeing, that has both risks and welfare implications we ought to think through. And such considerations are not best left in the hands of tech corporations and billionaire owners.