Page 2 of 9
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 21st, 2022, 9:09 am
by Pattern-chaser
GrayArea wrote: ↑December 18th, 2022, 8:32 am
Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point. But will it be worth it? Some of us seem to believe that future sentient A.Is will "help humanity progress" and "help make our lives better", but why is it that we expect A.Is to be more altruistic than human beings?
Leontiskos wrote: ↑December 19th, 2022, 5:24 pm
Artificial intelligence is not true intelligence. It is a deterministic construct bound by its program. Or better, it is a program with a robotic appendage. One must rather ask about the selfishness or altruism of the coders, for to ask about the selfishness or altruism of an inert program doesn't make any sense.
Moreno wrote: ↑December 20th, 2022, 7:29 am
But they don't just program the computer's behavior. They are allowing for learning, which means that new behaviors and heuristics can be arrived at by the AIs. This may not entail that it is conscious or self-aware, but as far as its behavior and the potential dangers AI presents are concerned, this does not matter. Just as it could adjust the electric grid, say, for our benefit, making it more efficient, it could 'decide' to have different goals, even as a non-conscious learning thingie. They can develop their own goals, given that AI creators are modeling AIs on neural networks that are extremely flexible, rather than on the now-primitive 'if X, then do Y' type of programming.
The discussion you are having is an important aspect of this topic, to be sure. But I think we could (should?) also consider the nature of non-artificial "intelligence". Ours is based on a biological platform, not a silicon one, and its evolution was quite different from a programming project, but perhaps the results are similar, or even the same? Would it be acceptable or useful, do we think, to consider our own 'programming' — i.e. intelligence — in this light?
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 21st, 2022, 9:20 am
by Stoppelmann
Stoppelmann wrote: ↑December 20th, 2022, 6:36 am
Bernardo Kastrup: We are not at liberty to think of consciousness as independent of the medium, and by consciousness here I mean a dissociated private conscious inner life of the type you and I have. We are not at liberty to separate that from its medium, because nature is telling us it seems to happen only in a certain medium, namely biology: warm, moist, carbon-based organisms that metabolize.
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 21st, 2022, 11:42 am
by Leontiskos
Pattern-chaser wrote: ↑December 21st, 2022, 9:09 am The discussion you are having is an important aspect of this topic, to be sure. But I think we could (should?) also consider the nature of non-artificial "intelligence". Ours is based on a biological platform, not a silicon one, and its evolution was quite different from a programming project, but perhaps the results are similar, or even the same? Would it be acceptable or useful, do we think, to consider our own 'programming' — i.e. intelligence — in this light?
You are of course right that in order to understand what is meant by artificial intelligence one must understand what non-artificial intelligence is. "Artificial intelligence" is not an inapt term, for AI is human artifice mimicking intelligence. The difficulty is that the entire paradigm of "artificial intelligence" is based on materialism, and this tends to collapse the distinction between artificial and non-artificial intelligence. (This is not odd, for our entire society is swamped in materialism.)
Someone who believes that human intelligence is nothing more than the product of electrical (or electro-chemical) impulses will naturally conclude that computers and human intelligence are qualitatively similar, for computers are nothing more than the product of electrical impulses. Something like the Baader-Meinhof effect ensures that individuals and societies which are immersed in computational realities will begin to perceive all of reality along computational lines, including human beings. The Bible testifies to the same truth when it tells us that the sculptors of idols will come to be like the idols they have made (Psalm 115:4-8). It is not that AI has become intelligent, but rather that the human being (and the notion of human intelligence) has been reduced to unintelligence, i.e. mere computational juggling.* Thus the fundamental error of those who believe AI could be sentient or conscious is an anthropological error: a failure to understand human intelligence.
* We actually see a very similar phenomenon with respect to burgeoning vegetarianism, where those who wish to raise animals up to the level of human dignity inadvertently abolish human dignity, reducing humans to the level of brute beasts. Thus reduced, humans are bound only by the "law of the jungle."
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 21st, 2022, 5:22 pm
by Gertie
Stoppelmann wrote: ↑December 21st, 2022, 9:20 am
Stoppelmann wrote: ↑December 20th, 2022, 6:36 am
Bernardo Kastrup: We are not at liberty to think of consciousness as independent of the medium, and by consciousness here I mean a dissociated private conscious inner life of the type you and I have. We are not at liberty to separate that from its medium, because nature is telling us it seems to happen only in a certain medium, namely biology: warm, moist, carbon-based organisms that metabolize.
I read the whole quote you posted, and Kastrup is right in that we don't know the necessary and sufficient conditions for phenomenal experience. Which incidentally also means we don't know how to test whether AI or anything else is conscious, because consciousness is private (other people, daffodils, rocks, particles, etc.). This is the 'mind-body problem' (https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem), and we don't even know how to go about finding out.
We have discovered neural correlates for brained organisms, so we assume that organisms with brains like ours have consciousness, and we can observe behaviour and ask whether it looks like what we think conscious behaviour looks like. As Kastrup says, it's inference from analogy.
As we don't know the necessary and sufficient conditions for consciousness, we don't know the role of the substrate, whether brains supply something necessary which computers don't. But we do know brain processes are incredibly complex, and we can functionally describe them as processing information 'sensed' in the environment. So the thinking goes that if we can build an incredibly complex information processor as similar as possible to a brain, that might capture the necessary and sufficient conditions for consciousness. But nobody knows if it will, because we don't know if an organic substrate is necessary. And we won't know how to reliably test an AI for private consciousness either; some computers have already passed the Turing test. And as I said in my post, we don't know if AI consciousness would be recognisably like our own.
It's a step in the dark, a matter of building it and seeing what happens, then trying to understand what we've built.
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 22nd, 2022, 12:59 am
by Stoppelmann
Gertie wrote: ↑December 21st, 2022, 5:22 pm
We have discovered neural correlates for brained organisms, so we assume that organisms with brains like ours have consciousness, and we can observe behaviour and ask whether it looks like what we think conscious behaviour looks like. As Kastrup says, it's inference from analogy.
I think that the big problem with conceiving AI as able to be conscious has to do with a fairly modern idea that the human brain is like a computer. Iain McGilchrist spoke about this in his last book:
The ‘hard problem’ gives rise in some minds to the reconceiving of apparently human subjects as zombies, a popular topic of current philosophical debate; in others to doubting the difference between people and machines, a widespread and even automatic assumption of modern neuroscience and cognitivist philosophy. This goes beyond playing with ideas. That we are effectively no different from zombies or machines is to some a revealing insight: similar conclusions are common in, indeed characteristic of, schizophrenia. […]
Most people who ever lived, and most people alive now around the world, would correctly consider these assessments of the human condition to be a sign, not of wise insight, but of madness. In the world of philosophy, they first showed up in the mind of Descartes, who found he had no means of disproving that the people he could see from his window were automata; and they have proved hard to dislodge from Western thinking ever since.
Giovanni Stanghellini wrote a book about how the schizophrenic mind becomes possessed by such thoughts. It is easy to see: RD Laing reports a schizoid patient who saw his wife as a mechanism:
She was an ‘it’ because everything she did was a predictable, determined response. He would, for instance, tell her (it) an ordinary funny joke and when she (it) laughed this indicated her (its) entirely ‘conditioned’, robot-like nature …
McGilchrist, Iain. The Matter With Things: Our Brains, Our Delusions and the Unmaking of the World (pp. 1710-1711). Perspectiva Press. Kindle Edition.
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 22nd, 2022, 6:24 pm
by Gertie
Stoppelmann wrote: ↑December 22nd, 2022, 12:59 am
Gertie wrote: ↑December 21st, 2022, 5:22 pm
We have discovered neural correlates for brained organisms, so we assume that organisms with brains like ours have consciousness, and we can observe behaviour and ask whether it looks like what we think conscious behaviour looks like. As Kastrup says, it's inference from analogy.
I think that the big problem with conceiving AI as able to be conscious has to do with a fairly modern idea that the human brain is like a computer. Iain McGilchrist spoke about this in his last book:
The ‘hard problem’ gives rise in some minds to the reconceiving of apparently human subjects as zombies, a popular topic of current philosophical debate; in others to doubting the difference between people and machines, a widespread and even automatic assumption of modern neuroscience and cognitivist philosophy. This goes beyond playing with ideas. That we are effectively no different from zombies or machines is to some a revealing insight: similar conclusions are common in, indeed characteristic of, schizophrenia. […]
Most people who ever lived, and most people alive now around the world, would correctly consider these assessments of the human condition to be a sign, not of wise insight, but of madness. In the world of philosophy, they first showed up in the mind of Descartes, who found he had no means of disproving that the people he could see from his window were automata; and they have proved hard to dislodge from Western thinking ever since.
Giovanni Stanghellini wrote a book about how the schizophrenic mind becomes possessed by such thoughts. It is easy to see: RD Laing reports a schizoid patient who saw his wife as a mechanism:
She was an ‘it’ because everything she did was a predictable, determined response. He would, for instance, tell her (it) an ordinary funny joke and when she (it) laughed this indicated her (its) entirely ‘conditioned’, robot-like nature …
McGilchrist, Iain. The Matter With Things: Our Brains, Our Delusions and the Unmaking of the World (pp. 1710-1711). Perspectiva Press. Kindle Edition.
The problem really does boil down to us not understanding the mind-body relationship. If we only knew enough to specify the necessary and sufficient conditions for experience, we'd know if conscious AI is possible in principle (e.g. if the type of substrate matters), and what elements would need to be built into it.
The McGilchrist quote takes cheap shots at some significant contributions to evaluating that problem. I believe he's some sort of idealist or panpsychist himself; I expect many people would consider it mad to believe rocks, daffodils, toasters and particles are made out of consciousness, without going into the thinking behind it.
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 22nd, 2022, 7:32 pm
by Sculptor1
AI will be neither of these things, and it would be an abuse of language to so describe it.
An AI will operate within the parameters of its programming. It will never act in the interests of itself or of humans. It will do what the code instructs it to do with no purpose.
Until you understand this simple fact you will never fully understand what an AI is or how it operates.
Whilst such actions might appear altruistic or selfish, and whilst that might be to the advantage or disadvantage of humans, none of these acts will be performed with any intent in "mind", since an AI has no mind.
If you think in terms of altruism or selfishness then you are just anthropomorphizing. Primitive people have always done this about the weather, the great mountain, the cruel sea, and this is the basis of most religions. But just like religion, it is a delusion.
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 22nd, 2022, 7:33 pm
by Sculptor1
GrayArea wrote: ↑December 18th, 2022, 8:39 pm
Count Lucanor wrote: ↑December 18th, 2022, 7:59 pm
GrayArea wrote: ↑December 18th, 2022, 8:32 am
Hi all,
Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point.
That's a bold statement. There's no sign the most advanced machines have ever felt anything. They calculate with complex algorithms, but that's not the same as experiencing something.
You are right in that there has been no sentient A.I so far that feels anything. I just think that this has less to do with where we are right now and more to do with how far we are willing to go.
AI will be neither of these things, and it would be an abuse of language to so describe it.
An AI will operate within the parameters of its programming. It will never act in the interests of itself or of humans. It will do what the code instructs it to do with no purpose.
Until you understand this simple fact you will never fully understand what an AI is or how it operates.
Whilst such actions might appear altruistic or selfish, and whilst that might be to the advantage or disadvantage of humans, none of these acts will be performed with any intent in "mind", since an AI has no mind.
If you think in terms of altruism or selfishness then you are just anthropomorphizing. Primitive people have always done this about the weather, the great mountain, the cruel sea, and this is the basis of most religions. But just like religion, it is a delusion.
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 23rd, 2022, 4:35 am
by Stoppelmann
Gertie wrote: ↑December 22nd, 2022, 6:24 pm
The McGilchrist quote takes cheap shots at some significant contributions to evaluating that problem. I believe he's some sort of idealist or panpsychist himself; I expect many people would consider it mad to believe rocks, daffodils, toasters and particles are made out of consciousness, without going into the thinking behind it.
I think that after 1,710 pages (of 2,996) you can assume that he has considered more than this quote, which was a pointer to the dangers of misconception that have already shown themselves in society, rather than a set of cheap shots.
Iain McGilchrist is a psychiatrist, writer, and former Oxford literary scholar, but not an idealist or panpsychist. You may mean Bernardo Kastrup, who is an “analytical idealist” in his own words.
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 23rd, 2022, 4:42 am
by Stoppelmann
Sculptor1 wrote: ↑December 22nd, 2022, 7:33 pm
If you think in terms of altruism or selfishness then you are just anthropomorphizing. Primitive people have always done this about the weather, the great mountain, the cruel sea, and this is the basis of most religions. But just like religion, it is a delusion.
Considering that Richard Dawkins used the term “selfish” to describe genes, I don’t think that it is only primitive people who have used anthropomorphised descriptive language to get a point across. It is commonly used in classical poetry as well, and makes the undulating sea, the aggressive wind, the deadly mountain and numerous other experiences perceptible for the reader.
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 23rd, 2022, 11:10 am
by Pattern-chaser
Sculptor1 wrote: ↑December 22nd, 2022, 7:33 pm
If you think in terms of altruism or selfishness then you are just anthropomorphizing. Primitive people have always done this about the weather, the great mountain, the cruel sea, and this is the basis of most religions. But just like religion, it is a delusion.
Stoppelmann wrote: ↑December 23rd, 2022, 4:42 am
Considering that Richard Dawkins used the term “selfish” to describe genes, I don’t think that it is only primitive people who have used anthropomorphised descriptive language to get a point across. It is commonly used in classical poetry as well, and makes the undulating sea, the aggressive wind, the deadly mountain and numerous other experiences perceptible for the reader.
It appears this anthropomorphism also blends into metaphor, in this example if nowhere else? It looks like anthropomorphism might be being used to create metaphors, or something like that? And, as we all know, metaphor is everywhere in our language, round every corner, under every stone.
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 23rd, 2022, 12:16 pm
by Stoppelmann
Pattern-chaser wrote: ↑December 23rd, 2022, 11:10 am
Sculptor1 wrote: ↑December 22nd, 2022, 7:33 pm
If you think in terms of altruism or selfishness then you are just anthropomorphizing. Primitive people have always done this about the weather, the great mountain, the cruel sea, and this is the basis of most religions. But just like religion, it is a delusion.
Stoppelmann wrote: ↑December 23rd, 2022, 4:42 am
Considering that Richard Dawkins used the term “selfish” to describe genes, I don’t think that it is only primitive people who have used anthropomorphised descriptive language to get a point across. It is commonly used in classical poetry as well, and makes the undulating sea, the aggressive wind, the deadly mountain and numerous other experiences perceptible for the reader.
It appears this anthropomorphism also blends into metaphor, in this example if nowhere else? It looks like anthropomorphism might be being used to create metaphors, or something like that? And, as we all know, metaphor is everywhere in our language, round every corner, under every stone.
Quite ...
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 23rd, 2022, 1:31 pm
by Sculptor1
Stoppelmann wrote: ↑December 23rd, 2022, 4:42 am
Sculptor1 wrote: ↑December 22nd, 2022, 7:33 pm
If you think in terms of altruism or selfishness then you are just anthropomorphizing. Primitive people have always done this about the weather, the great mountain, the cruel sea, and this is the basis of most religions. But just like religion, it is a delusion.
Considering that Richard Dawkins used the term “selfish” to describe genes, I don’t think that it is only primitive people who have used anthropomorphised descriptive language to get a point across. It is commonly used in classical poetry as well, and makes the undulating sea, the aggressive wind, the deadly mountain and numerous other experiences perceptible for the reader.
Richard Dawkins took pains to demonstrate that the selfishness he was talking about was NOT intentional.
It's what is called a metaphor.
Had you read Dawkins you might have realised your error here.
You are just avoiding the issue. We are not talking about classical poetry but, supposedly, about computer science.
Please keep on thread.
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 23rd, 2022, 1:35 pm
by Sculptor1
Pattern-chaser wrote: ↑December 23rd, 2022, 11:10 am
Sculptor1 wrote: ↑December 22nd, 2022, 7:33 pm
If you think in terms of altruism or selfishness then you are just anthropomorphizing. Primitieve people have always done this about the weather, the great mountain, the cruel sea, and is the basis of most religions. But just like religions is it a delusion.
Stoppelmann wrote: ↑December 23rd, 2022, 4:42 am
Considering that Richard Dawkins used the term “selfish” to describe genes, I don’t think that it is only primitive people who used anthropomorphised descriptive language to get a point over. It is commonly used in classical poetry as well, and makes the undulating sea, the aggressive wind, the deadly mountain and numerous other experiences perceptible for the reader.
It appears this anthropomorphism also blends into metaphor, in this example if nowhere else? It looks like anthropomorphism might be being used to create metaphors, or something like that? And, as we all know, metaphor is everywhere in our language, round every corner, under every stone.
Yes, true everywhere. It is a great human failing and a legacy of primitive thinking.
But we are talking about a computer program not a conscious being.
There is not the slightest hint of consciousness in computer hardware, and there is no prospect, given the current designs, of including it.
Outside sci-fi and "classical poetry", AI cannot be described in these terms, any more than a falling branch can be blamed for denting your car. Yet primitive humans will be seen to curse the branch, or the weather.
Re: Will Sentient A.I be more altruistic than selfish?
Posted: December 23rd, 2022, 3:06 pm
by Stoppelmann
Sculptor1 wrote: ↑December 23rd, 2022, 1:31 pm
Richard Dawkins took pains to demonstrate that the selfishness he was talking about was NOT intentional.
It's what is called a metaphor.
Had you read Dawkins you might have realised your error here.
You are just avoiding the issue. We are not talking about classical poetry but, supposedly, about computer science.
Please keep on thread.
You always seem keen on correcting people, yet you started this off by talking about primitive people, which I pointed out was wrong because it is simply descriptive language (metaphor). My point is that people get computers wrong because, at some point, human brains were compared to computers, and consequently computers now get compared with humans.