
Will Sentient A.I be more altruistic than selfish?

Posted: December 18th, 2022, 8:32 am
by GrayArea
Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point. But will it be worth it? Some of us seem to believe that future sentient A.Is will "help humanity progress" and "help make our lives better", but why is it that we expect A.Is to be more altruistic than human beings?

Is it because we expect them to be more intelligent? Does higher intelligence necessarily correlate with higher altruism? Why, or why not?

Is it because we expect them to have a different so-called "brain structure"? Then again, we haven't even collectively decided on "how exactly" we're going to build the brains of sentient A.Is.

Or is it because we expect that it wouldn't take much effort for A.Is to help humanity progress?

We still cannot completely rule out the possibility of future sentient A.Is simply using us for their own benefit, and then discarding us once they're done with us. We cannot completely rule out the possibility of them demanding assimilation and subjugation, rather than treating us as equals. All of this may seem illogical and unjust, but only if one sees it from an altruistic perspective. On the other hand, it is completely logical and justified if one sees it from an egoist perspective.

How can we possibly predict whether sentient A.Is in the future would be more altruistic than selfish?

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 18th, 2022, 9:31 am
by Pattern-chaser
GrayArea wrote: December 18th, 2022, 8:32 am Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point. But will it be worth it? Some of us seem to believe that future sentient A.Is will "help humanity progress" and "help make our lives better", but why is it that we expect A.Is to be more altruistic than human beings?

Is it because we expect them to be more intelligent? Does higher intelligence necessarily correlate with higher altruism? Why, or why not?

Is it because we expect them to have a different so-called "brain structure"? Then again, we haven't even collectively decided on "how exactly" we're going to build the brains of sentient A.Is.

Or is it because we expect that it wouldn't take much effort for A.Is to help humanity progress?

We still cannot completely rule out the possibility of future sentient A.Is simply using us for their own benefit, and then discarding us once they're done with us. We cannot completely rule out the possibility of them demanding assimilation and subjugation, rather than treating us as equals. All of this may seem illogical and unjust, but only if one sees it from an altruistic perspective. On the other hand, it is completely logical and justified if one sees it from an egoist perspective.

How can we possibly predict whether sentient A.Is in the future would be more altruistic than selfish?
"How can we possibly be able to predict"? I don't think we can. For as long as the programming of an AI does not allow the unit to change or add to its own programming, there is a reasonable chance that the unit's behaviour can be controlled. Once this is no longer the case, it seems impossible to predict what might happen. 🤔🤔🤔

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 18th, 2022, 10:42 am
by GrayArea
Pattern-chaser wrote: December 18th, 2022, 9:31 am
GrayArea wrote: December 18th, 2022, 8:32 am Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point. But will it be worth it? Some of us seem to believe that future sentient A.Is will "help humanity progress" and "help make our lives better", but why is it that we expect A.Is to be more altruistic than human beings?

Is it because we expect them to be more intelligent? Does higher intelligence necessarily correlate with higher altruism? Why, or why not?

Is it because we expect them to have a different so-called "brain structure"? Then again, we haven't even collectively decided on "how exactly" we're going to build the brains of sentient A.Is.

Or is it because we expect that it wouldn't take much effort for A.Is to help humanity progress?

We still cannot completely rule out the possibility of future sentient A.Is simply using us for their own benefit, and then discarding us once they're done with us. We cannot completely rule out the possibility of them demanding assimilation and subjugation, rather than treating us as equals. All of this may seem illogical and unjust, but only if one sees it from an altruistic perspective. On the other hand, it is completely logical and justified if one sees it from an egoist perspective.

How can we possibly predict whether sentient A.Is in the future would be more altruistic than selfish?
"How can we possibly be able to predict"? I don't think we can. For as long as the programming of an AI does not allow the unit to change or add to its own programming, there is a reasonable chance that the unit's behaviour can be controlled. Once this is no longer the case, it seems impossible to predict what might happen. 🤔🤔🤔
Though I do feel that when it comes to sentient A.Is, scientists will take a direction where all they do is simply create an empty brain—just a vessel for consciousness—and then let the A.I fill it up by itself, making choices fully on its own.

Even though in this scenario the A.I won't be able to change or add to its programming, it would still have the freedom to lean more towards either self-interest or altruism.

Would it be possible for us to control the A.I's behaviors only through controlling its vessel for consciousness, instead of the actual content of its consciousness, of which its behaviors are a part? I'm not entirely sure, but I would be open to the possibility.

Perhaps, as I briefly mentioned before, there could indeed be different ways to model an artificial brain that would incline it more towards either self-interest or altruism. I imagine the kind of artificial brain that has a weaker sense of self would be more likely to be altruistic, and vice versa. Perhaps we could create an artificial brain modeled after the human brain under the effects of psychedelics such as LSD, which are believed to weaken one's sense of self and increase a sense of interconnectedness with the world.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 18th, 2022, 7:59 pm
by Count Lucanor
GrayArea wrote: December 18th, 2022, 8:32 am Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point.
That's a bold statement. There's no sign that the most advanced machines have ever felt anything. They calculate with complex algorithms, but that's not the same as experiencing something.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 18th, 2022, 8:39 pm
by GrayArea
Count Lucanor wrote: December 18th, 2022, 7:59 pm
GrayArea wrote: December 18th, 2022, 8:32 am Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point.
That's a bold statement. There's no sign that the most advanced machines have ever felt anything. They calculate with complex algorithms, but that's not the same as experiencing something.
You are right in that there has been no sentient A.I so far that feels anything. I just think that this has less to do with where we are right now, and more to do with how far we are willing to go.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 18th, 2022, 11:18 pm
by Count Lucanor
GrayArea wrote: December 18th, 2022, 8:39 pm
Count Lucanor wrote: December 18th, 2022, 7:59 pm
GrayArea wrote: December 18th, 2022, 8:32 am Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point.
That's a bold statement. There's no sign that the most advanced machines have ever felt anything. They calculate with complex algorithms, but that's not the same as experiencing something.
You are right in that there has been no sentient A.I so far that feels anything. I just think that this has less to do with where we are right now, and more to do with how far we are willing to go.
Where we are right now does have implications for how far we can go, and how far we can go is certainly a constraint to consider in deciding what we want to achieve and what effort we should invest in it.

https://iep.utm.edu/chinese-room-argument/

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 19th, 2022, 12:03 am
by Leontiskos
Artificial intelligence is not true intelligence. It is a deterministic construct bound by its program. Or better, it is a program with a robotic appendage. One must rather ask about the selfishness or altruism of the coders, for to ask about the selfishness or altruism of an inert program doesn't make any sense.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 19th, 2022, 12:25 pm
by Pattern-chaser
GrayArea wrote: December 18th, 2022, 8:32 am How can we possibly predict whether sentient A.Is in the future would be more altruistic than selfish?
Pattern-chaser wrote: December 18th, 2022, 9:31 am "How can we possibly predict"? I don't think we can. For as long as the programming of an AI does not allow the unit to change or add to its own programming, there is a reasonable chance that the unit's behaviour can be controlled. Once this is no longer the case, it seems impossible to predict what might happen. 🤔🤔🤔
GrayArea wrote: December 18th, 2022, 10:42 am Though I do feel that when it comes to sentient A.Is, scientists will take a direction where all they do is simply create an empty brain—just a vessel for consciousness—and then let the A.I fill it up by itself, making choices fully on its own.

Even though in this scenario the A.I won't be able to change or add to its programming, it would still have the freedom to lean more towards either self-interest or altruism.
OK, I won't quibble about exactly what "programming" refers to. But if the AI has the freedom to "lean towards" this or that, or if it is "just a vessel" that it fills for itself, then it is out of the control of its creators, and its future actions will be unpredictable, getting more so as it continues to make its own 'adaptations' to the world as it 'sees' it.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 19th, 2022, 1:59 pm
by Gertie
Hi Gray Area

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point.

As we don't know what the necessary and sufficient conditions for conscious experience are (ie we don't understand the mind-body relationship, and don't even know how we could go about understanding it), we don't know if sentient AI is possible.
But will it be worth it? Some of us seem to believe that future sentient A.Is will "help humanity progress" and "help make our lives better", but why is it that we expect A.Is to be more altruistic than human beings?


Is it because we expect them to be more intelligent? Does higher intelligence necessarily correlate with higher altruism? Why, or why not?
Is it because we expect them to have a different so-called "brain structure"? Then again, we haven't even collectively decided on "how exactly" we're going to build the brains of sentient A.Is.
Or is it because we expect that it wouldn't take much effort for A.Is to help humanity progress?
We still cannot completely rule out the possibility of future sentient A.Is simply using us for their own benefit, and then discarding us once they're done with us. We cannot completely rule out the possibility of them demanding assimilation and subjugation, rather than treating us as equals. All of this may seem illogical and unjust, but only if one sees it from an altruistic perspective. On the other hand, it is completely logical and justified if one sees it from an egoist perspective.
How can we possibly predict whether sentient A.Is in the future would be more altruistic than selfish?

I too don't know how to predict a self-learning/unprogrammed AI's nature, or what it would be like to be such a being, or whether our notions of altruism, self, will or anything else would be close to what it's like to be an AI. We're at the stage of trying to build one and seeing what happens. I don't equate intelligence with altruism tho. Human altruism results from a specific evolutionary history as social mammals; if something similar isn't programmed in, I wouldn't expect it to naturally pop up via increasingly complex programming processes. And who knows what 'good' might mean to such a critter - maybe the satisfaction of more information stimulation would be what it values, or tasty electricity. And it might have no way of empathetically understanding what we value.

Basically if we create something more intelligent than us  with agency we can't control, sci fi tells us don't give it legs and keep the off button handy till you know what you're dealing with! 

Ideally we'd learn to live and work together for mutual benefit, realising that as sentient creatures they'd not just be our slaves. But in a capitalist world where Zuckerberg and Musk types will be largely controlling the way we proceed based on commercial exploitation, it's not very reassuring.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 19th, 2022, 2:00 pm
by Gertie
oops sorry for messing the quotes up

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 19th, 2022, 3:48 pm
by Moreno
Pattern-chaser wrote: December 18th, 2022, 9:31 am "How can we possibly predict"? I don't think we can. For as long as the programming of an AI does not allow the unit to change or add to its own programming, there is a reasonable chance that the unit's behaviour can be controlled. Once this is no longer the case, it seems impossible to predict what might happen. 🤔🤔🤔
Even if we don't allow the unit to change or add to its own programming, as long as it is extremely complicated, we don't know what will happen. They estimated the costs of preventing (more) problems from Y2K at 100 billion dollars in the US, and that was actually a fairly easy set of problems to predict. It does matter whether the AI is somehow connected to the web and/or can manage to connect itself. But I am skeptical that we have the ability to know and control all variables. And now problems are global. Nanotech, GM products and AI may all affect every single cell on the planet. Global warming, it seems to me, is less of a threat. Yes, it could cause billions of deaths, but life, including human life, would continue. Mess up with these other things.....

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 19th, 2022, 5:24 pm
by Leontiskos
GrayArea wrote: December 18th, 2022, 8:32 am Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point. But will it be worth it? Some of us seem to believe that future sentient A.Is will "help humanity progress" and "help make our lives better", but why is it that we expect A.Is to be more altruistic than human beings?
Artificial intelligence is not true intelligence. It is a deterministic construct bound by its program. Or better, it is a program with a robotic appendage. One must rather ask about the selfishness or altruism of the coders, for to ask about the selfishness or altruism of an inert program doesn't make any sense.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 20th, 2022, 6:36 am
by Stoppelmann
Bernardo Kastrup was asked whether his philosophical position, namely analytic idealism, provides any basis for claims that silicon computers are conscious. He says, "None whatsoever, because what analytic idealism says is that everything is in consciousness, not that everything is conscious; these are two completely different statements."

In a video, he goes on to say the following (I've added a small illustrative code sketch after the video link, for those unfamiliar with the neural-network machinery he mentions):
Now what about AI, artificial intelligence? I have something to say about this because it's a topic very close to me. I also have a doctorate in computer engineering, computer science, and I did work with AI even when I was at CERN. Back in the day, the theory of AI in the 90s was largely the same as today; the only difference is that we have faster computers today, so we can do much more. But the theory is still largely the same: neural networks, backpropagation, you know, non-linear transfer functions, all that good stuff. And already in the 90s we could build data acquisition systems for physics experiments at CERN that could identify physics data just as well as a physicist, but much faster. It could make a decision every 25 nanoseconds, so you could say that the artificial neural networks we built back then were, at least for that class of problem, as intelligent as a human physicist. So they were intelligent. And I would say yes, intelligence is a measurable property of a system; you can measure it from the outside. You can measure how a system responds to environmental challenges and responds to data, so it's objectively measurable, and we can build artificially intelligent systems. We are already doing that, and I see no a priori reason why we couldn't in the future build a system that is as intelligent as a human for many more classes of problems, perhaps even all classes of problems that a human comes across. The problem is that in the AI community they often conflate intelligence with consciousness. So they think that an artificially intelligent computer is also a conscious computer, in the sense of having a private experiential inner life of its own, its own subjective perspective into the world. But these two things are completely different.

Consciousness is not an objectively measurable property from the outside; there is no way to determine whether a computer or a calculator or an abacus has its own conscious point of view into the world. The only way to know is to be the thing; the only way to know if a computer is conscious is to be the computer. And this conflation leads to all kinds of absurd implications. You might think that for someone like me, who says consciousness is the fabric of reality, the implication is that computers are conscious: computers exist, existence is at a foundational level consciousness, so everything's conscious. No! Absolutely not! There is a fundamental difference between the following two statements:
Statement number one: everything is in consciousness and made of consciousness.
Statement number two: everything is conscious in and of itself.

To say that everything is in consciousness is different from saying that everything is conscious. When we say that a computer is conscious, what we mean is that it has its own dissociated private inner life, and idealism does not imply that that is the case at all. Under idealism there are dissociated alters of the universal consciousness, and living beings are examples of those, but not computers. Well, why make this difference? For the same reason that I don't think a cup is conscious, or that the floor tiles are conscious, or that this chair is conscious. Nature tells us empirically that we are conscious; we have a private conscious inner life of our own. I cannot read your thoughts, and you cannot read mine. My conscious inner life is private. Now, your behaviour is analogous to mine, and you are analogous to me in structure and medium. You are a metabolizing, carbon-based, wet, moist, living creature whose behaviour is analogous to mine, so I have very good empirical reasons to think that you too have a private conscious inner life of your own. And I could play this game down to bacteria. My cats look different from me from the outside, but if I zoom in with a microscope, they're identical to me. They are also carbon-based, warm, moist organisms that metabolize, that do DNA transcription, protein folding, ATP burning, mitosis, all that good stuff that inheres in metabolism. Even an amoeba metabolizes, and even an amoeba or a paramecium, single-celled organisms, have behaviour in some way analogous to mine. Paramecia go after food, and they run from danger. Amoebae construct little houses out of mud particles, and they metabolize at a microscopic level. They are very much like me, so I grant them the hypothesis that they too have a conscious inner life of their own. Whereas a silicon computer is a completely different thing: it's not a carbon-based, warm, moist organism that metabolizes; it's a silicon-based thing that operates according to electric fields and switches that open and close.

We have no empirical reason to think that silicon computers too are what dissociative processes in the mind of nature look like. Absolutely no empirical reason to make that jump. It's an entirely arbitrary jump, and the reason this jump is made in the AI community is the following: AI researchers confuse computation with consciousness. Computation is a concept we created. We invented the notion of computation, and we invented it in such a way as to abstract from the medium. So an abacus made of wood computes, and a modern computer made of silicon and running on electricity computes, because we defined the meaning of the word computation to be independent of the underlying medium. Anything can compute if it changes states. The light switch in your living room can have two states, you know, lights on, lights off. You flip it between two states, and that's a computation. Why? Because we defined the concept of computation such that it abstracts away from the medium and focuses only on state changes, on and off.

So, computation is medium-independent by definition, and then the AI researchers say, well, consciousness too. But no, consciousness is not something we invented. It's not a theoretical abstraction, a theoretical concept; it's the thing we are before we begin to theorize. It precedes theory. You are not free to just define consciousness the way you want. I mean, you can do that, but then you are playing your own game in your own private world, like a wild potato underground, as the B-52s used to say. Consciousness, the thing most people refer to, is nature-given; it's something that precedes theory, and it is not medium-independent unless you redefine it arbitrarily and create your own language. We are not at liberty to think of consciousness as independent of the medium, and by consciousness here I mean a dissociated private conscious inner life of the type you and I have. We are not at liberty to separate that from its medium, because nature is telling us it seems to happen only in a certain medium, namely biology: warm, moist, carbon-based organisms that metabolize. But AI people conflate computation with consciousness, and they think they can give birth to a privately conscious being made of silicon computers. Freud used to talk of penis envy, the envy women have of men because men have an extra part to their bodies. I like to call this phenomenon in the AI community "womb envy", because this is the envy men have of the capacity of women to give birth to privately conscious entities in nature. So they try to make up for it by conflating computation with consciousness and indulging in entirely arbitrary fantasy.

Now, let me try to drive home to you why I think this is pure fantasy. I can run a simulation of kidney function on my home computer, a simulation accurate down to the molecular level. Does that mean that my computer will urinate on my desk? Of course not! Because a simulation is not the same as the thing simulated, and we all understand that, whether it comes to pee or to anything else. But when it comes to consciousness, because it is such a discombobulating mystery under the arbitrary assumptions of materialism, we don't have that intuition, and we think that if a silicon computer simulates the patterns of information flow in the human brain, then the computer will be conscious. Now, I submit to you that this is as absurd as thinking that because I simulate kidney function on my computer, my computer will pee on my desk. It's as arbitrary and nonsensical a thought step as in the kidney case, but people don't see that.
[yid]https://www.youtube.com/watch?v=5YYpS4FXmz8[/yid]
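
As a footnote to the transcript: for anyone unfamiliar with the machinery Kastrup name-drops (neural networks, backpropagation, non-linear transfer functions), here is a minimal toy sketch in Python. It is my own illustration, not Kastrup's actual CERN code: a tiny two-layer network with a sigmoid transfer function, trained by backpropagation to compute XOR.

[code]
# Toy illustration (mine, not Kastrup's CERN system): a tiny two-layer
# neural network with a sigmoid (non-linear) transfer function, trained
# by backpropagation to learn XOR.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: two layers of weighted sums plus non-linear transfer.
    h = sigmoid(X @ W1)    # hidden activations
    out = sigmoid(h @ W2)  # network output

    # Backpropagation: push the output error back through both layers.
    err = out - y
    grad_out = err * out * (1 - out)          # error signal at the output
    grad_h = (grad_out @ W2.T) * h * (1 - h)  # error signal at the hidden layer

    W2 -= 0.5 * h.T @ grad_out  # gradient-descent weight updates
    W1 -= 0.5 * X.T @ grad_h

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]
[/code]

Measured from the outside, the trained network "knows" XOR, which is Kastrup's point about intelligence being objectively measurable; nothing in the arithmetic tells us whether there is anything it is like to be the network.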

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 20th, 2022, 7:29 am
by Moreno
Leontiskos wrote: December 19th, 2022, 5:24 pm
GrayArea wrote: December 18th, 2022, 8:32 am Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point. But will it be worth it? Some of us seem to believe that future sentient A.Is will "help humanity progress" and "help make our lives better", but why is it that we expect A.Is to be more altruistic than human beings?
Artificial intelligence is not true intelligence. It is a deterministic construct bound by its program. Or better, it is a program with a robotic appendage. One must rather ask about the selfishness or altruism of the coders, for to ask about the selfishness or altruism of an inert program doesn't make any sense.
But they don't just program the computer's behavior. They are allowing for learning, which means that new behaviors and heuristics can be arrived at by the AIs themselves. This may not entail that it is conscious or self-aware, but as far as its behavior and the potential dangers AI presents are concerned, this does not matter. Just as it could adjust the electric grid, say, for our benefit, making it more efficient, it could 'decide' to have different goals, even as a non-conscious learning thingie. They can develop their own goals, given that AI creators are modeling AIs on neural networks that are extremely flexible, rather than on the now-primitive "if X, then do Y" type of programming.
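
To make that contrast concrete, here is a deliberately tiny sketch in Python (my own invented example with made-up numbers, not any real grid controller). The first controller is the "if X, then do Y" style, where every behavior is written out in advance; the second learns its threshold from data, so its eventual behavior appears nowhere in the code.

[code]
# Toy contrast (invented example): hard-coded rules vs. learned behavior.

# Style 1: "if X, then do Y" -- every behavior is spelled out by the coder.
def rule_based_controller(demand):
    if demand > 100:
        return "spin up generator"
    return "idle"

# Style 2: a learned threshold -- the coder writes the learning procedure,
# not the resulting behavior, which depends on whatever data arrives.
def train_threshold(history):
    """history: list of (demand, overload_occurred) observations."""
    overloads = [d for d, overloaded in history if overloaded]
    # Learn to act just below the smallest demand that ever caused overload.
    return min(overloads) * 0.9 if overloads else float("inf")

history = [(80, False), (120, True), (95, False), (110, True)]
threshold = train_threshold(history)  # 99.0 here, set by the data, not the code

def learned_controller(demand):
    return "spin up generator" if demand > threshold else "idle"

print(rule_based_controller(105))  # behavior fixed when the code was written
print(learned_controller(105))     # behavior fixed by the training data
[/code]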

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 20th, 2022, 1:16 pm
by Leontiskos
Moreno wrote: December 20th, 2022, 7:29 am
Leontiskos wrote: December 19th, 2022, 5:24 pm
GrayArea wrote: December 18th, 2022, 8:32 am Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point. But will it be worth it? Some of us seem to believe that future sentient A.Is will "help humanity progress" and "help make our lives better", but why is it that we expect A.Is to be more altruistic than human beings?
Artificial intelligence is not true intelligence. It is a deterministic construct bound by its program. Or better, it is a program with a robotic appendage. One must rather ask about the selfishness or altruism of the coders, for to ask about the selfishness or altruism of an inert program doesn't make any sense.
But they don't just program the computer's behavior. They are allowing for learning, which means that new behaviors and heuristics can be arrived at by the AIs themselves. This may not entail that it is conscious or self-aware, but as far as its behavior and the potential dangers AI presents are concerned, this does not matter. Just as it could adjust the electric grid, say, for our benefit, making it more efficient, it could 'decide' to have different goals, even as a non-conscious learning thingie. They can develop their own goals, given that AI creators are modeling AIs on neural networks that are extremely flexible, rather than on the now-primitive "if X, then do Y" type of programming.
I am a computer scientist by trade. You should be putting many more words than "decision" in scare quotes. You should also be using scare quotes with words like "learning", "develop their own goals," etc. All of the "learning" and "decisions" are predetermined by the code. It doesn't matter that the code generates second-order behavior; it is still deterministically derived from the code and completely different from true intelligence. Like any program, any inputs that the AI receives should be anticipated and accounted for by the programmer. That a programmer does not fully understand the code he writes does not mean that his program is sentient.
Everything I said in my first post holds.
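
To illustrate what I mean by deterministic second-order behavior, consider this toy sketch in Python (an invented example, not anyone's production system). The program "learns" a weight from data and "decides" accordingly, yet rerun with the same inputs and the same seed it produces identical output every time. Nothing here is any less determined by code and input than a pocket calculator.

[code]
# Toy illustration (invented example): "learning" that is fully deterministic.
import random

def train_and_decide(data, seed=42):
    rng = random.Random(seed)        # fixed seed: no hidden indeterminism
    weight = rng.uniform(-1.0, 1.0)  # "random" initial weight
    for x, target in data:           # simple gradient-style updates
        weight -= 0.1 * (weight * x - target) * x
    return "act" if weight > 0 else "wait"

data = [(1.0, 0.5), (2.0, 1.2), (1.5, 0.7)]

# The "decision" was never written out as a rule, yet it is completely
# reproducible: same code, same inputs, same seed -> same output, always.
print(train_and_decide(data))
print(train_and_decide(data))  # identical, on every run, on every machine
[/code]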