Hi Gray Area
Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point.
As we don't know the necessary and sufficient conditions for conscious experience (i.e. we don't understand the mind-body relationship, and don't even know how we could go about understanding it), we don't know whether sentient AI is possible.
But will it be worth it? Some of us seem to believe that future sentient A.Is will "help humanity progress" and "help make our lives better", but why is it that we expect A.Is to be more altruistic than human beings?
Is it because we expect them to be more intelligent? Does higher intelligence necessarily correlate with greater altruism? Why couldn't it be otherwise?
Is it because we expect them to have a different so-called "brain structure"? Then again, we haven't even collectively decided on "how exactly" we're going to build the brains of sentient A.Is.
Or is it because we expect that it wouldn't take much effort for A.Is to help humanity progress?
We still cannot completely rule out the possibility of future sentient A.Is simply using us for their own benefit, and then casting us out of their technological world once they're done with us. Nor can we completely rule out the possibility of them demanding assimilation and subjugation, rather than treating us as equals. All of this may seem illogical and unjust, but only if one sees it from an altruistic perspective; from an egoistic perspective, it is completely logical and justified.
How can we possibly predict whether future sentient A.Is would be more altruistic than selfish?
I too don't know how to predict a self-learning, unprogrammed AI's nature, or what it would be like to be such a being - or whether our notions of altruism, self, will or anything else would come close to what it's like to be an AI. We're at the stage of trying to build one and seeing what happens. I don't equate intelligence with altruism tho. Human altruism results from a specific evolutionary history as social mammals; if something similar isn't programmed in, I wouldn't expect it to just pop up naturally out of increasingly complex programming. Nor do we know what 'good' might mean to such a critter - maybe it would value the stimulation of ever more information, or tasty electricity, who knows. And it might have no way of empathetically understanding what we value.
Basically, if we create something more intelligent than us with agency we can't control, sci-fi tells us: don't give it legs and keep the off button handy till you know what you're dealing with!
Ideally we'd learn to live and work together for mutual benefit, realising that as sentient creatures they wouldn't just be our slaves. But in a capitalist world where Zuckerberg and Musk types will largely control how we proceed, on the basis of commercial exploitation, that's not very reassuring.