GrayArea wrote: ↑December 18th, 2022, 8:32 am Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point. But will it be worth it? Some of us seem to believe that future sentient A.I.s will "help humanity progress" and "help make our lives better", but why is it that we expect A.I.s to be more altruistic than human beings?
Leontiskos wrote: ↑December 19th, 2022, 5:24 pm Artificial intelligence is not true intelligence. It is a deterministic construct bound by its program. Or better, it is a program with a robotic appendage. One must rather ask about the selfishness or altruism of the coders, for it makes no sense to ask about the selfishness or altruism of an inert program.
Moreno wrote: ↑December 20th, 2022, 7:29 am But they don't just program the computer's behavior. They allow for learning, which means that new behaviors and heuristics can be arrived at by the AIs. This may not entail that an AI is conscious or self-aware, but as far as its behavior and the potential dangers it presents are concerned, that does not matter. Just as it could adjust the electric grid for our benefit, say, making it more efficient, it could 'decide' to pursue different goals, even as a non-conscious learning thingie. AIs can develop their own goals because their creators are modeling them on neural networks that are extremely flexible, rather than on the now-primitive 'if X, then do Y' type of programming.
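To make Moreno's contrast concrete, here is a minimal sketch in Python. The scenario and all the names (the grid-load controller, the training data) are hypothetical, and the learned part is just a toy perceptron, not how real systems are built; but it shows the relevant difference: in the first function every behavior is a rule the coder wrote down, while in the second the coder only writes the learning procedure, and the actual decision boundary is "arrived at" from data.

# Hypothetical illustration of "if X, then do Y" vs. learned behavior.

# 1. Classic rule-based programming: the behavior IS the code.
def rule_based_controller(grid_load: float) -> str:
    if grid_load > 0.9:
        return "shed load"
    return "normal operation"

# 2. A learned controller: the coder writes only the learning rule
#    (a toy perceptron update); the threshold emerges from data.
def train_learned_controller(samples, labels, epochs=1000, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1.0 if w * x + b > 0 else 0.0
            err = y - pred
            w += lr * err * x   # perceptron-style weight update
            b += lr * err
    return lambda x: "shed load" if w * x + b > 0 else "normal operation"

# The resulting policy depends on the training data, not on any
# explicit rule the programmer wrote:
controller = train_learned_controller([0.2, 0.5, 0.95, 0.99], [0, 0, 1, 1])
print(controller(0.97))  # behavior was learned, not hand-coded

Even in this tiny example, nobody typed the threshold into the second controller; change the training data and its behavior changes, which is the sense in which a learning system's goals are not fixed in advance by its coders.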
The discussion you are having is an important aspect of this topic, to be sure. But I think we could (should?) also consider the nature of non-artificial "intelligence". Ours is based on a biological platform, not a silicon one, and its evolution was quite different from a programming project, but perhaps the results are similar, or even the same? Would it be acceptable or useful, do we think, to consider our own 'programming' (i.e. our intelligence) in this light?
"Who cares, wins"