Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: October 11th, 2024, 6:44 pm
To reduce confusion and make the discussion more readable, let’s boil things down.
Side issue 1) You claim that the Earth is unimportant. It would seem that, according to your definition, all planets are unimportant, which makes no sense.
Earth is important because it is important to us. We are important, people are important, animals are important, plants are important – to ourselves, at least. Since there appears to be no one else around in this part of the cosmos, our views matter most.
Side issue 2) You claim that geology does not evolve, only biology, as per the textbooks. "Evolution" was defined at a time when scientists did not know what we know today about the connections between geology and biology. That's why the field of geobiology was created. There was an entire evolution of Earth's chemistry that made abiogenesis possible.
The question of whether the technical meaning of evolution needs to be expanded to better describe what nature is really like could be a topic in itself.
----
Main issue: You claim that the idea of self-replicating, self-improving machines (SRSIMs) is simply science fiction, and unworthy of consideration.
However, self-replicating AIs have already been developed, and self-improving AI is regarded by serious observers not just as a possibility, but as a potential existential risk.
The idea that AI research will not produce SRSIMs in, say, the next thousand years only makes sense if you believe human societies will soon no longer exist, that we are at The End of Days.
If we are not on the verge of global nuclear holocaust, then in the next million years, the advancement of AI will be at least as far beyond our comprehension as the internet would be beyond a Neanderthal’s comprehension.
It would take a brave philosopher to claim that AI development over a million or billion years would not generate a new kind of sentience.
Again, you disregard deep time. I suppose that's because it’s hard to predict so far ahead and one cannot be sure about anything. Yet you are confident that, over deep time, AI cannot possibly develop any kind of sentience. Why would AI, over deep time, never take advantage of the obvious utility of sentience? It's not a matter of teleology, as you imply, but logic. Sentience is obviously useful. If it weren't, it would not have become so widespread.
To be fair, AI might (rightly) assess that sentience is the source of suffering, and decline it, in the spirit of Benatar. However, it might not be in control. As AI complexifies, there will surely be unexpected emergences.
One would expect that, if not sentience, AI would evolve some kind of equivalent. As Lagaya suggested, if a form of sentience is useful to future AI's operations, then it will emerge through competition.
The merging of biology and technology is another potential pathway towards AI sentience.