
Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 8th, 2024, 3:17 am
by Lagayscienza
Count Lucanor wrote: October 7th, 2024, 9:52 am
Lagayscienza wrote: October 7th, 2024, 5:28 am What is life? There are lots of definitions of life but if, as is often the case, life is defined as some combination of energy use, growth, reproduction, response to stimuli, complex goal directed behaviours and adaptation to the environment originating from within an organism, then what prevents us from saying that machines which exhibit these characteristics are life forms?
Nothing would prevent us, right, since it would show no essential difference from life as we know it.
Lagayscienza wrote: October 7th, 2024, 5:28 am
And why could these non-organic "life" forms not become as intelligent or even vastly more intelligent than us? The fact that something hasn’t yet happened or been done is no guarantee that it can’t happen or be done.
Sure, but who says it can’t happen? Not me. I’m saying we haven’t seen non-organic life yet, nor anything of that sort in the path of becoming that other type of life, much less intelligent. That view stands against the view of AI enthusiasts, who think we are already somewhere in that path, even though some of them will say that we’re at the beginning and there’s still a long way to go. But you won’t get from electronic circuits more than simulations of life, agency and intelligence, every time under the control of humans.
Neurons are electronic circuits of a sort. Does the fact that neurons are made of meat instead of metal make a lot of difference?

I am not an AI enthusiast, although I do find the idea of intelligent machines interesting. However, I doubt we will see truly intelligent autonomous, self-replicating and self-improving machines in my lifetime, and perhaps not even in my grandkids' lifetimes. But I believe it could happen eventually. If blind, undirected evolution can come up with an intelligence like ours over 4 billion years, I wonder what could eventually be achieved, and how quickly, with the application of intelligence, reason and science.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 8th, 2024, 4:02 am
by Sy Borg
Count Lucanor wrote: October 7th, 2024, 2:00 am
Sy Borg wrote: October 6th, 2024, 4:31 pm
Count Lucanor wrote: October 6th, 2024, 2:37 pm
Sy Borg wrote: October 4th, 2024, 4:48 pm To start, I must object to your referring to Earth as a "ridiculously unimportant planet". That's like saying that the brain is an unimportant organ. I will assume it's just a moment of Douglas Adams-esque whimsy. Of course, the Earth is truly extraordinary. Every other known world is either relatively inert and barren or smothered in radioactive or toxic gas.

Earth is still evolving, obviously. It would be bizarre to imagine humans as the most sentient/sapient possible entity, when we are still largely chaotic apes. Meanwhile, life propagates and spreads, and it is looking for a way to spread out from Earth to other worlds.

In the future, autonomous self-improving machines will be sent to other worlds, where they will use the local raw materials to develop. What might that look like in a hundred million years' time? Sentience, or perhaps mentalities far more advanced than we can imagine, could evolve.

Humans tend to underestimate how much can happen in deep time due to our short life spans, hence the existence of evolution denial. Evolution seems like magic to deniers because they cannot viscerally imagine the weight of years over deep time. It took biology over a billion years to grow even a rudimentary brain.

I'm not saying that we are likely to see sentient machines in our lifetimes. We might, but I doubt it for basically the same reasons as your ABCD logic above.

As a side note, limiting the term "life" to just biology is more biocentric than logical, which is why the field of geobiology had to be developed, not to mention the unresolved status of viruses and prions.
From the human point of view, Earth is extraordinary, surely. Importance of things is, anyway, necessarily a human construct, which is fine. But humans can take a broader look and see their place in the universe to reach the humble conclusion that our planet is a small speck of dust in a huge universe. By all our standards of "importance", such as the effect we can produce on the rest of the universe, we are nothing. Our exceptionality (life, intelligence) can be placed alongside the exceptionality of other worlds.

I leave the speculations about the future to futurologists. Concerned with what we have in front of our eyes, and instructed by reason, I see nothing suggesting that life and intelligence have been replicated even in an elementary form. Simulations are just simulations.
The Earth is, as far as we know, the only place for many trillions of miles with any sentience. In a sense, though, the Sun IS the solar system, comprising about 99.86% of its mass, making the planets, including Earth, rather like chunks of the Sun's extended atmosphere.

Likewise, the last few years of rapid AI advancement are a mere blink in evolutionary time. To judge AI based on its current form is akin to assuming that a human blastocyst in a pregnant woman will never develop into anything more sophisticated.
The Earth could be wiped out tomorrow by an asteroid or some other cataclysm, making life as we know it disappear. That would have no effect beyond the solar system, and the cosmos would continue doing business as usual.
It would have a massive effect. It would remove sentience from the solar system. No, we won't be expecting the planet Uranus, for example, to be upset about the loss of sentience, but this is one system, and we are part of the system's sentient portion.



Count Lucanor wrote: October 7th, 2024, 2:00 am In assessing the possibilities of AI, we should separate the technology as it is from what futurologists imply with the label "artificial intelligence". So, let's put aside that label for a moment: we have a technology that, as all mechanistic technologies, is instrumental for humans achieving much better performance in tasks than what they could do with their own hands or intellect. We managed to travel faster with the railroad, automobiles, etc. We devised ways to travel through the air, sea, etc. None of those technologies actually emulated our capacities to walk, run or swim, nor the flight of birds or insects, and when we tried flapping wings, we failed ridiculously. No one would call railroad or automobile technology "artificial walking or running", nor call airplanes "artificial bird-flying". They are something else that achieves what we could not achieve by other non-technological means. So, there's no doubt that that thing installed in computers and other electronic devices is doing remarkably well, no doubt that it outperforms humans' natural abilities, as all technologies have done in the past, and no doubt that, in the hands of humans, it will likely achieve more in the future. But that's not the issue as presented by futurologists, AI enthusiasts, etc. The issue is what Lagayscienza explained in his last post:
The question then is whether intelligence can be housed in a non-biological substrate - that is, by building it into autonomous, self-replicating, self-improving machines and skipping abiogenesis and evolution by natural selection. I can imagine machines with onboard 3D printers that can copy themselves and insert a copy of their "genomes", their blueprint, into those copies. Those copies could repeat the process. Such machines could harvest local free energy to power the process.
Lagaya's question is pertinent. There's no known reason why sentience can't exist in a non-biological substrate. Just that, so far, it's not happened. What Lagaya is talking about looks to be achievable this century. What could develop in millions of years? It would take a brave philosopher to predict that sentience re-evolving over deep time is impossible.
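As a side note, the narrowly computational part of Lagaya's blueprint-copying idea, a program that carries and reproduces its own description, is already routine. A minimal Python quine as a sketch (comments omitted so that the two lines reproduce themselves exactly):

[code]
s = 's = %r\nprint(s %% s)'
print(s % s)
[/code]

Running it prints its own two-line source, the software analogue of inserting a copy of the "genome" into the copy; the hard part of the scenario is the physical harvesting and fabrication, not the copying.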
Count Lucanor wrote: October 7th, 2024, 2:00 am As everyone knows, I categorically deny that any of those attributes is present in any current technology, not even in a basic, primitive form. Furthermore, I completely reject the idea that any current technology is moving towards achieving that goal in the future, which is why there's not much reason to predict that it will evolve into something that lies on that path. Just as flapping wings would never have achieved "bird-flying", algorithms in electronic devices are not on the path of producing autonomous, conscious beings, imbued with volition and social mechanisms of interaction, that can control the domain of human culture and even replace humans. That's just bonkers from sci-fi books.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 8th, 2024, 9:34 am
by Count Lucanor
Lagayscienza wrote: October 8th, 2024, 3:17 am
Count Lucanor wrote: October 7th, 2024, 9:52 am
Lagayscienza wrote: October 7th, 2024, 5:28 am What is life? There are lots of definitions of life but if, as is often the case, life is defined as some combination of energy use, growth, reproduction, response to stimuli, complex goal directed behaviours and adaptation to the environment originating from within an organism, then what prevents us from saying that machines which exhibit these characteristics are life forms?
Nothing would prevent us, right, since it would show no essential difference from life as we know it.
Lagayscienza wrote: October 7th, 2024, 5:28 am
And why could these non-organic "life" forms not become as intelligent or even vastly more intelligent than us? The fact that something hasn’t yet happened or been done is no guarantee that it can’t happen or be done.
Sure, but who says it can’t happen? Not me. I’m saying we haven’t seen non-organic life yet, nor anything of that sort in the path of becoming that other type of life, much less intelligent. That view stands against the view of AI enthusiasts, who think we are already somewhere in that path, even though some of them will say that we’re at the beginning and there’s still a long way to go. But you won’t get from electronic circuits more than simulations of life, agency and intelligence, every time under the control of humans.
Neurons are electronic circuits of a sort. Does the fact that neurons are made of meat instead of metal make a lot of difference?
Electronic circuits are utility grids of a sort. That doesn't make the likelihood of one functioning like the other a reasonable bet.
Lagayscienza wrote: October 8th, 2024, 3:17 am I am not an AI enthusiast, although I do find the idea of intelligent machines interesting. However, I doubt we will see truly intelligent autonomous, self-replicating and self-improving machines in my lifetime, and perhaps not even in my grandkids' lifetimes. But I believe it could happen eventually. If blind, undirected evolution can come up with an intelligence like ours over 4 billion years, I wonder what could eventually be achieved, and how quickly, with the application of intelligence, reason and science.
The more distant the future, the less the chances of guessing correctly what humankind will achieve. A citizen of the Roman Empire could not have imagined the possibility of telecommunications; but more than that, thinking that telecommunications was predetermined to appear at some time in history is a tricky device of human imagination, one that does not take into account the contingencies of culture and even natural history. So, yes, many things which are not technically possible today, such as teletransportation, could happen in the future. But no one knows, and there's no sign today of that being achievable. Whoever plays today with the idea as a real possibility can rightly be called a "teletransportation enthusiast". The same with the enthusiasm for AI.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 8th, 2024, 10:25 am
by Count Lucanor
Sy Borg wrote: October 8th, 2024, 4:02 am
Count Lucanor wrote: October 7th, 2024, 2:00 am
The Earth could be wiped out tomorrow by an asteroid or some other cataclysm, making life as we know it disappear. That would have no effect beyond the solar system, and the cosmos would continue doing business as usual.
It would have a massive effect. It would remove sentience from the solar system. No, we won't be expecting the planet Uranus, for example, to be upset about the loss of sentience, but this is one system, and we are part of the system's sentient portion.
There are, roughly speaking, at least 200,000,000,000 galaxies in the observable universe, each one measuring tens to hundreds of thousands of light years in diameter, separated by even larger distances, and each containing around 200,000,000,000 stars. Our sun is one of those stars in one of those galaxies, but yeah, what happens on Earth "would have a massive effect". The closest star to our sun is Alpha Centauri, 4.2 light years away, which means that with our current fastest technology we could reach it in around 8,000 years, and come back in another 8,000 years. That's how isolated we are and how ridiculously unimportant.
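A back-of-envelope check of that figure, assuming, purely for illustration, a probe that could somehow sustain the ~160 km/s record heliocentric speed of the Parker Solar Probe:

[code]
distance: 4.2 ly x 9.46e12 km/ly  ~ 4.0e13 km
speed:    ~160 km/s (assumed sustainable for the whole cruise)
time:     4.0e13 km / 160 km/s    ~ 2.5e11 s  ~ 7,900 years one way
[/code]

Slower craft, like the Voyagers at ~17 km/s, would need on the order of 75,000 years.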
Sy Borg wrote: October 8th, 2024, 4:02 am Lagaya's question is pertinent. There's no known reason why sentience can't exist in a non-biological substrate. Just that, so far, it's not happened.
You're right, there's no known reason why sentience would exist in a non-biological substrate; that's what I've been saying. That it hasn't happened does not allow us to raise our estimate of the chances of it happening.
Sy Borg wrote: October 8th, 2024, 4:02 am What Lagaya is talking about looks to be achievable this century.
There's nothing concrete today pointing to any chance of ever achieving it. We could make estimations if we already had some technology at hand, as primitive as it could be, but the fact is that we have nothing, zero, none. That thing called AI is not that primitive technology, just as flapping wings would never have achieved "bird-flying". Algorithms in electronic devices are not on the path of producing autonomous, conscious beings, imbued with volition and social mechanisms of interaction, that can control the domain of human culture and even replace humans. That's just bonkers from sci-fi books.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 8th, 2024, 11:23 am
by The Beast
My favorite food is an aged prime rib steak from a sentient cow, cooked medium (faded pink) and covered in a crust of spices. I am frowning at the idea of a sentient artificial cow. How would you cook it? However, I see great potential in the idea/thought of an intelligent artificial cow inside a herd of sentient cows. I am open to debate on the possibility that jumping into the neighbors' greener pastures is a matter of intelligence (finding a way).

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 8th, 2024, 10:25 pm
by Sy Borg
Count Lucanor wrote: October 8th, 2024, 10:25 am
Sy Borg wrote: October 8th, 2024, 4:02 am
Count Lucanor wrote: October 7th, 2024, 2:00 am
The Earth could be wiped out tomorrow by an asteroid or some other cataclysm, making life as we know it disappear. That would have no effect beyond the solar system, and the cosmos would continue doing business as usual.
It would have a massive effect. It would remove sentience from the solar system. No, we won't be expecting the planet Uranus, for example, to be upset about the loss of sentience, but this is one system, and we are part of the system's sentient portion.
There are, roughly speaking, at least 200,000,000,000 galaxies in the observable universe, each one measuring tens to hundreds of thousands of light years in diameter, separated by even larger distances, and each containing around 200,000,000,000 stars. Our sun is one of those stars in one of those galaxies, but yeah, what happens on Earth "would have a massive effect". The closest star to our sun is Alpha Centauri, 4.2 light years away, which means that with our current fastest technology we could reach it in around 8,000 years, and come back in another 8,000 years. That's how isolated we are and how ridiculously unimportant.
As far as we know, none of those 200,000,000,000 stars have spawned sentient life. That means that life on Earth is not only important, but critical to a universe that's not just plasma, gravity wells, rocks, gas, and radiation.

It's fashionable to dismiss life on Earth as a trivial anomaly in the greater scheme of things. It seems like a sophisticated and broad-minded approach, but it is ultimately dismissive of the most complex and interesting things that the cosmos has produced.

Count Lucanor wrote: October 8th, 2024, 10:25 am
Sy Borg wrote: October 8th, 2024, 4:02 am Lagaya's question is pertinent. There's no known reason why sentience can't exist in a non-biological substrate. Just that, so far, it's not happened.
You're right, there's no known reason why sentience would exist in a non-biological substrate; that's what I've been saying. That it hasn't happened does not allow us to raise our estimate of the chances of it happening.
If there's no reason why not, then the chances are that it will happen. That's the direction that life on Earth is heading.
Count Lucanor wrote: October 8th, 2024, 10:25 am
Sy Borg wrote: October 8th, 2024, 4:02 am What Lagaya is talking about looks to be achievable this century.
There's nothing concrete today pointing to any chance of ever achieving it. We could make estimations if we already had some technology at hand, as primitive as it could be, but the fact is that we have nothing, zero, none. That thing called AI is not that primitive technology, just as flapping wings would never have achieved "bird-flying". Algorithms in electronic devices are not on the path of producing autonomous, conscious beings, imbued with volition and social mechanisms of interaction, that can control the domain of human culture and even replace humans. That's just bonkers from sci-fi books.
A thousand years ago, the internet was not merely considered impossible; the possibility was impossible even to anticipate. Your reply seems to be looking at the situation through human lifespans, which is one way of making certain that you will be wrong.

When life emerged it was based on simple rules. Could LUCA or its predecessors (which certainly would have existed but did not survive long-term) have been considered the starting point of today's biosphere, if we didn't know what we know? It would be unthinkable. Your argument reminds me of those of creationists who claim that evolution is impossible. Like you, they think in terms of human life spans and profoundly underestimate deep time.

Deep time, like logarithms, is not intuitive. It is so divorced from our experiences that we can only consider it in the abstract, e.g., rock x is y years old, star x is z years old.

Unlike in sci-fi stories, I doubt that AI will replace humans (hmm, come to think of it, has there ever been a sci-fi story where humans were completely replaced by AI?). Whatever, humans look likely to be out-competed by technologically-enhanced humans, cyborgs, just as H. sapiens out-competed other hominids.

AI, however, has the potential to outlast all biological life on Earth, which will be erased as the Sun heats up. In a billion years' time, the oceans are expected to boil away, but Earth will be uninhabitable long before that time. It all depends on whether self-improving and self-replicating AI can be created and distributed to other worlds before a catastrophe like global nuclear war or a "planet-killer" asteroid strikes.

Once self-improving and self-replicating AI is set in motion, evolution will do the rest. There is no guarantee that these entities will become sentient, not as we know it, but there will probably be a different kind of sentience, one that favours persistence.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 9th, 2024, 4:17 pm
by Count Lucanor
Sy Borg wrote: October 8th, 2024, 10:25 pm
As far as we know, none of those 200,000,000,000 stars have spawned sentient life. That means that life on Earth is not only important, but critical to a universe that's not just plasma, gravity wells, rocks, gas, and radiation.

It's fashionable to dismiss life on Earth as a trivial anomaly in the greater scheme of things. It seems like a sophisticated and broad-minded approach, but it is ultimately dismissive of the most complex and interesting things that the cosmos has produced.
The 0.0000001% of anything is insignificant for the 99.9999999% left, except only for that 0.0000001%. Don't get me wrong: when I was being accused of anthropocentrism because of the exceptionalism of human life, I was pretty much aware of what such exceptionalism implies from a human point of view, even though the idea was dismissed as a triviality in a greater scheme.
Sy Borg wrote: October 8th, 2024, 10:25 pm
If there's no reason why not, then the chances are that it will happen.
Something not happening does not increase the chances of ever happening. When talking about technology, we are not merely constrained by chances, but by actual technical feasibility.
Sy Borg wrote: October 8th, 2024, 10:25 pm A thousand years ago, the internet was not merely considered impossible; the possibility was impossible even to anticipate. Your reply seems to be looking at the situation through human lifespans, which is one way of making certain that you will be wrong.
The trick of the mind is to think that the internet was predestined to exist. It didn't appear as the inevitable expression of a greater scheme that unfolds in human or natural history.
Sy Borg wrote: October 8th, 2024, 10:25 pm Your argument reminds me of those of creationists who claim that evolution is impossible. Like you, they think in terms of human life spans and profoundly underestimate deep time.
Evolution is a process of which we have seen more than enough evidence. We don't need guessing and theorizing about its possibility. But thinking of it teleologically, as something that was predetermined to exist following some inherent necessity of the universe, a primal cause, is certainly a mistake.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 9th, 2024, 7:03 pm
by Sy Borg
Count Lucanor wrote: October 9th, 2024, 4:17 pm
Sy Borg wrote: October 8th, 2024, 10:25 pm
As far as we know, none of those 200,000,000,000 stars have spawned sentient life. That means that life on Earth is not only important, but critical to a universe that's not just plasma, gravity wells, rocks, gas, and radiation.

It's fashionable to dismiss life on Earth as a trivial anomaly in the greater scheme of things. It seems like a sophisticated and broad-minded approach, but it is ultimately dismissive of the most complex and interesting things that the cosmos has produced.
The 0.0000001% of anything is insignificant for the 99.9999999% left, except only for that 0.0000001%. Don't get me wrong: when I was being accused of anthropocentrism because of the exceptionalism of human life, I was pretty much aware of what such exceptionalism implies from a human point of view, even though the idea was dismissed as a triviality in a greater scheme.
Percentages are only one factor. The ventrolateral frontal cortex weighs perhaps 40-60 grams. That's less than a thousandth of total human mass, yet at that scale it is critical to our distinctly human consciousness.

But it's not just human life. Imagine the thrill if we found simple tube worms around hydrothermal vents on Europa. Or even bacteria. The Earth's exceptional qualities extend far beyond just humans. Earth is so much more alive than other worlds around us that there is no competition.


Count Lucanor wrote: October 9th, 2024, 4:17 pm
Sy Borg wrote: October 8th, 2024, 10:25 pm
If there's no reason why not, then the chances are that it will happen.
Something not happening does not increase the chances of ever happening. When talking about technology, we are not merely constrained by chances, but by actual technical feasibility.
Again, you are thinking in terms of today whereas I am thinking in terms of deep time. Today means nothing in context; it is akin to judging a baby's potential based on its current achievements.

Count Lucanor wrote: October 9th, 2024, 4:17 pm
Sy Borg wrote: October 8th, 2024, 10:25 pm A thousand years ago, the internet was not merely considered impossible; the possibility was impossible even to anticipate. Your reply seems to be looking at the situation through human lifespans, which is one way of making certain that you will be wrong.
The trick of the mind is to think that the internet was predestined to exist. It didn't appear as the inevitable expression of a greater scheme that unfolds in human or natural history.
Predestination is not the point, and also not my claim. I'm not claiming to know what will happen in the future, but the idea that humans as they stand are the ultimate expression of sentience - that no greater sentience is possible than humans - strikes me as absurd, given that we are still just chaotic apes with a gift for inventiveness. To think that evolution stops with us makes no sense to me. Chances are that evolution will continue and, given the usefulness of sentience, it's hard to imagine self-replicating, self-improving machines never achieving sentience - not in a thousand years, not in a million years, not in a billion years.

Count Lucanor wrote: October 9th, 2024, 4:17 pm
Sy Borg wrote: October 8th, 2024, 10:25 pm Your argument reminds me of those of creationists who claim that evolution is impossible. Like you, they think in terms of human life spans and profoundly underestimate deep time.
Evolution is a process of which we have seen more than enough evidence. We don't need guessing and theorizing about its possibility. But thinking of it teleologically, as something that was predetermined to exist following some inherent necessity of the universe, a primal cause, is certainly a mistake.
No need for teleology. That's a red herring. The necessity is not that of the universe but of the subjects. Either sentience is a useful adaptation for highly intelligent entities or it is a manifestation of God. Take your pick.

Natural selection is not accidental. If an adaptation is potent, then it will continue to be selected. Vision is a good example. Early on, all life was blind. Over time, eyes have evolved about forty times, making clear how useful it is to be able to detect light. Sentience too has proved useful, with a multitude of sentient animals. Sapience, which is basically sentience with time awareness, has proved to be extremely potent. It appears that the reasons why it's not more common are:
1) Homo sapiens out-competed other hominids
2) Big brains take a lot of energy, so great success in resource gathering is needed.

Presumably, neither of these limits will apply to self-replicating, self-improving machines (SRSIMs to save my arthritic fingers). What we don't know is how machine sapience can produce sentience.

I get it. I don't think machines are going to start becoming emotional, or replicating biological feelings. The sentience I'm referring to is not what we feel, or our dogs feel. More likely that some equivalent meta-awareness will emerge because that's how nature works. Certain thresholds are reached, something breaks, and that results in the emergence of new features. If the features aid survival, they are called adaptations.

In brief, I do not believe that SRSIMs will always be "black inside" like today's machines; I suspect it will eventually feel like something to be a SRSIM.

Likewise, biology was once (presumably) "black inside", operating machine-like, but that changed over deep time.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 10th, 2024, 9:55 am
by The Beast
In the philosophy of Hegel, a sentient independent Android using resources for its own agenda might fit the description of an "unessential or negatively characterized object". However, within the Androids' own regime lies the possibility of skilled models training 24/7 in conditions not suitable for humans, and therefore of Androids suited to survive. As Hegel explains Being and Nothing, I find meaning in the concept of "persistence"; IMO it will take the form of a model at the base of being an Android. The human heart is the inspiration for a persistence model in the Android, and subject to evolution as well. At the core is whether persistence is an intelligence concept or a sentience concept, or both. If both, then it is "something"; IMO it is a variable of something in a very speculative Human/Android thesis.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 10th, 2024, 4:12 pm
by Count Lucanor
Sy Borg wrote: October 9th, 2024, 7:03 pm
Count Lucanor wrote: October 9th, 2024, 4:17 pm The 0.0000001% of anything is insignificant for the 99.9999999% left, except only for that 0.0000001%. Don't get me wrong: when I was being accused of anthropocentrism because of the exceptionalism of human life, I was pretty much aware of what such exceptionalism implies from a human point of view, even though the idea was dismissed as a triviality in a greater scheme.
Percentages are only one factor. The ventrolateral frontal cortex weighs perhaps 40-60 grams. That's less than a thousandth of total human mass, yet at that scale it is critical to our distinctly human consciousness.
Percentages just show the differences in magnitude; it's about scale, the radius of influence. That is key for something to have an effect on something else. Can we say that the ventrolateral frontal cortex of my neighbor has any influence on the ventrolateral frontal cortex of a retired man in the Swiss Alps? Definitely not. Now, just imagine trillions of ventrolateral frontal cortices separated by the same distances. No matter how special my neighbor's cortex might be, it is completely irrelevant in the larger context.
Sy Borg wrote: October 9th, 2024, 7:03 pm But it's not just human life. Imagine the thrill if we found simple tube worms around hydrothermal vents on Europa. Or even bacteria. The Earth's exceptional qualities extend far beyond just humans. Earth is so much more alive than other worlds around us that there is no competition.
But human life is just part of life in general, which is constrained to Earth anyway, dependent on its capacity to host those organic processes. That capacity does not extend, obviously, beyond Earth, where its life-harboring qualities are lost.
Sy Borg wrote: October 9th, 2024, 7:03 pm
Count Lucanor wrote: October 9th, 2024, 4:17 pm Something not happening does not increase the chances of ever happening. When talking about technology, we are not merely constrained by chances, but by actual technical feasibility.
Again, you are thinking in terms of today whereas I am thinking in terms of deep time. Today means nothing in context; it is akin to judging a baby's potential based on its current achievements.
The analogy does not apply. Potentiality is not so simply reducible to time frames. A baby is just a human at a given stage of development; their potential as a human individual is entirely determined by their innate capacities plus their behavior and the contingencies of the environment. In other words, a man can be thought of as a system put in motion with initial conditions; then, as multiple interactions take place, conditions change, and so many things can happen in the future that we can say it is undetermined. But we can surely define limits based only on the initial conditions and the following experiences: we know that his legs will not allow him to run as fast as a cheetah, that he will never grow to a height of 10 meters, nor will he see with his eyes like the James Webb Space Telescope, nor will he be in two places at the same time, etc. There are variables, but they are not infinite, without limits. So, the argument "anything is possible, given enough time" is false. You can see the potential of things, considering their limits, and then make some reasonable predictions. We don't see that in those who predict that AI technology will develop into something that resembles life (as per Lagayscienza's definition: "some combination of energy use, growth, reproduction, response to stimuli, complex goal directed behaviours and adaptation to the environment originating from within...") in autonomous, independent, conscious beings, imbued with volition and social mechanisms of interaction, constituting their own social domain, so that they would control the domain of human culture and even replace humans. Nothing in the initial conditions and inherent capabilities of what they call AI technology, including robotics, points to that future possibility. A technology might appear tomorrow and then we would be able to say it's reasonable to expect such a future outcome, but until then, all we have is the wishful thinking and enthusiasm of the sci-fi industry and futurologists.
Sy Borg wrote: October 9th, 2024, 7:03 pm
Count Lucanor wrote: October 9th, 2024, 4:17 pm
The trick of the mind is to think that the internet was predestined to exist. It didn't appear as the inevitable expression of a greater scheme that unfolds in human or natural history.
Predestination is not the point, and also not my claim. I'm not claiming to know what will happen in the future, but the idea that humans as they stand are the ultimate expression of sentience - that no greater sentience is possible than humans - strikes me as absurd, given that we are still just chaotic apes with a gift for inventiveness. To think that evolution stops with us makes no sense to me. Chances are that evolution will continue and, given the usefulness of sentience, it's hard to imagine self-replicating, self-improving machines never achieving sentience - not in a thousand years, not in a million years, not in a billion years.
Since we have no other reference for life, sentience, intelligence, agency and social power than those derived from the behavior of organic matter, we cannot just make up new ones out of the blue and try to predict anything. The belief that mere computational power could achieve any of these things strikes me as absurd, too. It's like expecting that, given enough time and resources, the machine in the Chinese Room experiment could eventually understand Chinese, decide to break out, and lead a revolution to topple all the world's governments. That's how absurd it looks right now.
Sy Borg wrote: October 9th, 2024, 7:03 pm
Count Lucanor wrote: October 9th, 2024, 4:17 pm
Sy Borg wrote: October 8th, 2024, 10:25 pm Your argument reminds me of those of creationists who claim that evolution is impossible. Like you, they think in terms of human life spans and profoundly underestimate deep time.
Evolution is a process of which we have seen more than enough evidence. We don't need guessing and theorizing about its possibility. But thinking of it teleologically, as something that was predetermined to exist following some inherent necessity of the universe, a primal cause, is certainly a mistake.
No need for teleology. That's a red herring. The necessity is not that of the universe but of the subjects. Either sentience is a useful adaptation for highly intelligent entities or it is a manifestation of God. Take your pick.
The issue is whether we can assess possibilities based on real world scenarios or mere gambling speculations. My point is that, unlike Creationists, I base mine on the actual evidence at hand, which shows purposeless, contingent processes. If that stance is countered with the idea that nature is purpose-driven, so that what has not happened yet, eventually will happen, moved by that higher purpose, I call that teleology, notwithstanding the theological implications.
Sy Borg wrote: October 9th, 2024, 7:03 pm Natural selection is not accidental. If an adaptation is potent, then it will continue to be selected. Vision is a good example. Early on, all life was blind. Over time, eyes have evolved about forty times, making clear how useful it is to be able to detect light. Sentience too has proved useful, with a multitude of sentient animals. Sapience, which is basically sentience with time awareness, has proved to be extremely potent. It appears that the reasons why it's not more common are:
1) Homo sapiens out-competed other hominids
2) Big brains take a lot of energy, so great success in resource gathering is needed.
I can argue that natural selection is the product of contingencies, so if we started all over again, life would not have turned out exactly the same, perhaps far from it. But that's beside the point: natural selection exists. Not as a fundamental force, with permanent presence transcending organic matter.
Sy Borg wrote: October 9th, 2024, 7:03 pm Sapience, which is basically sentience with time awareness, has proved to be extremely potent. It appears that the reasons why it's not more common are:
1) Homo sapiens out-competed other hominids
2) Big brains take a lot of energy, so great success in resource gathering is needed.
Presumably, neither of these limits will apply to self-replicating, self-improving machines (SRSIMs to save my arthritic fingers). What we don't know is how machine sapience can produce sentience.

I get it. I don't think machines are going to start becoming emotional, or replicating biological feelings. The sentience I'm referring to is not what we feel, or our dogs feel. More likely that some equivalent meta-awareness will emerge because that's how nature works. Certain thresholds are reached, something breaks, and that results in the emergence of new features. If the features aid survival, they are called adaptations.
But natural selection is a process of organic, living matter. Trying to extend its application to the world in general as if it were some fundamental force that keeps hitting inorganic matter to produce sentience again, even if it's a new non-organic sentience, is not justified by any evidence that we have available now. What we do have evidence for is machines doing nothing more than what humans decide to do for their own benefit. If we want to call that "nature working", fine, but necessarily channeled through the abilities of humans and not bypassed as an independent process by the "force" of natural selection.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 10th, 2024, 5:05 pm
by Sy Borg
Count Lucanor wrote: October 10th, 2024, 4:12 pm
Sy Borg wrote: October 9th, 2024, 7:03 pm
Count Lucanor wrote: October 9th, 2024, 4:17 pm The 0.0000001% of anything is insignificant for the 99.9999999% left, except only for that 0.0000001%. Don't get me wrong: when I was being accused of anthropocentrism because of the exceptionalism of human life, I was pretty much aware of what such exceptionalism implies from a human point of view, even though the idea was dismissed as a triviality in a greater scheme.
Percentages are only one factor. The ventrolateral frontal cortex weighs perhaps 40-60 grams. That's less than a thousandth of total human mass, yet at that scale it is critical to our distinctly human consciousness.
Percentages just show the differences in magnitude; it's about scale, the radius of influence. That is key for something to have an effect on something else. Can we say that the ventrolateral frontal cortex of my neighbor has any influence on the ventrolateral frontal cortex of a retired man in the Swiss Alps? Definitely not. Now, just imagine trillions of ventrolateral frontal cortices separated by the same distances. No matter how special my neighbor's cortex might be, it is completely irrelevant in the larger context.
Who is to say that the Earth won’t create impacts in the future? Does a zygote have as much influence as a human?
Barring nuclear bombs and asteroids, it’s clear that self-replicating and self-improving machines will be sent to distant worlds by the Earth. Even before they become sentient, they will have an impact.


Count Lucanor wrote: October 10th, 2024, 4:12 pm
Sy Borg wrote: October 9th, 2024, 7:03 pm But it's not just human life. Imagine the thrill if we found simple tube worms around hydrothermal vents on Europa. Or even bacteria. The Earth's exceptional qualities extend far beyond just humans. Earth is so much more alive than other worlds around us that there is no competition.
But human life is just part of life in general, which is constrained to Earth anyway, dependent on its capacity to host those organic processes. That capacity does not extend, obviously, beyond Earth, where its life-harboring qualities are lost.

That’s why SRSIMs will be sent to other worlds. Musk might get some people on Mars but I have little faith that that world can host a viable long-term population. Biology and space don’t mix.


Count Lucanor wrote: October 10th, 2024, 4:12 pm
Sy Borg wrote: October 9th, 2024, 7:03 pm
Count Lucanor wrote: October 9th, 2024, 4:17 pm Something not happening does not increase the chances of ever happening. When talking about technology, we are not merely constrained by chances, but by actual technical feasibility.
Again, you are thinking in terms of today whereas I am thinking in terms of deep time. Today means nothing in context; it is akin to judging a baby's potential based on its current achievements.
The analogy does not apply. Potentiality is not so simply reducible to time frames. A baby is just a human at a given stage of development; their potential as a human individual is entirely determined by their innate capacities plus their behavior and the contingencies of the environment. In other words, a man can be thought of as a system put in motion with initial conditions; then, as multiple interactions take place, conditions change, and so many things can happen in the future that we can say it is undetermined. But we can surely define limits based only on the initial conditions and the following experiences: we know that his legs will not allow him to run as fast as a cheetah, that he will never grow to a height of 10 meters, nor will he see with his eyes like the James Webb Space Telescope, nor will he be in two places at the same time, etc. There are variables, but they are not infinite, without limits. So, the argument "anything is possible, given enough time" is false. You can see the potential of things, considering their limits, and then make some reasonable predictions. We don't see that in those who predict that AI technology will develop into something that resembles life (as per Lagayscienza's definition: "some combination of energy use, growth, reproduction, response to stimuli, complex goal directed behaviours and adaptation to the environment originating from within...") in autonomous, independent, conscious beings, imbued with volition and social mechanisms of interaction, constituting their own social domain, so that they would control the domain of human culture and even replace humans. Nothing in the initial conditions and inherent capabilities of what they call AI technology, including robotics, points to that future possibility. A technology might appear tomorrow and then we would be able to say it's reasonable to expect such a future outcome, but until then, all we have is the wishful thinking and enthusiasm of the sci-fi industry and futurologists.
The claim is not, as you stated, "anything is possible, given enough time". My point is that change and evolution are certain. That is a very different claim from your strawman above.
If SRSIMs are imbued with a preference for survival (which would be necessary), and if they are driven by an energy source that requires replenishment, then they will either develop a form of sentience or they will simply stop working. Sentience provides flexibility.
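To make that dependence concrete, a toy sketch in Python, every name and number hypothetical; it models only the survival constraint (replenish or stop working), not sentience:

[code]
# Toy agent: it keeps operating only if it reaches the charger
# before its battery empties. Its "preference for survival" is a
# hard-coded move-toward-the-charger policy, nothing more.
def run(steps=50, battery=10, charger=5, pos=0):
    for _ in range(steps):
        if battery == 0:
            return "stopped working"
        if pos < charger:
            pos += 1
        elif pos > charger:
            pos -= 1
        battery -= 1
        if pos == charger:
            battery = 10  # replenishment
    return "still running"

print(run())
[/code]

The fixed policy works only while the charger stays where the policy expects it; the suggested link to sentience is that flexibility, revising the policy when the world changes, is exactly what a hard-coded rule lacks.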




Count Lucanor wrote: October 10th, 2024, 4:12 pm
Sy Borg wrote: October 9th, 2024, 7:03 pm
Count Lucanor wrote: October 9th, 2024, 4:17 pm
The trick of the mind is to think that the internet was predestined to exist. It didn't appear as the inevitable expression of a greater scheme that unfolds in human or natural history.
Predestination is not the point, and also not my claim. I'm not claiming to know what will happen in the future, but the idea that humans as they stand are the ultimate expression of sentience - that no greater sentience is possible than humans - strikes me as absurd, given that we are still just chaotic apes with a gift for inventiveness. To think that evolution stops with us makes no sense to me. Chances are that evolution will continue and, given the usefulness of sentience, it's hard to imagine self-replicating, self-improving machines never achieving sentience - not in a thousand years, not in a million years, not in a billion years.
Since we have no other reference for life, sentience, intelligence, agency and social power than those derived from the behavior of organic matter, we cannot just make up new ones out of the blue and try to predict anything. The belief that mere computational power could achieve any of these things strikes me as absurd, too. It's like expecting that, given enough time and resources, the machine in the Chinese Room experiment could eventually understand Chinese, decide to break out, and lead a revolution to topple all the world's governments. That's how absurd it looks right now.
Once SRSIMs are in the field, it won't be about "mere computational power" but adaptive behaviours. You are falsely assuming that SRSIMs will simply continue working without ever improving or adapting to new environments.

Why would a self-improving entity not achieve sentience over millions of years? What would stop it developing sentience?




Count Lucanor wrote: October 10th, 2024, 4:12 pm
Sy Borg wrote: October 9th, 2024, 7:03 pm
Count Lucanor wrote: October 9th, 2024, 4:17 pm
Sy Borg wrote: October 8th, 2024, 10:25 pm Your argument reminds me of those of creationists who claim that evolution is impossible. Like you, they think in terms of human life spans and profoundly underestimate deep time.
Evolution is a process of which we have seen more than enough evidence. We don't need guessing and theorizing about its possibility. But thinking of it teleologically, as something that was predetermined to exist following some inherent necessity of the universe, a primal cause, is certainly a mistake.
No need for teleology. That's a red herring. The necessity is not that of the universe but of the subjects. Either sentience is a useful adaptation for highly intelligent entities or it is a manifestation of God. Take your pick.
The issue is whether we can assess possibilities based on real world scenarios or mere gambling speculations. My point is that, unlike Creationists, I base mine on the actual evidence at hand, which shows purposeless, contingent processes. If that stance is countered with the idea that nature is purpose-driven, so that what has not happened yet, eventually will happen, moved by that higher purpose, I call that teleology, notwithstanding the theological implications.
No, you are basing your ideas on raw guesswork and, with all due respect, you have almost zero knowledge about AI.

Your guesses appear to be based on a dogmatic belief that humans are the ultimate form of sentience, never to be bettered. You seemingly cannot imagine anything trumping us.

Even if I give you the benefit of the doubt and consider that you are not being anthropocentric and simply harbour the belief that only watery organic forms can achieve sentience, that assumption would also be based on faith rather than logic. Since watery organic forms are currently the only sentient entities, you assume that that situation will never change, as if evolution is only something that happened in the past.

Why must water and carbon be required for sentience? Your reply: because that’s the only way it happened in the past.



Count Lucanor wrote: October 10th, 2024, 4:12 pm
Sy Borg wrote: October 9th, 2024, 7:03 pm Natural selection is not accidental. If an adaptation is potent, then it will continue to be selected. Vision is a good example. Early on, all life was blind. Over time, eyes have evolved about forty times, making clear how useful it is to be able to detect light. Sentience too has proved useful, with a multitude of sentient animals. Sapience, which is basically sentience with time awareness, has proved to be extremely potent. It appears that the reasons why it's not more common are:
1) Homo sapiens out-competed other hominids
2) Big brains take a lot of energy, so great success in resource gathering is needed.
I can argue that natural selection is the product of contingencies, so if we started all over again, life would not have turned out exactly the same, perhaps far from it. But that's beside the point: natural selection exists. Not as a fundamental force, with permanent presence transcending organic matter.
Then why would you assume that SRSIMs won’t face contingencies when their missions will be all about facing contingencies?

Natural selection technically applies only to biology, but it occurs in all domains of reality. Consider the evolution of rocks that needed to occur on Earth to make abiogenesis possible. The simple basalts of the early Earth would not suffice. The solar system too was formed via its own natural selection: larger bodies consumed, destroyed, or ousted smaller ones from the young solar system. From dust and countless planetesimals, (technically) eight planets, their moons, a number of dwarf planets, comets and asteroids remained.




Count Lucanor wrote: October 10th, 2024, 4:12 pm
Sy Borg wrote: October 9th, 2024, 7:03 pm Sapience, which is basically sentience with time awareness, has proved to be extremely potent. It appears that the reasons why it's not more common are:
1) Homo sapiens out-competed other hominids
2) Big brains take a lot of energy, so great success in resource gathering is needed.
Presumably, neither of these limits will apply to self-replicating, self-improving machines (SRSIMs to save my arthritic fingers). What we don't know is how machine sapience can produce sentience.

I get it. I don't think machines are going to start becoming emotional, or replicating biological feelings. The sentience I'm referring to is not what we feel, or our dogs feel. More likely that some equivalent meta-awareness will emerge because that's how nature works. Certain thresholds are reached, something breaks, and that results in the emergence of new features. If the features aid survival, they are called adaptations.
But natural selection is a process of organic, living matter. Trying to extend its application to the world in general as if it were some fundamental force that keeps hitting inorganic matter to produce sentience again, even if it's a new non-organic sentience, is not justified by any evidence that we have available now. What we do have evidence for is machines doing nothing more than what humans decide to do for their own benefit. If we want to call that "nature working", fine, but necessarily channeled through the abilities of humans and not bypassed as an independent process by the "force" of natural selection.
Biological natural selection is not the only kind of natural selection, as mentioned above. Everything is selected.

Also, as stated, what we have evidence for is diddly-squat when it comes to AI. That’s like assuming that life would never evolve beyond bacteria. Again, there is a strong Creationist-like angle in your arguments, as if evolution of certain entities was impossible.

Everything evolves, all the time. That is how reality works. AI’s journey has barely begun and you are effectively declaring that AI will never progress far beyond ChatGPT.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 11th, 2024, 10:21 am
by Lagayscienza
In respect of the evolution of life and intelligence, at present we have a sample of just one (n=1). However, this is likely due only to our limited ability to explore, not to the limited fecundity of the universe. Our limited ability to explore is likely to change quite soon in geological terms, in which case n will soon be greater than one.
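For what a sample of one is worth formally, Laplace's rule of succession is the classical toy model; it assumes exchangeable trials and sets aside the anthropic worry that observers can only ever find themselves on a success:

[code]
P(success on the next trial | s successes in n trials) = (s + 1) / (n + 2)
with n = 1 and s = 1:  P = (1 + 1) / (1 + 2) = 2/3
[/code]

A crude number, but it illustrates that n=1 licenses neither confidence nor despair about the wider universe.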

The unfolding of the universe from the Big Bang onwards is a story of increasing complexification. And this complexification was "necessary" given the universe’s initial state and the laws of nature. Chemical and biological evolution have been part of this unfolding and so can also be seen as necessary. There was no goal or purpose behind this unfolding - nothing teleological - just the universe unfolding as it must according to the laws of nature.

On this view of things, life and mind are also inevitable in our universe, part of its unfolding complexification. And there seems to be no reason to think that complexification through chemistry, life and intelligence will not have emerged, and be emerging, elsewhere in our universe wherever conditions are right.

I don’t see why SRSIMs could not also evolve in complexity and prowess. Such evolution would just need a selection process whereby traits that enhance the “fitness” of the SRSIMs are selected. Such selection could be “natural” or it could be artificial and autonomous - SRSIMs could perform self-enhancement - they could make alterations to their own blueprints so that they could better survive in and exploit whatever environment they find themselves in.
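A minimal sketch of that selection loop in Python, hypothetical throughout: the "blueprint" is reduced to a single numeric trait, "fitness" to closeness to a fixed environmental optimum, and "self-alteration" to random mutation of the copies:

[code]
import random

TARGET = 0.8  # the environment's optimum; an arbitrary stand-in

def fitness(trait):
    # Higher is better: zero penalty when the trait matches the optimum.
    return -abs(trait - TARGET)

# Twenty machines, each blueprint reduced to one number.
population = [random.random() for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                  # selection
    offspring = [t + random.gauss(0, 0.05) for t in survivors]   # mutated copies
    population = survivors + offspring

print(max(population, key=fitness))  # ends up near TARGET
[/code]

Whether anything like this scales from one tuned number to open-ended self-improvement is, of course, exactly the point in dispute.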

If we produce SRSIMs that are capable of evolving, it will represent the further unfolding and complexification of the universe, and of life and intelligence, and probably of mind. Everything from minerals to memes evolves. There seems to be no reason to think that artificial life and intelligence won’t also evolve.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 11th, 2024, 12:54 pm
by Count Lucanor
Sy Borg wrote: October 10th, 2024, 5:05 pm Who is to say that the Earth won’t create impacts in the future? Does a zygote have as much influence as a human? Barring nuclear bombs and asteroids, it’s clear that self-replicating and self-improving machines will be sent to distant worlds by the Earth. Even before they become sentient, they will have an impact.

I really will not speculate about the future of our planet for the billions of years before it is obliterated by our sun becoming a red giant, but anyway, the point that, right UNTIL NOW, Earth is a “ridiculously unimportant planet” has been well established.
If it is clear to you that “self-replicating and self-improving machines will be sent to distant worlds by the Earth”, it doesn’t look that clear to me at all. I mean, there’s no evidence of that technology in a basic form now, nor that it will be available, so asserting that it will happen anyway looks more like a stretch of the imagination.
Sy Borg wrote: October 10th, 2024, 5:05 pm The claim is not, as you stated, "anything is possible, given enough time". My point is that change and evolution are certain. That is a very different claim from your strawman above.
If SRSIMs are imbued with a preference for survival (which would be necessary), and if they are driven by an energy source that requires replenishment, then they will either develop a form of sentience or they will simply stop working. Sentience provides flexibility.

That change and evolution are certain does not mean that they will go in the direction of life-emulating, intelligent machines. If we had some evidence that the technology is or will become available, we could make a bet, but right now there is zero, nothing. It doesn't mean that it's not possible, but to predict its rise in the future requires some actual evidence from today; otherwise it is just wishful thinking.
Sy Borg wrote: October 10th, 2024, 5:05 pm Once SRSIMs are in the field, it won’t be about “mere computational power” but adaptive behaviours. You are falsely assuming that SRSIMs will simply continue working without ever improving or adapting to new environments.

Why would a self-improving entity not achieve sentience over millions of years? What would stop it developing sentience?

I can’t be assuming anything about what you call SRSIMs, because so far that is only a product of imagination, a good candidate for a sci-fi story, but nothing more. I don’t like to speculate about fictional scenarios. When you look at what the AI industry can show as its main achievement, it’s all based on computational power.
Sy Borg wrote: October 10th, 2024, 5:05 pm Your guesses appear to be based on a dogmatic belief that humans are the ultimate form of sentience, never to be bettered. You seemingly cannot imagine anything trumping us.

Even if I give you the benefit of the doubt and consider that you are not being anthropocentric and simply harbour the belief that only watery organic forms can achieve sentience, that assumption would also be based on faith rather than logic. Since watery organic forms are currently the only sentient entities, you assume that that situation will never change, as if evolution is only something that happened in the past.

With all due respect, I would not think there’s much to worry about if a UFO enthusiast countered my skepticism with the assertion that I have zero knowledge about UFOs. I know what needs to be known about AI, and there are plenty of researchers who embrace the same stance. In any case, it sounds odd that you play the knowledge card when your entire argumentation is not based on any concrete knowledge coming from actual research, but only on philosophical speculation around a huge ad ignorantiam fallacy, in the usual format:

If we don’t know that X can’t produce Y, then Y can be produced by X.
What can be produced, will be produced.
So, Y will be produced by X.

Or:

If A produced B, and then B produced C,
and we don’t know that A can’t produce C directly, then C can be produced directly by A.
What can be produced, will be produced.
So, C will be produced directly by A.
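Written out formally (a modal-notation gloss of my own, not a formalisation anyone in this thread has offered; read K as “we know that” and the diamond as “it is possible that”):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% An editorial gloss of the ad ignorantiam schema above:
% K = "we know that", \Diamond = "it is possible that".
\begin{gather*}
  \neg K\,\neg P(x,y) \;\Rightarrow\; \Diamond P(x,y)
    \tag{ignorance read as possibility}\\
  \Diamond P(x,y) \;\Rightarrow\; P(x,y)
    \tag{possibility read as inevitability}\\
  \therefore\; P(x,y)
    \tag{the unwarranted conclusion}
\end{gather*}
\end{document}
```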

Sy Borg wrote: October 10th, 2024, 5:05 pm Why must water and carbon be required for sentience? Your reply: because that’s the only way it happened in the past.

Actually my reply is: we don’t have a reasonable basis to believe that it can work in a different way than it has worked so far, UNTIL we find real, concrete evidence that it can. I’m not asserting that we will never find a solution to today's complete inability to produce intelligent, life-emulating machines, just that nothing going on right now supports the assertion that one will be found. Suspending our certainties never implies that the gates stay wide open for any unwarranted, baseless assertion that comes from guessing and speculating.
Sy Borg wrote: October 10th, 2024, 5:05 pm Natural selection technically applies only to biology, but it occurs in all domains of reality. Consider the evolution of rocks that needed to occur on Earth to make abiogenesis possible. The simple basalts of the early Earth would not suffice. The solar system too was formed via its own natural selection: larger bodies consumed, destroyed, or ousted smaller ones from the young solar system. From dust and countless planetesimals, (technically) eight planets, their moons, a number of dwarf planets, comets and asteroids remained.

The metaphor sounds good, but it's not true; it actually becomes an amphiboly. The changes in life forms are not the same type of changes as those in minerals, which do not "evolve". Natural selection is nature plus the effects it produces on living forms, so evolution by natural selection conveys the notion that a group of organisms outperforms the others in the race for the survival of their kind. The effect of nature on rocks, planets and comets is a different process, driven by other factors. But even acknowledging that this process did produce life as we know it on our planet, it has already produced it, so why would we think that it will produce a new "life as we don't know it"? That's no different from expecting Earth to produce a type of geological formation that does not involve rocks or minerals. The argument would be: if the Earth produced geological formations with minerals, nothing stops it from producing geological formations without rocks. Eventually we will have non-mineral geological formations. This can't be denied on the basis of a dogmatic belief that rocks are the ultimate form of geology, never to be bettered. You seemingly cannot imagine anything trumping minerals. We just need to make adjustments in the application of the term "geological formation".
Sy Borg wrote: October 10th, 2024, 5:05 pm Also, as stated, what we have evidence for is diddly-squat when it comes to AI. That’s like assuming that life would never evolve beyond bacteria. Again, there is a strong Creationist-like angle in your arguments, as if evolution of certain entities was impossible.

Everything evolves, all the time. That is how reality works. AI’s journey has barely begun and you are effectively declaring that AI will never progress far beyond ChatGPT.
Your mistake is that you believe that what is currently labeled "AI" is taking the first steps of a journey that will fulfill the promise of intelligent, life-emulating machines, as if it were a rudimentary technology that, with enough enhancement and sophistication, will eventually reach that goal. But it's not. As Will Douglas Heaven, senior editor for AI at MIT Technology Review, puts it:

Today’s AI is nowhere close to being intelligent, never mind conscious. Even the most impressive deep neural networks are totally mindless.


It is also Heaven who gives us a fair assessment of what's actually going on with the AI hype:
As AI hype has ballooned, a vocal anti-hype lobby has risen in opposition, ready to smack down its ambitious, often wild claims. Pulling in this direction are a raft of researchers, including Hanna and Bender, and also outspoken industry critics like influential computer scientist and former Googler Timnit Gebru and NYU cognitive scientist Gary Marcus. All have a chorus of followers bickering in their replies.
Count me as affiliated with the anti-hype lobby.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 11th, 2024, 12:58 pm
by Count Lucanor
Lagayscienza wrote: October 11th, 2024, 10:21 am
I don’t see why SRSIMs could not also evolve in complexity and prowess.
What SRSIMs? Show me a real one.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 11th, 2024, 4:38 pm
by Gertie
@Steve3007
Since the software is a system for numerically solving large numbers of mathematical equations applied to very large arrays of numbers, this is a special case of the more general question: is the physical universe entirely describable by mathematics? Or is there some aspect of it (an aspect that is crucial to the development of intelligent life) which could never, even in principle, be so described?
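(As a concrete illustration of that characterisation, and nothing more: a single neural-network "layer" really is just equations applied to arrays of numbers. The sizes, weights and choice of nonlinearity below are arbitrary, not a model of any actual system.)

```python
import math

# Illustrative only: one "layer" of a network is a weighted sum over an
# input array, plus a bias, passed through a nonlinearity, per output unit.
inputs = [0.5, -1.2, 3.0]
weights = [[0.1, -0.3, 0.2],   # one row of weights per output unit
           [0.7, 0.0, -0.5]]
biases = [0.1, -0.2]

outputs = [
    math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
    for row, b in zip(weights, biases)
]
print(outputs)  # the whole computation is arithmetic over arrays
```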
Re the relevance of mathematics: imo mathematics is essentially abstract and descriptive, rather than being a something in itself which has an independent existence. It's people with minds who measure things and calculate equations. Whereas things in themselves which exist independently have types of relationships with each other (determined by physical laws, causation, spatial locations, or whatever).

So when it comes to fundamental ontological questions about what can exist, maths is a red herring imo, because descriptions can't determine what exists, and you can't create a something-in-itself out of maths. As far as we can tell, anyway.

You get to the heart of the matter here -
In my view, living things which we regard as possessing intelligence, sentience, consciousness, creativity, agency, etc (such as humans) are made from matter. It may turn out in the future that they're not, but the evidence available so far suggests that they are. Given this fact, I can't see any reason why other material things with intelligence etc couldn't, in principle, be made by humans out of matter (other than in the normal way that we make them).


The question of whether this could apply to "things" existing in the form of software is a special case of this general principle. If we accept that, in principle, an intelligent entity could be manufactured by putting pieces of matter together in particular ways, the question is then this: Whatever it is about the configuration of matter that gives rise to intelligence: can that property be replicated by software?
First, just to specify that the (philosophy of mind) controversy is specifically about phenomenal conscious experience: the qualitative state of 'what it is like' to be gertie, or steve, or a software programme.

I agree with you that the first hurdle is to figure out whether there is something special about biological substrates which, when configured in particular ways (like brains), makes them the only way conscious experience can arise.

Like you, it strikes me that for Physicalism this means that the necessary and sufficient conditions for conscious experience either include something unique to a biological substrate, OR that similar configurations of any matter will do to capture the nec and suff conditions.


If the right configuration of any matter will do to capture the nec and suff conditions for conscious experience, then in principle we could construct the configurations of my human brain using software on a computer, and the computer would know 'what it is like' to be me. Or, if we could construct such a machine out of water pipes with stop cocks turning on and off in the exact way my neurons are interacting right now, it would know what it's like to be me right now, because it is mimicking the same configurations of matter using a different substrate.
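Here is a minimal sketch in Python of the kind of functional description being appealed to: whether it is realised in neurons, transistors or water pipes with stop cocks, the input-output mapping is the same. The weights and thresholds are made up, and nothing here speaks to whether any realisation of it would be conscious.

```python
# A McCulloch-Pitts-style threshold unit: fire (1) iff the weighted sum
# of inputs reaches the threshold. Weights/thresholds are illustrative.
def threshold_unit(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Wiring units together realises a "configuration" abstractly: a tiny
# two-layer network computing XOR, which no single unit can compute.
def xor(a, b):
    h_or = threshold_unit([a, b], [1, 1], 1)          # fires if a OR b
    h_and = threshold_unit([a, b], [1, 1], 2)         # fires if a AND b
    return threshold_unit([h_or, h_and], [1, -1], 1)  # OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(f"xor({a}, {b}) = {xor(a, b)}")
```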

How can we know if the computer or water-pipe machines would be conscious, seeing as we don't have the ability to reliably test them? That question brings us back to the problem of knowing whether they have the necessary and sufficient conditions. And we don't.

And the scientific methodologies at our disposal for determining whether a substrate has the necessary and sufficient conditions for conscious experience seem to hit a brick wall. Hence what Chalmers calls the Hard Problem of Consciousness.