
Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 20th, 2024, 1:48 pm
by Count Lucanor
Lagayscienza wrote: October 19th, 2024, 2:36 am
Count Lucanor wrote: October 18th, 2024, 11:28 am ... believe that most if not all talk about AI in this forum and mainstream media is ultimately nourished by the singularity hypothesis, which goes as follows:

The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence. (Wikipedia)
But, Count Lucanor, what if the singularity hypothesis is not the hypothesis that is argued for? I want to argue only that there is no reason to believe that sentience and intelligence can only be housed in biological organisms. The so-called “singularity” might be possible – I’m unsure about that, but it is not what I argue for.
Surely there are many things being thrown around, but I think they all ultimately relate to the core principles of AI founded by Turing, the ideas of von Neumann, and Good’s intelligence explosion. There’s a narrative, an ideology, built around these ideas, which dominates the field of computer technology even if the participants are not fully aware of its origin or all its developments. Take, for example, the idea of the “intelligence explosion” (reminiscent of the Cambrian explosion). It is already loaded with the assumption of an emergent intelligent life breaking out on its own as a result of algorithms becoming more complex. Not only is the full Turing ideology behind the curtains, but also the naturalization of human endeavors, so that they appear to operate as independent, natural, spontaneous forces. The discussion obviously follows the path you take on the issue of whether intelligence is only housed in biological organisms or not.
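For what it’s worth, the feedback loop behind the “explosion” talk can be written down as a toy recurrence, and doing so shows how much the conclusion depends on what is assumed going in. Here is a minimal Python sketch (every parameter is arbitrary and purely illustrative, not a claim about any real system):

```python
# Toy version of the "intelligence explosion" recurrence: I <- I + k * I**p.
# The gain k and the returns-on-intelligence exponent p are arbitrary
# illustrative choices, not measurements of anything.

def steps_to_cross(cap: float = 1e9, i0: float = 1.0,
                   k: float = 0.1, p: float = 1.0,
                   max_steps: int = 100_000) -> int:
    """Count iterations of I <- I + k * I**p until I exceeds cap."""
    i, t = i0, 0
    while i < cap and t < max_steps:
        i += k * i ** p
        t += 1
    return t

print(steps_to_cross(p=1.0))  # ~218 steps: ordinary compound (exponential) growth
print(steps_to_cross(p=1.5))  # ~29 steps: super-exponential "explosion"
```

The blow-up is entirely a consequence of assuming super-linear returns (p > 1); with p = 1 you get ordinary compound growth, and with p < 1 growth slows to a polynomial crawl. Nothing in the model establishes that any actual machine has such returns; that is assumed, not shown.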
Lagayscienza wrote: October 19th, 2024, 2:36 am
Count Lucanor wrote: October 18th, 2024, 11:28 amMy charges against this hypothesis are:
1) What is called artificial intelligence rests on the assumption that minds are biological computers, so one should be able to recreate minds in highly sophisticated computers, but these assumptions are wrong. It's been a long debate since Turing, but I'm confident where I stand. There's a direct relationship between proponents of AI-as-real intelligence and the singularity hypothesis.
What prevents you from seeing brains as biological computers? You say that you are confident that brains are not biological computers. What gives you this confidence? Could you explain why you believe that brains cannot be made of inorganic materials?
Brains came first and computers much later, from people with brains. Living agents came first and technology later, from living agents. The correct mindset is: if you’re going to argue that the brain is a computer and that agency is a type of technology, you have the burden of proof, and to make your case you have to provide the theoretical models and the empirical evidence that support them. Now, I know such attempts are out there to discuss, but among the general public there seems to be an attitude of “let’s just believe what the tech lords tell us and then reduce our arguments to: why not?”.

Anyway, the computational model has also been widely criticized. It reduces the mind to syntactical operations, the basis of algorithms and programming languages. It has been shown that such operations carry no semantic content, no meaning, as meaning implies a sort of feeling of the world found only in organisms. Today’s most sophisticated software, such as generative AI and LLMs, has been shown to have none of this. It’s no different from a pocket calculator, which knows nothing about math.
Lagayscienza wrote: October 19th, 2024, 2:36 am
If it were possible that structures made from inorganic materials could house brain-like processes, what would prevent you from entertaining the idea that minds could emerge from these brain-like structures?
I’m skeptical, and just being cautious until some evidence arrives.
Lagayscienza wrote: October 19th, 2024, 2:36 am
Count Lucanor wrote: October 18th, 2024, 11:28 am2) Machines are lifeless, non-sentient. The assumption from proponents of AI-as-real-intelligence (also the singularity hypothesis) is that the more sophisticated the computers, the more "intelligent" they get, the closer to becoming life-emulating, sentient entities. The conceptual base is that life and sentience are emergent properties of intelligence. I say this is nonsense.
You say above that the conceptual base of AI proponents is that “life and sentience are emergent properties of intelligence”. But that is Idealism and not my assumption. Rather, I think sentience and intelligence have been emergent properties of life, but I find it hard to see why sentience and intelligence must only be associated with the biological processes of organic life.
I’ll buy it that you’re not assuming idealist positions, but I can’t say the same about the rest. The pernicious influence of Idealism is all over the place. For now, sentience and intelligence ARE associated with the biological processes of organic life; if something changes, we will be able to see the evidence. Until then, it remains a purely theoretical model that has yet to leave the stage of philosophical, idealist-driven speculation.
Lagayscienza wrote: October 19th, 2024, 2:36 am
Count Lucanor wrote: October 18th, 2024, 11:28 am3) Proponents of AI-as-real-intelligence (also the singularity hypothesis) believe that Generative AI and LLM (Large Language Models) are the holy grail of human-like computer intelligence, getting us closer to machines becoming life-emulating, sentient entities. Because this is not real intelligence, nor real life, nor real sentience, I say this is nonsense. It has been demonstrated that these models still can't think, reason, nor have interests, etc. They cannot have interests because they don't have any "feeling" apparatus.
I don’t believe the current crop of LLMs are sentient, or that they have interests. However, they certainly have abilities we associate with intelligence. These abilities, and our understanding of neural networks, seem to me like a humble start on the road to eventually building brain-like structures that perform similarly to organic brains.
However, the computational model of mind and intelligence is at its base. That “intelligence” is not real intelligence, just a simulation of intelligence, in just the same way that a parrot can emulate human speech.
Lagayscienza wrote: October 19th, 2024, 2:36 am
Count Lucanor wrote: October 18th, 2024, 11:28 am4) Technological growth is the result of human action and the nature of technology itself is instrumentality, it is a tool invented and used by humans. Its growth is not a "natural growth" outside of human society. It is very unlikely that spontaneously, without direct human intervention, the products of human technology become uncontrollable by ceasing to be instrumental, becoming agents on their own.
I agree that at present it is unlikely. But, down the road, is it impossible in principle?
With the current models, it is impossible in principle. By definition, technology is instrumental to humans. I have explained in detail in previous posts in this thread why the Singularity scenario is very unlikely, as it involves social action from a race of machines.
Lagayscienza wrote: October 19th, 2024, 2:36 am
Count Lucanor wrote: October 18th, 2024, 11:28 am5) Even in the highly unlikely scenario that humans managed to create life-emulating intelligent machines, to be agents on their own, pursuing their own interests, it would imply that they are constituted as a new race or class of entities with the power of social action. If such sci-fi scenario was possible, indeed it would bring unforeseeable consequences for human civilization, as the singularity hypothesis predicts, but that new history would be entirely undetermined and contingent, just as human history is right now.
Right. Their future would be undetermined and contingent. But does that make it impossible? We inhabit a deterministic universe in which contingent processes such as evolution by natural selection unfold. Why should we think that such processes are only possible for organisms like us? Why, in deep time, could evolution of some form not play a part in the development of autonomous, self-replicating machines that we build and send out to explore and colonize the galaxy?
I would like to see some evidence, not just “why not” speculations that make no reference to the current state of our knowledge and capabilities. Would it be useful to talk about the possibility of teletransportation? I don’t think so.
Lagayscienza wrote: October 19th, 2024, 2:36 am
Apologies for all the questions. It's just that I'm trying to better understand your position.
No problem whatsoever.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 21st, 2024, 5:25 am
by Sy Borg
The key to this issue, as in all things (though it is frequently taken for granted), is time.

Most here are dubious that AI will progress dramatically from what it currently is. They presumably figure that a world-ending catastrophe will occur first. Certainly, systems are being stretched to breaking point - be it record national deficits, overpopulation, low birth rates, extinctions, desertification and loss of arable land, loss of clean water, melting ice caps, increasing social divisions, social isolation, threat of pandemics, geopolitical tension between nuclear states, and even tension between genders. Or AI itself.

However, in wars 80% of people who die are civilians, and generally those are not the top brass. Thus, progress in AI development will continue (and it usually accelerates in warfare due to competitive pressures). It would take a truly world-ending event, like a massive comet impact, to destroy all industrial activity and stop AI progress.

It's hardly a stretch to imagine human civilisations persisting for thousands of years - a blink of the eye in evolutionary time, and less distant than ancient Egypt. In that time, AI will become unrecognisable. I suspect that AI will be beyond our imaginations even in a century's time, given that AI can be used to create and train better AI. There's an exponential aspect to its development.

The nature of intelligence itself is going to change, and perhaps even the concept of sapience.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 21st, 2024, 9:39 am
by Pattern-chaser
Pattern-chaser wrote: October 20th, 2024, 9:40 am So where do we go from here?
Gertie wrote: October 20th, 2024, 10:12 am Where we go - is try it and see what happens.
The possible effects on us of autonomous AI are significant. To just try it and see what happens is like releasing a deadly virus to see if it will spread the way the doom-sayers say it might. Or like sending GIs to roll around in the dust remaining at 'ground zero' after a nuclear explosion, to see what would happen. Well, we did it, and we saw. The GIs mostly died early (and avoidable) deaths from radiation-related disease.

I think caution and prudence are appropriate in such matters as this. AI could be the end of humanity. Not "will be" or "would be", but only "could be".


Gertie wrote: October 20th, 2024, 10:12 am Bearing in mind that if a computer can achieve agency, goals and perhaps a sense of wellbeing, that has both risks and welfare implications we ought to think through. And such considerations are not best left in the hands of tech corporations and billionaire owners.
With this I cannot disagree, especially not with your final sentence. 👍

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 21st, 2024, 9:45 am
by Pattern-chaser
Sy Borg wrote: October 21st, 2024, 5:25 am Most here are dubious that AI will progress dramatically from what it currently is. They presumably figure that a world-ending catastrophe will occur first. Certainly, systems are being stretched to breaking point - be it record national deficits, overpopulation, low birth rates, extinctions, desertification and loss of arable land, loss of clean water, melting ice caps, increasing social divisions, social isolation, threat of pandemics, geopolitical tension between nuclear states, and even tension between genders. Or AI itself.
Are we all aware of how much power is being used, across the globe, to *power* AI, and the computers and servers that run it? An example:
If you were to fully turn Google’s search engine into something like ChatGPT, and everyone used it that way—so you would have nine billion chatbot interactions instead of nine billion regular searches per day—then the energy use of Google would spike. Google would need as much power as Ireland just to run its search engine.
Whatever else AI is, it is computer-intensive. It requires many fast computers and servers just to do what it does. And the resulting power consumption can make your eyes water...
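For a rough sense of scale, here is a back-of-envelope sketch in Python. The per-query and per-country figures are rough public estimates assumed for illustration; they are not taken from the quoted passage:

```python
# Back-of-envelope: annual energy if every Google search became a chatbot query.
# All constants are rough, assumed estimates for illustration only.

QUERIES_PER_DAY = 9e9         # the figure used in the quoted passage
WH_PER_CHAT_QUERY = 3.0       # assumed: roughly 10x a conventional web search
WH_PER_WEB_SEARCH = 0.3       # assumed
IRELAND_TWH_PER_YEAR = 30.0   # assumed: rough annual electricity consumption

def annual_twh(wh_per_query: float) -> float:
    """Convert per-query watt-hours into terawatt-hours per year."""
    return wh_per_query * QUERIES_PER_DAY * 365 / 1e12

chat, search = annual_twh(WH_PER_CHAT_QUERY), annual_twh(WH_PER_WEB_SEARCH)
print(f"chatbot-style:  {chat:.1f} TWh/yr")   # ~9.9 TWh/yr
print(f"classic search: {search:.1f} TWh/yr") # ~1.0 TWh/yr
print(f"chatbot scenario vs Ireland: {chat / IRELAND_TWH_PER_YEAR:.0%}")
```

On these toy numbers, per-query energy alone comes to roughly a third of Ireland's annual electricity; the quoted "as much power as Ireland" presumably also counts the extra server fleet such a switch would require, which per-query arithmetic doesn't capture.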

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 21st, 2024, 2:45 pm
by The Beast
Obviously. Applying neck-of-the-woods statistics to the whole forest might allow strong deviations. There is an overall world birth index of 2.1, which corresponds to an increase in population (besides the increase in life expectancy). Some areas have higher population increases. For example, Nigeria (a rich country) is the most populated African nation and growing: in 1960 it was forty-five million people and presently it is 230 million, with the poor Muslim population growing (presently 53.5%). Estimates are that by 2050 the population will be 360 million. Nigeria’s GDP per capita was $1,621 (in the latest tabulation), with an average monthly income of $190–$355. The most densely populated nation is Macau, at 62,182 people per square mile, though with fewer than a million people in total and about a 1.5% yearly increase; its birth rate is around 1.08, life expectancy 83, GDP per capita $78,962, and income per capita $66,835. Sydney, Australia has a population density of around 20,000 per square mile, growing around 2% yearly; GDP per capita is $86,500 and income per capita is $82,000.
My subjective opinion from the empirical data is: 1) the people of Sydney are smarter than the people of Macau; 2) some countries with powerful armies (Russia) are looking to greener pastures. Iran’s birth rate is average (2.1), but its GDP per capita is around $5,500, below the world income per capita of $13,138. However, 99.4% of Iran is Muslim (compared to 21% in the 1920s), in a largely local economy (local products and currency). Examples of this economic model are present in other (non-Muslim) countries, for example Cuba ($7,252 GDP per capita, not benefiting from world trade, and with no reliable electrical grid, to make your eyes water); 3) Sy Borg is more wrong than right, since there is nothing new in the human saga. AI is mostly deployed (by corporations) in countries participating in world trade, benefiting their citizens with increased understanding. “Understanding” is the original sense of “intelligence” (from the Latin intellegere). A new divide… (half right?).

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 21st, 2024, 8:40 pm
by Mo_reese
Pattern-chaser wrote: October 21st, 2024, 9:45 am
Sy Borg wrote: October 21st, 2024, 5:25 am Most here are dubious that AI will progress dramatically from what it currently is. They presumably figure that a world-ending catastrophe will occur first. Certainly, systems are being stretched to breaking point - be it record national deficits, overpopulation, low birth rates, extinctions, desertification and loss of arable land, loss of clean water, melting ice caps, increasing social divisions, social isolation, threat of pandemics, geopolitical tension between nuclear states, and even tension between genders. Or AI itself.
Are we all aware of how much power is being used, across the globe, to *power* AI, and the computers and servers that run it? An example:
If you were to fully turn Google’s search engine into something like ChatGPT, and everyone used it that way—so you would have nine billion chatbot interactions instead of nine billion regular searches per day—then the energy use of Google would spike. Google would need as much power as Ireland just to run its search engine.
Whatever else AI is, it is computer-intensive. It requires many fast computers and servers just to do what it does. And the resulting power consumption can make your eyes water...
And the power consumption need is apt to increase geometrically, possibly to the point of making a significant impact on our planet's resources.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 21st, 2024, 10:12 pm
by Lagayascienza
Sy Borg wrote: October 21st, 2024, 5:25 am The key to this issue, as in all things (though it is frequently taken for granted), is time.

Most here are dubious that AI will progress dramatically from what it currently is. They presumably figure that a world-ending catastrophe will occur first. Certainly, systems are being stretched to breaking point - be it record national deficits, overpopulation, low birth rates, extinctions, desertification and loss of arable land, loss of clean water, melting ice caps, increasing social divisions, social isolation, threat of pandemics, geopolitical tension between nuclear states, and even tension between genders. Or AI itself.

However, in wars 80% of people who die are civilians, and generally those are not the top brass. Thus, progress in AI development will continue (and it usually accelerates in warfare due to competitive pressures). It would take a truly world-ending event, like a massive comet impact, to destroy all industrial activity and stop AI progress.

It's hardly a stretch to imagine human civilisations persisting for thousands of years - a blink of the eye in evolutionary time, and less distant than ancient Egypt. In that time, AI will become unrecognisable. I suspect that AI will be beyond our imaginations even in a century's time, given that AI can be used to create and train better AI. There's an exponential aspect to its development.

The nature of intelligence itself is going to change, and perhaps even the concept of sapience.
I'd say this is how things are likely to play out. Yes, there will continue to be struggles, disaster and death, and as always, it will be the poor and powerless who do most of the suffering. But failing a complete trashing of the joint, humans are likely to survive and so will AI. If this is so, then, over deep time, the prospects for AI are probably unimaginable by us today. As another tool, AI promises a vast increase in our power. It could conceivably become an existential threat to us at some point, but I don't think that needs to be so. We are certainly nowhere near that point at present. But are we playing with fire? Well, humans, in their curiosity and search for new tools, have always liked to play with fire. Playing with fire got us burnt from time to time, but it also resulted in humans in their modern form. There is no reason to think humans will stop playing with fire literally or metaphorically. And no reason to think that the development of AI will hit a wall.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 22nd, 2024, 3:49 am
by Sy Borg
Lagayscienza wrote: October 21st, 2024, 10:12 pm But are we playing with fire? Well, humans, in their curiosity and search for new tools, have always liked to play with fire. Playing with fire got us burnt from time to time, but it also resulted in humans in their modern form. There is no reason to think humans will stop playing with fire literally or metaphorically. And no reason to think that the development of AI will hit a wall.
Yep, we're playing with fire, and AI doesn't need to be sentient to create many problems. However, let's say the US decided, "Nope. It's too dangerous. We're stopping". China would say "Yippee!" (or rather, Yí-Píng!) and then ram home their competitive advantage to become globally dominant with relative ease.

The dynamics are not wildly unlike the nuclear arms race - an ultra-high-stakes game of chicken, a game of chance, possibly winner-takes-all.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 22nd, 2024, 6:03 am
by Lagayascienza
Yup. And then, from among the ranks of those left standing in a winner-takes-all game, new competitive struggles and new power blocs will emerge. Nothing much changes. That's the sort of species we are.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 22nd, 2024, 7:26 am
by Sy Borg
All highly intelligent species behave appallingly. Less intelligent ones also behave appallingly, just that they don't know it. Life itself is brutal, the ouroboros etc etc. I used to want to fight against it all. Now I accept the reality. I'd rather accept the world, despite its flaws, than continue to love an ideal that cannot be realised and end up being disillusioned, and seeing that love turn to hate.

Whatever, I don't think AI will wipe humanity out; rather, it will more likely help at least a percentage of people better survive.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 22nd, 2024, 7:46 am
by Pattern-chaser
Pattern-chaser wrote: October 21st, 2024, 9:45 am Whatever else AI is, it is computer-intensive. It requires many fast computers and servers just to do what it does. And the resulting power consumption can make your eyes water...
Mo_reese wrote: October 21st, 2024, 8:40 pm And the power consumption need is apt to increase geometrically, possibly to the point of making a significant impact on our planet's resources.
I have seen estimates that the internet, as a whole, consumes about 15% of all human energy use. AI is a rapidly-increasing part of that.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 22nd, 2024, 11:31 pm
by Lagayascienza
Sy Borg wrote: October 22nd, 2024, 7:26 am All highly intelligent species behave appallingly. Less intelligent ones also behave appallingly, just that they don't know it. Life itself is brutal, the ouroboros etc etc. I used to want to fight against it all. Now I accept the reality. I'd rather accept the world, despite its flaws, than continue to love an ideal that cannot be realised and end up being disillusioned, and seeing that love turn to hate.

Whatever, I don't think AI will wipe humanity out; rather, it will more likely help at least a percentage of people better survive.
True. What cannot be changed must be accepted. People underestimate just how much of human nature and behavior is determined. Humans are part of nature, a natural phenomenon like the rest of life, an outgrowth of the planet. If we want to change the way we interact with each other and the rest of life, then we would need to change ourselves. We would need to change the very determinants of our behavior. We could conceivably change our brains in such a way that they did not produce too much competitive, violent, destructive behavior. And we could try to free ourselves from the ouroboros by becoming technological autotrophs. But none of this will be happening any time soon, if at all. Unless/until it does, the best we can do is try consciously to give more voice to the better angels of our nature, to our cooperative, caring side. That side, too, is part of nature which we must accept. Hopefully, we can foster it at the expense of our darker side. But progress in that regard has been, at best, halting and piecemeal.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 23rd, 2024, 3:37 pm
by Count Lucanor
Lagayscienza wrote: October 22nd, 2024, 11:31 pm
People underestimate just how much of human nature and behavior is determined. Humans are part of nature, a natural phenomenon like the rest of life, an outgrowth of the planet. If we want to change the way we interact with each other and the rest of life, then we would need to change ourselves. We would need to change the very determinants of our behavior. We could conceivably change our brains in such a way that they did not produce too much competitive, violent, destructive behavior. And we could try to free ourselves from the ouroboros by becoming technological autotrophs. But none of this will be happening any time soon, if at all. Unless/until it does, the best we can do is try consciously to give more voice to the better angels of our nature, to our cooperative, caring side. That side, too, is part of nature which we must accept. Hopefully, we can foster it at the expense of our darker side. But progress in that regard has been, at best, halting and piecemeal.
Surely, absolutely everything is natural in the sense that nature is the universe, unless one thought that there is, or might be, another supernatural reality beyond the universe as we know it, which I believe is not your position. However, is the International Space Station natural? Is Musk’s Cybercab? Calling them “natural” doesn’t say much about them, since these things, among the many produced by human culture, are essentially different from any product of any other species in nature; it is precisely that difference that allows us to use a term invented for the arte factum, for the things made by human ingenuity: “artificial”. The label is applied to things such as intelligence, fibers, diamonds or rain, to distinguish them from those not produced by humans. Artificial intelligence, artificial fibers, artificial diamonds or artificial rain all point to man-made things, as opposed to natural intelligence, natural fibers, natural diamonds and plain old natural rain.
The existence of all these “artificial” things already says something about the determinants of human behavior. While our brains have not changed anatomically in any significant way for millennia, we have certainly changed ourselves from our hunter-gatherer ancestors; in other words, we have been able to use those determinants of behavior to transform our environment and our own societies, so that, unlike the rest of nature, our destiny is not predetermined by our innate faculties: they actually allow us to do the opposite. There’s no need to wait for a new artifact to modify our brains (minds) and behavior; we have already been doing it for as long as there has been a human species: arrows, fire, agriculture, art, writing, books, TV and, of course, computers. We have been making better and better computers, and with them we will continue modifying our minds and behavior. But how far can we go with that? Again, our destiny is not predetermined, so we don’t know. We do know, however, that we can make a very strong case that this thing called “artificial intelligence” is not real intelligence at all. We expect an artificial diamond to share key properties with a natural diamond while still being called a diamond; in that sense, the current state of our technology, the technology we use to modify our minds and behavior, is not there yet, not in that field.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 23rd, 2024, 5:48 pm
by Sy Borg
Yes, the ISS is natural. It is a structure created by the planet Earth. Like the ISS, a cocoon is created by a life form, in this case a caterpillar. It could not be spontaneously produced by the Earth without the conduit of a caterpillar; likewise, the ISS cannot be created by the Earth without the conduit of humans.

AI is just one more thing that the Earth is doing.

People have an anthropocentric notion that humans and their creations are outside agents that exist on planet Earth, almost as if we are aliens (or divine), as opposed to being structures of the planet like mountains, streams, trees and cocoons.

Anthropocentrism is practical, and the terms "artificial" and "natural" are useful ways of delineating what is created by humans and what is not, but it is ontologically incorrect. AI is simply another expression of proto-intelligence by the planet.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 23rd, 2024, 6:11 pm
by Sculptor1
I've been quite impressed with ChatGPT, though it has been shown to make some really basic errors, like miscounting the number of repeated letters in words.

But here is a question I asked today.

In the us electoral collage ., the number of electors bears an aproximate correlation to the population of the state. But given that some states are under represented whilst other over represeted please give a simple comparison such as 1 vote in the worst effected state is worth x times less than the most most advantages. and give the names of the two most extreme states

It ignored the spelling problem, and gave a comprehensive and correct answer.

The actual answer is at the end, but it also showed its workings.

In the U.S. Electoral College system, the number of electors per state is roughly proportional to the population, but because each state automatically gets two electors for its Senate representation (in addition to electors based on the number of House seats, which are population-based), smaller states are somewhat overrepresented relative to larger ones.

Here’s a simplified comparison:

Wyoming: The smallest state by population, Wyoming has 3 electoral votes for about 580,000 people (as of 2020), meaning roughly 1 electoral vote per 193,000 people.
California: The largest state by population, California has 54 electoral votes for about 39 million people, meaning roughly 1 electoral vote per 720,000 people.
Comparison:
1 vote in Wyoming is worth about 3.73 times more than 1 vote in California.

Thus, Wyoming is the most overrepresented state, while California is the most underrepresented state in terms of the Electoral College.


I was impressed, but this is not intelligence; this is a specialised language model.
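Its arithmetic does check out, for what that's worth. Here is a minimal Python sketch that reproduces the comparison from the figures given in the quoted answer:

```python
# Check the quoted Electoral College comparison using the answer's own figures.
states = {
    "Wyoming":    {"electoral_votes": 3,  "population": 580_000},
    "California": {"electoral_votes": 54, "population": 39_000_000},
}

def people_per_vote(state: dict) -> float:
    """People represented by a single electoral vote."""
    return state["population"] / state["electoral_votes"]

wy = people_per_vote(states["Wyoming"])     # ~193,333
ca = people_per_vote(states["California"])  # ~722,222
print(f"Wyoming:    1 EV per {wy:,.0f} people")
print(f"California: 1 EV per {ca:,.0f} people")
print(f"A Wyoming vote weighs ~{ca / wy:.2f}x a California vote")  # ~3.74
```

(The unrounded ratio from these figures is about 3.74; the answer's 3.73 comes from its rounded per-vote numbers.)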