Lagayscienza wrote: ↑October 19th, 2024, 2:36 am

Surely there are many things being thrown around, but I think that ultimately they all relate to the core principles of AI founded by Turing, the ideas of Von Neumann, and Good's intelligence explosion. There's a narrative, an ideology built around these ideas, which dominates the field of computer technology even if the participants are not fully aware of its origin or all its developments. Take, for example, the idea of the "intelligence explosion" (reminiscent of the Cambrian explosion). It is already loaded with the assumption of an emergent intelligent life breaking out on its own as a result of algorithms becoming more complex. Not only full Turing ideology behind the curtains, but the naturalization of human endeavors, so that they operate as independent, natural, spontaneous forces. The discussion obviously follows the path that you decide on the issue of whether intelligence is only housed in biological organisms or not.

Count Lucanor wrote: ↑October 18th, 2024, 11:28 am
... believe that most if not all talk about AI in this forum and mainstream media is ultimately nourished by the singularity hypothesis, which goes as follows:

But, Count Lucanor, what if the singularity hypothesis is not the hypothesis that is argued for? I want to argue only that there is no reason to believe that sentience and intelligence can only be housed in biological organisms. The so-called "singularity" might be possible – I'm unsure about that – but it is not what I argue for.
The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence. (Wikipedia)
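To see why that model predicts an "explosion" rather than steady growth, here is a minimal toy sketch in Python (the function name and growth parameters are invented purely for illustration; it models no real system): if each generation's self-improvement is proportional to its current capability, the gains compound and growth accelerates.

# Toy sketch of the feedback loop in Good's intelligence explosion model.
# All numbers are invented for illustration; this models no real AI system.
def explosion(capability=1.0, rate=0.1, generations=10):
    """Each cycle the agent improves itself in proportion to how capable it
    already is, so gains compound: c[n+1] = c[n] * (1 + rate * c[n])."""
    history = [capability]
    for _ in range(generations):
        capability *= 1 + rate * capability
        history.append(capability)
    return history

for gen, c in enumerate(explosion()):
    print(f"generation {gen}: capability {c:.2f}")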
Lagayscienza wrote: ↑October 19th, 2024, 2:36 am

Brains came first and computers much later, from people with brains. Living agents came first and technology later, from living agents. The correct mindset is: if you're going to argue that the brain is a computer and that agency is a type of technology, you have the burden of proof, and to make your case, you have to provide the theoretical models and the empirical evidence that support it. Now, I know that such attempts are out there to be discussed, but among the general public there seems to be an attitude of "let's just believe what the tech lords tell us and then reduce our arguments to: why not?".

Count Lucanor wrote: ↑October 18th, 2024, 11:28 am
My charges against this hypothesis are:

What prevents you from seeing brains as biological computers? You say that you are confident that brains are not biological computers. What gives you this confidence? Could you explain why you believe that brains cannot be made of inorganic materials?
1) What is called artificial intelligence rests on the assumption that minds are biological computers, and that one should therefore be able to recreate minds in highly sophisticated computers, but these assumptions are wrong. It's been a long debate since Turing, but I'm confident where I stand. There's a direct relationship between proponents of AI-as-real-intelligence and the singularity hypothesis.
Anyway, the computational model has also been widely criticized. It reduces the mind to syntactic operations, the basis of algorithms and programming languages. It has been shown that such operations don't carry semantic content (meaning), since meaning implies a sort of feeling of the world found only in organisms. Today's most sophisticated software, such as generative AI and LLMs, has been shown to have none of this. It's no different from a pocket calculator, which knows nothing about math.
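To make the pocket-calculator analogy concrete, here is a small, purely illustrative sketch in Python (the names and rules are invented for this example): an adder that works only by looking up and rewriting digit symbols. It produces correct sums while referring to nothing but symbol shapes, which is the sense in which syntactic operations by themselves carry no meaning.

# Purely illustrative sketch: an adder that manipulates digit characters by
# table lookup and a carry rule, never referring to quantities or meanings.
DIGITS = "0123456789"
# The table could just as well be written out by hand; it maps a pair of
# digit symbols to a (carry symbol, digit symbol) pair.
ADD_TABLE = {
    (a, b): (DIGITS[(i + j) // 10], DIGITS[(i + j) % 10])
    for i, a in enumerate(DIGITS)
    for j, b in enumerate(DIGITS)
}

def syntactic_add(x: str, y: str) -> str:
    """Add two numerals right to left using only lookups on character pairs."""
    n = max(len(x), len(y))
    x, y = x.zfill(n), y.zfill(n)
    digits, carry = [], "0"
    for a, b in zip(reversed(x), reversed(y)):
        c1, d = ADD_TABLE[(a, b)]       # add the two digit symbols
        c2, d = ADD_TABLE[(d, carry)]   # then fold in the carry symbol
        carry = "1" if "1" in (c1, c2) else "0"
        digits.append(d)
    if carry == "1":
        digits.append("1")
    return "".join(reversed(digits))

print(syntactic_add("947", "86"))  # prints 1033; nothing here "knows" arithmetic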
Lagayscienza wrote: ↑October 19th, 2024, 2:36 am

I'm skeptical and just playing cautious until some evidence arrives.
If it were possible that structures made from inorganic materials could house brain-like processes, what would prevent you from entertaining the idea that minds could emerge from these brain-like structures?
Lagayscienza wrote: ↑October 19th, 2024, 2:36 am

I'll buy that you're not assuming idealistic positions, but I can't say the same about the rest. The pernicious influence of Idealism is all over the place. For now, sentience and intelligence ARE associated with the biological processes of organic life; if something changes, we will be able to see the evidence. As a purely theoretical model, it should leave the stage of philosophical, idealist-driven speculation.

Count Lucanor wrote: ↑October 18th, 2024, 11:28 am
2) Machines are lifeless, non-sentient. The assumption from proponents of AI-as-real-intelligence (also the singularity hypothesis) is that the more sophisticated the computers, the more "intelligent" they get, the closer to becoming life-emulating, sentient entities. The conceptual base is that life and sentience are emergent properties of intelligence. I say this is nonsense.

You say above that the conceptual base of AI proponents is that "life and sentience are emergent properties of intelligence". But that is Idealism, and it is not my assumption. Rather, I think sentience and intelligence have been emergent properties of life, but it's hard to see why sentience and intelligence must be associated only with the biological processes of organic life.
Lagayscienza wrote: ↑October 19th, 2024, 2:36 am

However, the computational model of mind and intelligence is at its base. That "intelligence" is not real intelligence, just a simulation of intelligence, in the same way a parrot can emulate human speech.

Count Lucanor wrote: ↑October 18th, 2024, 11:28 am
3) Proponents of AI-as-real-intelligence (also the singularity hypothesis) believe that generative AI and LLMs (Large Language Models) are the holy grail of human-like computer intelligence, getting us closer to machines becoming life-emulating, sentient entities. Because this is not real intelligence, nor real life, nor real sentience, I say this is nonsense. It has been demonstrated that these models still can't think, reason, or have interests. They cannot have interests because they don't have any "feeling" apparatus.

I don't believe the current crop of LLMs are sentient, or that they have interests. However, they certainly have abilities we associate with intelligence. These abilities, and our understanding of neural networks, seem to me like a humble start on the road to eventually building brain-like structures that perform similarly to organic brains.
Lagayscienza wrote: ↑October 19th, 2024, 2:36 am

With the current models, it is impossible in principle. By definition, technology is instrumental to humans. I have explained in detail in previous posts in this thread why the Singularity scenario is very unlikely, as it involves social action from a race of machines.

Count Lucanor wrote: ↑October 18th, 2024, 11:28 am
4) Technological growth is the result of human action, and the nature of technology itself is instrumentality: it is a tool invented and used by humans. Its growth is not a "natural growth" outside of human society. It is very unlikely that, spontaneously and without direct human intervention, the products of human technology will become uncontrollable by ceasing to be instrumental and becoming agents on their own.

I agree that at present it is unlikely. But, down the road, is it impossible in principle?
Lagayscienza wrote: ↑October 19th, 2024, 2:36 am

I would like to see some evidence, not just "why not?" speculations made without reference to the current state of our knowledge and capabilities. Would it be useful to talk about the possibility of teletransportation? I don't think so.

Count Lucanor wrote: ↑October 18th, 2024, 11:28 am
5) Even in the highly unlikely scenario that humans managed to create life-emulating intelligent machines that were agents on their own, pursuing their own interests, it would imply that they constitute a new race or class of entities with the power of social action. If such a sci-fi scenario were possible, it would indeed bring unforeseeable consequences for human civilization, as the singularity hypothesis predicts, but that new history would be entirely undetermined and contingent, just as human history is right now.

Right. Their future would be undetermined and contingent. But does that make it impossible? We inhabit a deterministic universe in which contingent processes such as evolution by natural selection unfold. Why should we think that such processes are only possible for organisms like us? Why, in deep time, could evolution of some form not play a part in the development of autonomous, self-replicating machines that we build and send out to explore and colonize the galaxy?
Lagayscienza wrote: ↑October 19th, 2024, 2:36 am

No problem whatsoever.
Apologies for all the questions. It's just that I'm trying to better understand your position.