Lagayascienza wrote: ↑December 17th, 2024, 1:22 am
Count Lucnor wrote:GenAI and LLMs have been hitting “the wall”, as was predicted by a few skeptics a couple of years ago. Computation is simply not the path to consciousness.
Computation is what biological neural networks do. And so do artificial neural networks.
This assertion is quite problematic, mostly because it takes a term from one field and applies it, uncritically, to another. Originally, to compute meant simply to calculate with numbers, using mathematical operations, which involve rules for the manipulation of numerical symbols (including their position, their syntax). People who were given these tasks were called computers. Eventually the tasks were automated with machines, which came to be known as computers themselves. Computer science then developed and went on to include other syntactical operations based on the mathematical structures of formal logic (a branch of mathematics). That’s what computing means: performing mathematical/logical operations using well-defined sets of rules for the manipulation of symbols, sets of rules called algorithms.

A slide rule and an abacus are analog computers. The first automated computing machines used analog signals in electric, mechanical or hydraulic components; later, machines using digital signals on integrated circuits were developed (the modern computer). Whatever the case, analog or digital, the physical architecture of a computer involves a set of devices that can be made to interact with each other, a system, so that the relative state of one element has an effect on the state of the others. Thus you can program operations and automate the tasks you used to do manually (not in your head, because you didn’t have a pre-wired mathematical syntax; you had to learn it). Bear in mind: in modern computing, the physics of the integrated circuits allows for more computational power, but the crux of the matter lies in the software, the coded instructions, not the hardware.
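The point about computation being pure rule-governed symbol manipulation can be made concrete with a toy example (my own illustration, not from the discussion): unary addition done by blind string rewriting. The "machine" knows nothing about numbers; it only matches and replaces tokens according to fixed rules.

```python
def rewrite(tape: str, rules: list[tuple[str, str]]) -> str:
    """Apply rewrite rules repeatedly until no rule matches.

    This is an algorithm in the strict sense: a well-defined set of rules
    for manipulating symbols, with no semantics attached to the tokens.
    """
    changed = True
    while changed:
        changed = False
        for pattern, replacement in rules:
            if pattern in tape:
                tape = tape.replace(pattern, replacement, 1)
                changed = True
                break  # restart from the first rule after each rewrite
    return tape

# Unary addition: "|||+||" means 3 + 2. The rules shuffle the "+" to the
# right end of the tape, then erase it, leaving "|||||" (i.e. 5).
rules = [("+|", "|+"), ("+", "")]
print(rewrite("|||+||", rules))  # prints "|||||"
```

The system "adds" only in the eye of the interpreter who assigns meaning to the strokes; the procedure itself is nothing but syntax, which is the sense of "computing" at issue here.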
Now, the computational theory of mind states that this is exactly what living beings do when cognizing. What’s the proof? None. Where is the software? Nowhere. It’s a simple metaphor relating the states of one physical system (the biological one) to the states of a virtual system running on a physical substrate. The best chance was to demonstrate that computers do cognize; if so, it would be plausible that the cognitive apparatus of living beings worked as computers do, too. But of course, we now know that the most sophisticated computing device does not think, does not cognize. It is nothing but a highly sophisticated abacus that manipulates symbols. A pocket calculator is no smarter at mathematics than the slide rule it replaced.
Corals, hydra and anemones have a nervous system composed of neurons, so they can be said to have a neural network. Are they “computing”, that is, performing syntactical operations with symbols following a set of rules? That is very unlikely, and no one is saying that they are. It makes for a good metaphor, however, to say that the state of their biological system is comparable to the state of any other physical system that can perform automated operations.
Are the corals, hydra and anemones conscious? Intelligent? If one suggests that they are, on what basis? If they are not, why would you say that neurons are the necessary and sufficient conditions for consciousness or intelligence?
Lagayascienza wrote: ↑December 17th, 2024, 1:22 am
It will not be necessary to exactly reproduce human brains to build artificial human-level intelligence.
How about the neural network of corals and hydra? Or how about the brain of the fruit fly? Will that suffice? I’m still unsure whether you apply the term “intelligence” only to what humans can produce, and whether that is what the AI industry is focused on.
Lagayascienza wrote: ↑December 17th, 2024, 1:22 am
Two recent papers in the scientific journal Nature, which discuss the current state of play, are worth reading. Both are freely available online at Nature. They are:
Anil Ananthaswamy, “How Close Is AI to Human-Level Intelligence?”, Nature 636, 5 December 2024, and
“More Powerful AI Is Coming”, Nature 636, 22–25 (2024).
Your insistence that these articles represent “the current state of play” exemplifies the state of mind that sees a unified, scientifically objective field of cognitivism and AI research, supposedly advancing empirical knowledge. Unfortunately, no such field actually exists: everyone works within theoretical frameworks and models, with different schools trying to make their case. Surely, some of them try to rely on empirical research, but as with most technological endeavors, they arrive with presumptions taken from their preconceived models. Given the confluence of interests in the field, plus the general lack of understanding of the problem of consciousness, it becomes highly ideological and biased. When you see something hyped, you had better double-check. That’s why Anne Trafton from MIT News pointed in 2022 to a study that “urges caution when comparing neural networks to the brain”. It confirms that “computing systems that appear to generate brain-like activity may be the result of researchers guiding them to a specific outcome”. The idea that artificial neural networks are modeled on the circuitry of the nervous system remains highly disputable.