Lagayascienza wrote: ↑December 4th, 2024, 11:13 pm
Count, intelligence is what we have. How do we measure our own intelligence and the intelligence of animals lower down on the intelligence spectrum? We know the answer to that question.
Apparently, we don’t know, since we have gone through a number of definitions and will not settle on any. Just when it seems you have made up your mind about what we are pointing to when referring to intelligence, you begin describing it in a different, contradictory way. Sometimes it looks as if intelligence is what only a human neocortex can produce, sometimes it is not. Sometimes it looks as if it is found in all neural networks, sometimes only in some of them. I have pointed out specific contradictory claims, yet there has been no response to them. Maybe the right answer is found in some of those books you say you’ve been reading, and I’ve been open to hearing from you how they can contribute to this discussion, but if you can’t find it yourself, I’ll just keep asking. I made it clear at the start of this part of the thread in November that there is clearly an ambiguity in the use of the term intelligence, even in the most technical or scientific circles, which also shows in Hawkins’ book. You need to know what you’re working with in order to advance and produce any knowledge, even if you merely stick to an operational definition for the purpose of keeping your results consistent and coherent. So I offered a number of possible definitions of what it is we are talking about when referring to intelligence, but here we are. Not much progress.
Lagayascienza wrote: ↑December 4th, 2024, 11:13 pm
I ask, why can’t we measure intelligence in non-organic neural networks in the same way that we measure it in animals?
How about first knowing what you’re measuring?
Lagayascienza wrote: ↑December 4th, 2024, 11:13 pm
I get the feeling we are getting bogged down in details. The bottom line for me is that organic neural networks are physical systems which in animals produce various levels of consciousness and intelligence. Physical systems can be understood. What can be understood about organic neural networks can eventually be emulated in a non-organic substrate. You seem to believe that this may not be the case but it is not clear to me why you believe this.
It is odd that you acknowledge that we need to understand these physical systems, yet when I go into the details of those systems to see how they compare to each other, you dismiss the issue. I have pointed out that what we find in living beings is not an undifferentiated, homogeneous, continuous neural network, but a complex system organized into different anatomical parts with different functions, including hundreds of different types of neurons, and that these networks operate several organic systems within an organism, which also differ across the whole range of organic forms, from insects to humans. I propose that this is very much unlike computational neural networks, so there is something there to talk about and try to understand in physical systems. At the very least, it would help us test your hypothesis that biological neural networks can be replicated artificially, independently of the fact that we have not settled on a definition of what intelligence is.
Lagayascienza wrote: ↑December 4th, 2024, 11:13 pm
I assume that you are not a neuroscientist or computer scientist but an interested lay-person like me. You keep asking more questions, or reframing old questions. But you never give an indication of what you think is going on in organic neural networks to produce consciousness and intelligence. As a lay-person, I must leave the details of what happens in organic and non-organic neural networks to neuroscientists and computer scientists.
As Hawkins clearly stated, how exactly consciousness and intelligence work (assuming we were absolutely clear about what these terms actually refer to) is, technically speaking, a complete mystery. We know quite a lot about the rest: the anatomy, biochemistry, organic functions, etc., and how they relate to observable behaviors. But when it comes to understanding the qualitative phenomenon of conscious experience itself, all we are left to talk about is theoretical frameworks. That is why we are discussing this in a philosophy forum, focusing primarily on the possibilities of a specific theoretical framework that posits the emergence of consciousness or intelligence by means of computational, algorithmic systems. We then moved on to assess other theoretical frameworks that look into the types of systems that actually produce consciousness or intelligence (biological systems). We are now trying to identify which biological systems carry consciousness or intelligence. Those “details” seem important. One thing I’m also trying to highlight, as in a previous statement, is the ambiguity and loose application of key concepts across several fields and even among researchers. Intelligence, cognition, sentience, and agency do not mean the same thing in every instance.
I also gave more than an indication of what I think is going on in general: cognition entails the regulation of the whole body of an organism, so you cannot simply dissociate mind processes from the other organic processes that ultimately explain the behavior of that organism. The implication is that in order to artificially replicate a system that houses a mind process, whatever that may be, you would need to replicate the whole organic process, which is not simply reducible to neurons connected to each other. Of course, you may decide that you don’t want that, but only to automate tasks and outperform biological systems. That’s fine, but it’s not AI, AGI, nor anything similar.
Lagayascienza wrote: ↑December 4th, 2024, 11:13 pm
A lot of the questions you pose about organic neural networks and how they do what they do, and how that could be emulated in artificial neural networks, are dealt with in the books I have referred to.
If you could read those books then we might be able to discuss them in detail, chapter by chapter, and arrive at the key issues we differ on, if any. For example, we could discuss the sections on intelligence and what is known about organic neural networks and the sections on artificial neural networks and AI. If you have any science-based books on the subject that you think I might benefit from, I will certainly read them. If we don’t do it that way, then I fear we will just keep circling the key issues but never arrive at an understanding of our differences, if indeed we have any substantial differences.
To me, this is not about winning an argument but about finding out what is known, and what we still need to find out, in order to build AGI.
Are you open to my suggestion?
I understand you wanted to become better versed in the brain literature and contribute to this discussion with perhaps new insights. I’ve been open to that, but I don’t think I should be required to leave this forum and get acquainted with your references before engaging in a productive discussion. As a lay person myself, just as you are, I feel capable of dealing with the subject as it stands, right now. All references are welcome, but they are also open to scrutiny, since what is claimed to “be known” is often still open to study.