Steve3007 wrote: ↑October 1st, 2024, 5:48 am In my view, living things which we regard as possessing intelligence, sentience, consciousness, creativity, agency, etc. (such as humans) are made from matter. It may turn out in the future that they're not, but the evidence available so far suggests that they are. Given this fact, I can't see any reason why other material things with intelligence, etc., couldn't, in principle, be made by humans out of matter (other than in the normal way that we make them).
The question of whether this could apply to "things" existing in the form of software is a special case of this general principle. If we accept that, in principle, an intelligent entity could be manufactured by putting pieces of matter together in particular ways, the question is then this: whatever it is about the configuration of matter that gives rise to intelligence, can that property be replicated by software? Since the software is a system for numerically solving large numbers of mathematical equations applied to very large arrays of numbers, this is a special case of the more general question: Is the physical universe entirely describable by mathematics? Or is there some aspect of it (an aspect that is crucial to the development of intelligent life) which could never, even in principle, be so described?
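(Steve3007's description of such software – numerically solving equations applied to very large arrays of numbers – can be made concrete. Below is a minimal Python sketch of a single generic neural-network layer: just a matrix multiplication, an addition, and a nonlinear function. The sizes and values are arbitrary illustrations, not taken from any real system.)

```python
import numpy as np

# One neural-network "layer": multiply an input array by a weight
# matrix, add a bias array, and pass the result through a nonlinear
# function. Stacking such layers is, at bottom, all the software does.
def layer(x, W, b):
    return np.tanh(W @ x + b)

rng = np.random.default_rng(seed=0)
x = rng.normal(size=4)        # 4 arbitrary input numbers
W = rng.normal(size=(3, 4))   # 3x4 array of weights
b = rng.normal(size=3)        # 3 bias numbers

print(layer(x, W, b))         # 3 output numbers
```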
amorphos_ii wrote: I reckon that all intelligence needs is information exchange, but fundamentally there is something different between that and when info is exchanged within the context of a subjective MIND being present to experience it. For me, 'mind' is infinite and is one of the fundaments of reality, as is 'body' [e.g. space is the first iteration and manifestation of body]. However, information has to be able to act to its own purposes, i.e. without mind [directly]. So, in short, when there is a body coalesced with a mind, so to say, you get a subjective mind, which affects the relative body/info by virtue of its being the inner 'eye' – the observer and that which perceives.

If this is a premise:
amorphos_ii wrote: ↑December 17th, 2023, 11:49 am Should AI be called something other than 'intelligence' to be correct?

So 'artificial intelligence' should be called 'paradoxical intelligence' to be correct.
Empiricist-Bruno wrote: ↑November 28th, 2024, 6:07 pm I think AI is intelligent if you are willing to forget how it's being generated. If you want to take into account how "the intelligence" came about then, in my opinion, it's a paradox of intelligence. It can't be intelligence, as it has no cognition, and what it says only appears to be an opinion but is not. You need a person to have an opinion, don't you? So how can a robot hold an intelligent opinion? It does not, and yet it does. Hence, it's a paradox.

No, it's not a paradox. It's a deliberate (and transparent) deception. (Current) AIs are implemented to look intelligent, not to be so.
Count Lucanor wrote: My apologies for not answering promptly, it has been a busy week.

No problem, Count Lucanor. I've appreciated the time-out, which let me read up more and work out what I think about consciousness, AI and AGI. It's fascinating, but I'm still trying to get across it all.

Count Lucanor wrote: I'm also trying to get a grip on the implications of what you said. There are several issues I would like to address, more with an inquisitive mind than trying to settle the matter completely.

Yes, that's how I'm approaching the topic. It's unlikely that we will settle anything here, but we can look more closely at definitional issues so that we're not talking past each other. From what I've been reading, it seems that anything with any level of intelligence, from the least to the most intelligent, will have a neural network which can build a model of its world, a sensory array, and some level of sentience.
I agree that none of today's so-called AIs are intelligent. And that is because, as Hawkins says, AI research has not been focused on producing AGI but on building inflexible machines that can only perform single tasks, like playing chess, constructing sentences that look meaningful to us, or assembling components on a production line. Performing these tasks is not an indication of intelligence.

In order to build AGI, rather than focusing on machines that can do just one thing better than a human, such as play chess, research should focus on producing machines that can learn to do many things the way organic life forms can. The way to start is by building something much simpler than a human-level intelligence. Building something equivalent to an artificial cockroach would be a major breakthrough. Unlike a Roomba or a chess-playing machine, a cockroach has the flexibility to operate autonomously in many different environments, and to learn, survive and reproduce without having to be reprogrammed by humans.

Based on my reading, I think such machines are possible and that, eventually, AGIs with human-level intelligence, whose neural networks are built on the same principles as organic neural networks, will also be possible. I agree with Hawkins and the other neuroscientists and computer scientists who believe this can be done. Organic neural networks can be understood. And what can be understood can be reproduced.

We can get into the neuroscientific details of how the brain works and how AGI could be built on brain principles if you like. I'd recommend the books by Jeff Hawkins and Max Bennett that I mentioned previously. They're good reads.

Lagayascienza wrote: ↑December 2nd, 2024, 10:59 pm [...]

I wish you had addressed some of my concerns in my previous post, but OK, let's see what is still in dispute and what is not, and restart from there.
Pattern-chaser wrote: ↑November 30th, 2024, 7:56 am No, it's not a paradox. It's a deliberate (and transparent) deception. (Current) AIs are implemented to look intelligent, not to be so.

In order to be a deception, I think you would agree that it must be able to deceive. If something meant to deceive isn't deceiving anyone, it can hardly be called a deception. What you are suggesting is like saying that a dildo is meant to deceive a woman into believing she has a penis, and that this deception is transparent, meaning she can tell that the dildo isn't a penis but doesn't mind that fact, and so she is fooled by the dildo.
amorphos_ii wrote: ↑December 17th, 2023, 11:49 am Should AI be called something other than 'intelligence' to be correct?

Dildo Intelligence.
Count Lucanor wrote: ↑December 3rd, 2024, 11:11 am [...]
We are not disputing the assertion that, in order for humans to develop a technology that can appropriately be called artificial intelligence, the corresponding biological systems must be understood and replicated.
We are also not disputing that, theoretically, this is possible. But we cannot agree on whether it will in fact be achieved: you see no insurmountable obstacle on that path, while I hold the view that, in practice, human technical capabilities are not endless. So I propose that we wait and see, making assessments at any given moment based on the available evidence of the state of the art.

For now, we can both agree that humans have not achieved AI, nor will they achieve it under the current computational paradigm (widely publicized as real AI by tech companies and AI enthusiasts in the media), which is fundamentally divorced from any reference to biological systems and the physics or physiology involved.
Am I right, so far?

Yes. I agree with all of that.
Count Lucanor wrote: OK, let's look now at neural networks. You insist that they are the key to finding intelligence. I could agree; however, we should leave aside the concept as conceived in the technological realm, where the term has become popular. In that domain, a neural network is reduced to a connection between (very simplified) virtual neurons. Actually, what we call a neural network is just a broad concept of the complex biological systems in charge of cognitive operations. That is, we are referring to the whole nervous system, which in insects, birds, mammals, etc., comprises at least two subsystems: the Central Nervous System and the Peripheral Nervous System, the latter also including the Visceral Nervous System, which operates autonomously, without conscious control. They are all composed of many distinct parts, organs, etc., with their particular physiology and functions, and with important differences between species, which would then explain the so-called spectrum of intelligence. The brain, of course, is one of those organs, but not the only one. As you see, it is practically the whole organism that constitutes the neural network and allows its agency and navigation through the world. This is consistent with the concept of embodied cognition.

Now, still pending: what is it that we call "intelligence" in this whole nervous system? Is it the capability of behaving like an organism? If so, how do we map this against the spectrum of intelligence?

Ok. Based on my reading, I'll try to formulate some provisional answers to those specific questions.
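(To make concrete how reduced the tech world's "very simplified virtual neurons" are – the point Count Lucanor raises above – here is a minimal Python sketch of one such unit, assuming only the standard textbook model of a weighted sum passed through a sigmoid activation. Dendritic trees, neurotransmitters, spike timing and everything else a biological neuron has are absent; the inputs and weights are purely illustrative.)

```python
import math

# An entire "virtual neuron" as used in artificial neural networks:
# a weighted sum of inputs squashed through an activation function.
def virtual_neuron(inputs, weights, bias):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid "firing rate"

# Three arbitrary inputs and weights, purely for illustration.
print(virtual_neuron([0.5, -1.2, 0.3], [0.8, 0.1, -0.4], bias=0.2))
```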
amorphos_ii wrote: ↑December 4th, 2024, 12:21 am A question then arises where we can ask if 'mind', whatever that is [it is there, its qualia are also noticeably present], would become a subjective thinking being – given the same instruction?

If, say, we imagine mind to initially be like a space, then when a human brain connects up, there will be a way it does that. As all info can be accounted for, that only really leaves mental qualia as the means to interact or otherwise connect up.

Ergo, if a machine or apparatus does not produce mental qualia [I don't think computers or AI chips do], then how can it connect to mind?

Interesting speculation. IMO, the cognitive experience correlates with language. In the case of a possible AI, the experience can be broken down into energy inputs correlating with language that could resemble human language. However, the path of thinking about thinking is not (IMO) a clear one. It has to do with the self-regulating executive functions as the system becomes "aware" of the meta-components in the evolution of its decision-making. For example: what is the gratification concept that overcomes the logical method?
Lagayascienza wrote: ↑December 3rd, 2024, 11:02 pm Yes. I agree with all of that.

Fine, now we can focus on other related issues.
Lagayascienza wrote: ↑December 3rd, 2024, 11:02 pm 1.) What is it that we call intelligence?

I'm okay with the usual definitions – the one on Wiki, for example, which speaks in terms of the capacity for abstraction, logic, understanding, self-awareness, learning, reasoning, planning, creativity, critical thinking, and problem-solving, and the ability to perceive or infer information and to retain it as knowledge to be applied [flexibly] to adaptive behaviours within an environment or context. (I added "flexibly".)

Are you ok with that? For our purposes, I think that pretty much covers the field as a definition of intelligence.

That definition could be fine, except that it clashes with some concepts you previously endorsed. If we take at least abstraction, logic, reasoning, creativity and critical thinking, those are usually associated with the presence of the human neocortex, so that leaves out all other mammals, birds, insects, reptiles, fish, etc. At best, it only keeps the scope of intelligence within mammals. However, all those other groups left out do have nervous systems and can be said to have neural networks. How come neural networks are what intelligence requires, and yet these creatures are not intelligent? And then what happens to intelligence as a spectrum that runs from insects to humans? I think you should review that definition.
Lagayascienza wrote: ↑December 3rd, 2024, 11:02 pm 2.) Is it the capability of behaving like an organism?

Behaviour would be some indication of intelligence. We test human performance with IQ tests. They might be of some value when we're talking about AGIs with language that are purported to have human-level intelligence. But a chess-playing machine could not pass an IQ test. What about animals lower down the intelligence spectrum that don't have language? There are other tests – of problem-solving ability, learning, and the flexible application of knowledge to novel problems in new situations – that can be used in these cases. That's how we know that rats and other animals can learn things and apply knowledge to new problems, and that they have some level of intelligence.

If I ask a machine which purportedly has a high level of general intelligence to make me a fruit cake, and it does so successfully, then I have to assume that it knows the concepts "cake", "fruit", "flour", "mix", "bake", "oven", "temperature", etcetera, and that it knows which fruit to use. If it does not know these concepts but goes away and learns them and then successfully bakes my fruit cake, and if after that I tell it to go clean itself up and then help my kid with his math and history homework, which it also does successfully, then I think it would be reasonable to grant that the machine is intelligent in the AGI sense. This behaviour would demonstrate that the machine has a sense of self, and that it can learn and flexibly apply its learning to new tasks. So, yes, I think behaviour would be a good indication. But if a machine can only do one thing, such as play chess, then, no, I don't think that would demonstrate general intelligence, even if it can beat every human at chess. That would demonstrate only that it can crunch data to perform one task. That is automation, not intelligence.

An indicator is used as a clue about the presence of something. If that something is concrete and tangible, an indicator is a very reliable way of confirming its presence. When it is something more abstract or diffuse, an indicator helps, but it is not as reliable as in the first case. A given behavior could give us a hint of intelligence if we have figured out what intelligence is, how it works and how it actually produces such behavior. The hypothetical examples you give of machines doing stuff on their own, such as baking a fruit cake and helping the kids with school homework, are based on analogies of human behavior, but that does not eliminate the possibility of automation; it only illustrates a higher level of complexity in automation for doing more tasks. I doubt this is a crucial indicator of the presence of anything similar to what is happening within an organism (life, intelligence, sentience, agency, etc.). Remember also that you identified intelligence in far simpler organisms, none of which could do the tasks you are using in your example. So what is the behavioral clue of intelligence there?
Lagayascienza wrote: ↑December 3rd, 2024, 11:02 pm 3.) How do we map the spectrum of intelligence?

We look at the range of things an artificial neural network can do. When there is no intelligence, the range will be very limited and there will be no flexibility to learn autonomously, operate in novel situations and perform new tasks. To do those things, a non-intelligent neural network would need to be reprogrammed by a human. It could not learn to do new things on its own, because it is not intelligent.

As mentioned, for animals other than humans, we also look at performance. We can test whether a rat or a cockroach can do things autonomously in novel situations – things such as learning the layout of a maze in order to find food. The range of things that the neural networks of a rat or a cockroach can learn to do will be more limited and less flexible than what an AGI with a more complex neural network of human-level intelligence could do. But organisms with relatively simple neural networks would be the place to start in learning how to build AGI.

Exactly how our own organic neural networks do what they do is still being worked out. Jeff Hawkins, in his book A Thousand Brains: A New Theory of Intelligence, goes into this, and the other questions you have posed, in some detail. Max Bennett, in his book A Brief History of Intelligence: Why the Evolution of the Brain Holds the Key to the Future of AI, also goes into these questions within an evolutionary framework.

Actually, my question was how we map it against the different anatomical structures found in living beings, given their different configurations, functions, etc. The idea was that we needed to figure out how this works in biological systems before we try to replicate them with human technology. Your answer seems to go back again to what can be done artificially, trying to identify intelligence there by looking at clues from external behavior. That's just more of the same approach that has gotten us where we are right now: no real AI, just simulations.

Again, my point as a preamble to the question was: "neural networks" is the preferred term extracted from the simplified models of the tech world. In reality, neural networks, as we see them in their original biological domain, involve quite a lot more. So how do we map the spectrum of intelligence onto that? It has to be something more than "more dense or complex connections of neurons", which is, of course, how the computer guys see the problem: as a problem of computational power. The differences between insects and humans (assuming we still embrace the notion of an intelligence spectrum running across all the species between them) are not explained that way. Or are they?