Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 11th, 2024, 10:16 am
by The Beast
The possibility of Hume’s delirium being replicated is (IMO) undesirable. But to a scientist with the necessary “passion” to speculate, the underlying landscape of the brain is one of beauty and abstract art. Take as a premise Hume’s words: “simple perceptions combine to form complex perceptions in ways that explain human thought, belief, feeling and action”. Seeing how the same simple perceptions produce different complex perceptions in different brains, I am not sure that simple perceptions produce the same complex perceptions even in the same brain at every instance. Similarly, the teleportation of hydrogen is one of the “energy state” and not of the physical state. Maybe we can speculate on a vision of a virtual self that might or might not be a copy of the physical self. The semantic possibilities are endless and could end in complex states of delirium.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 11th, 2024, 11:32 am
by Steve3007
Count Lucanor wrote:Steve3007 wrote:Count Lucanor wrote:There are dividing lines in a continuum. That’s what makes it possible for a given range of the electromagnetic spectrum to give you deadly radiation while other ranges don’t. Some give you visible light and others don’t.
No, the objective existence of discrete dividing lines is not what makes that possible. Stating that something is a continuum is not the same as saying that every part of it is the same. It just means that the changes are continuous and not discrete.
Dividing lines don’t have to be discrete, countable integers. And it doesn’t matter that the changes are continuous, as exemplified by the electromagnetic spectrum. That doesn’t stop us from identifying and characterizing what happens at different degrees or points within any spectrum. We can even identify ranges or “zones” based on given sets of properties. We do it all the time, from historical periodization to the separation of parts within a whole.
I don't want to get bogged down in semantics, but a dividing line is ... a line. A dividing line is, by definition, a discrete boundary - a discontinuity - a point where the slope of the graph is undefined, etc. My original point, to which you replied, was that these discrete lines are placed by us according to our purposes, with the example of the evolution of modern humans.
Count Lucanor wrote:Steve3007 wrote:You can simulate water molecules on a computer and you'll never get physical water molecules, so obviously you (a person in the real world outside of the computer) will never get wet. But whatever properties emerge from the collective behaviour of water molecules can, in principle, also emerge in the simulation. So, as I said, emergent properties of both biological and non-biological systems can also emerge within the simulation.
Nope, the supposedly emergent properties of simulated water cannot be the real, physical, emergent properties of water, unless we think that emergence is a function of the algorithms, of the software running the natural program, and not of the physical properties themselves.
That’s where the mistake is...
I presume "Nope" and "That's where the mistake is" means you disagree with what I said above.
So, as I understand it, your position is that properties of individual water molecules (for example) can be simulated but emergent properties of those molecules can't? Emergent properties are collective properties. They're properties that emerge from the collective behaviour of the constituent parts but don't exist in those parts individually. Given all this, your position makes no sense to me. It's probably best to check that I've understood that position before continuing.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 11th, 2024, 7:00 pm
by Count Lucanor
Steve3007 wrote: ↑November 11th, 2024, 11:32 am
Count Lucanor wrote:Steve3007 wrote:Count Lucanor wrote:There are dividing lines in a continuum. That’s what makes it possible for a given range of the electromagnetic spectrum to give you deadly radiation while other ranges don’t. Some give you visible light and others don’t.
No, the objective existence of discrete dividing lines is not what makes that possible. Stating that something is a continuum is not the same as saying that every part of it is the same. It just means that the changes are continuous and not discrete.
Dividing lines don’t have to be discrete, countable integers. And it doesn’t matter that the changes are continuous, as exemplified by the electromagnetic spectrum. That doesn’t stop us from identifying and characterizing what happens at different degrees or points within any spectrum. We can even identify ranges or “zones” based on given sets of properties. We do it all the time, from historical periodization to the separation of parts within a whole.
I don't want to get bogged down in semantics, but a dividing line is ... a line. A dividing line is, by definition, a discrete boundary - a discontinuity - a point where the slope of the graph is undefined, etc. My original point, to which you replied, was that these discrete lines are placed by us according to our purposes, with the example of the evolution of modern humans.
Count Lucanor wrote:Steve3007 wrote:You can simulate water molecules on a computer and you'll never get physical water molecules, so obviously you (a person in the real world outside of the computer) will never get wet. But whatever properties emerge from the collective behaviour of water molecules can, in principle, also emerge in the simulation. So, as I said, emergent properties of both biological and non-biological systems can also emerge within the simulation.
Nope, the supposedly emergent properties of simulated water cannot be the real, physical, emergent properties of water, unless we think that emergence is a function of the algorithms, of the software running the natural program, and not of the physical properties themselves.
That’s where the mistake is...
I presume "Nope" and "That's where the mistake is" means you disagree with what I said above.
So, as I understand it, your position is that properties of individual water molecules (for example) can be simulated but emergent properties of those molecules can't? Emergent properties are collective properties. They're properties that emerge from the collective behaviour of the constituent parts but don't exist in those parts individually. Given all this, your position makes no sense to me. It's probably best to check that I've understood that position before continuing.
I have two types of objections:
1) You can simulate how hydrogen and oxygen atoms bond to form water molecules, and how those molecules interact with each other under whatever parameters are relevant to the model, but without further human programming that “water” will not, all by itself, start behaving like water with all of its emergent properties (temperature, adhesion, cohesion, etc.), because those properties lie outside the model. Even if you went further and simulated the molecular behavior that produces heat, for example, and attached to your model a simulated subsystem that is sensitive to heat, that subsystem would not react to the accelerated motion of the water molecules as the emergent property heat. Of course you could introduce the parameter heat to affect the subsystem, but it would not emerge by itself from the movement of molecules. You need the property itself, not the simulation of what makes the property emerge, to affect the heat-sensitive system (see the sketch after this list).
2) Even if you managed to cause properties to emerge from the simulation, those virtual properties could not have any effect on another physical system that is sensitive to the physical properties being simulated. You could simulate electricity, but that will not power anything. You could simulate water, but that will not wet anything. You could simulate intelligence or consciousness, but that will not transfer the qualities of intelligence to an inanimate, lifeless object. That’s why no one believes that a pocket calculator is intelligent at math.
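For what it’s worth, here is a minimal Python sketch of the disputed step, offered only to make it concrete: nothing in it comes from either poster, and every name and number in it (the HeatSwitch class, the 350 K sample, the 330 K trip threshold, the molecular mass) is an illustrative assumption. The “heat-sensitive” subsystem is never handed a temperature parameter; it derives its reading from the simulated molecular velocities it is coupled to. Whether that derivation counts as heat emerging within the simulation, or as the programmer introducing the property through the coupling, is exactly the point the two posters dispute.

    import numpy as np

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    class HeatSwitch:
        """Toy 'heat-sensitive' subsystem: it never receives a temperature
        parameter; it only sees the velocities of the simulated particles
        it is coupled to and derives its reading from them."""
        def __init__(self, trip_kelvin):
            self.trip_kelvin = trip_kelvin

        def reading(self, velocities, mass):
            # kinetic temperature: (3/2) k_B T = (1/2) m <v^2>
            mean_sq_speed = np.mean(np.sum(velocities ** 2, axis=1))
            return mass * mean_sq_speed / (3.0 * K_B)

        def tripped(self, velocities, mass):
            return self.reading(velocities, mass) > self.trip_kelvin

    # Toy "gas": velocities sampled as if the simulated molecules were at 350 K.
    mass = 2.99e-26                      # approx. mass of one water molecule, kg
    sigma = np.sqrt(K_B * 350.0 / mass)  # per-component velocity spread at 350 K
    rng = np.random.default_rng(1)
    velocities = rng.normal(0.0, sigma, size=(50_000, 3))

    switch = HeatSwitch(trip_kelvin=330.0)
    print(switch.reading(velocities, mass))  # about 350 K, derived rather than supplied
    print(switch.tripped(velocities, mass))  # True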
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 11th, 2024, 9:48 pm
by Count Lucanor
Steve3007 wrote: ↑November 11th, 2024, 11:32 am
I don't want to get bogged down in semantics, but a dividing line is ... a line. A dividing line is, by definition, a discrete boundary - a discontinuity - a point where the slope of the graph is undefined, etc. My original point, to which you replied, was that these discrete lines are placed by us according to our purposes, with the example of the evolution of modern humans.
You’re giving a special meaning to the word “discrete” that I’m struggling to agree with in this context. Discrete values of anything are countable integers of it, with no middle ground between them. A continuum has non-discrete values, but it has values nonetheless, such as the wavelengths of visible light in the electromagnetic spectrum. Time is continuous, but that is not contradicted by its division into years, days, hours, etc. You seem to think that the idea of measurement itself means we are pointing to discrete values, but that is not the case. Now, you can say that wavelengths, days and years are somehow arbitrary conventions, but so are our abstractions about continuums. When talking about life forms, pointing to a continuum is a mere abstraction over a number of discrete entities. So there should be no trouble in acknowledging the objective validity of our classifications, such as the one that divides life forms into orders, families, genera, species and so on. There are distinct separations among them. And clearly the human species has some unique features that justify separating us from the rest of the animals and the rest of nature.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 11th, 2024, 11:40 pm
by Steve3007
Count Lucanor wrote:You’re giving a special meaning to the word “discrete” that I’m struggling to agree with in this context...
No, I'm using standard definitions of terms such as "discrete" and "dividing line".
Count Lucanor wrote:I have two types of objections:...
To what? To something that I've said?
Temperature (to take an example of an emergent property that you've mentioned) is a measure of the mean kinetic energy of the individual molecules, i.e. of their root-mean-square speed. If the movements of the molecules can be simulated then their temperature can be simulated.
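A minimal Python sketch of that relation (mine, not Steve3007's; the molecular mass and velocity spread are arbitrary illustrative values): the temperature here is not an extra ingredient added to the simulation, just a number computed from the simulated velocities, and it scales with the square of those velocities as the kinetic relation (3/2) k_B T = (1/2) m <v^2> implies.

    import numpy as np

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def kinetic_temperature(velocities, mass):
        # (3/2) k_B T = (1/2) m <v^2>  =>  T = m <v^2> / (3 k_B)
        mean_sq_speed = np.mean(np.sum(velocities ** 2, axis=1))
        return mass * mean_sq_speed / (3.0 * K_B)

    mass = 4.65e-26  # roughly the mass of one N2 molecule, kg
    v = np.random.default_rng(0).normal(0.0, 300.0, size=(10_000, 3))  # simulated velocities, m/s

    t1 = kinetic_temperature(v, mass)
    t2 = kinetic_temperature(2.0 * v, mass)  # double every speed
    print(t1, t2, t2 / t1)                   # ratio is 4: T scales with the square of the speeds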
I think you're confused as to how simulations work and are objecting to things that I haven't said without really reading what I have said. When it gets to that stage I don't think there's much point in continuing (I don't want to just keep repeating myself or quoting what I've said previously), but it's been an interesting conversation. Thanks.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 12th, 2024, 1:17 am
by Lagayascienza
A lot has been said in this thread so far and it's difficult to keep track of who agrees or disagrees with what and why.
There seem to be two camps. Firstly, there are those who think AGI is possible. Then there are those who think it is impossible. The impossibilists think there will always be something mysterious or spooky about consciousness and intelligence. I don't agree with that because intelligence and consciousness emerge from physical processes in physical brains and these physical processes can be understood and, eventually, emulated in artificial brains. Therefore, I think AGI is possible in principle.
However, to make it a reality we will first need to understand how "meat makes mind". Once we understand that, and can emulate it, the prospects for AGI are endless. And there is work being done right now on understanding how our brains do what they do.
When we can build it, AGI will be no more of a simulation than our own intelligence and consciousness. Artificial brains will build models of the world just as we do, and they will be flexible and capable of learning in a similar way to our own brains.
The current crop of AIs are nowhere near being able to do this. But, eventually, brains built on the same principles as organic brains will be able to achieve AGI. We have no reason to think that this is impossible.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 12th, 2024, 11:44 am
by The Beast
Basically, two views, in whatever explanatory example.
The deterministic view might use a Laplace’s demon to predict the heat of a gas from the mean kinetic energy of its molecules (human action being part of the collective variables), with the use of bridge laws that link vocabularies (a diachronic model of explanatory reduction). Really.
However, there are exotic phenomena. These are due (IMO) to the breaking of symmetries and are therefore spatially unpredictable; moreover, the complexities of modelling microphenomena such as the quantum bounce make the “crawling of the micro-causal web” unthinkable.
In the case of emergence due to unprogrammed functionality (AI), it is an interesting discussion relative to simulations that have non-reductive explanations… I enjoyed many.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 12th, 2024, 1:48 pm
by Count Lucanor
Steve3007 wrote: ↑November 11th, 2024, 11:40 pm
Count Lucanor wrote:You’re giving a special meaning to the word “discrete” that I’m struggling to agree with in this context...
No, I'm using standard definitions of terms such as "discrete" and "dividing line".
Just in case, I looked it up. It is exactly as I said.
Steve3007 wrote: ↑November 11th, 2024, 11:40 pm
Count Lucanor wrote:I have two types of objections:...
To what? To something that I've said?
You asked me to clarify what I was disagreeing with. I responded.
Steve3007 wrote: ↑November 11th, 2024, 11:40 pm
Temperature (to take an example of an emergent property that you've mentioned) is a measure of the mean kinetic energy of the individual molecules, i.e. of their root-mean-square speed. If the movements of the molecules can be simulated then their temperature can be simulated.
Yes, sure you can program and simulate the motion of molecules, but that will not make heat emerge “naturally”.
Steve3007 wrote: ↑November 11th, 2024, 11:40 pm
I think you're confused as to how simulations work and are objecting to things that I haven't said without really reading what I have said. When it gets to that stage I don't think there's much point in continuing (I don't want to just keep repeating myself or quoting what I've said previously), but it's been an interesting conversation. Thanks.
Your choice. I agree you are repeating yourself, but that’s because instead of addressing the arguments against your statements, you just double down on them.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 12th, 2024, 3:06 pm
by Count Lucanor
Lagayascienza wrote: ↑November 12th, 2024, 1:17 am
A lot has been said in this thread so far and it's difficult to keep track of who agrees or disagrees with what and why.
There seem to be two camps. Firstly, there are those who think AGI is possible. Then there are those who think it is impossible. The impossibilists think there will always be something mysterious or spooky about consciousness and intelligence. I don't agree with that because intelligence and consciousness emerge from physical processes in physical brains and these physical processes can be understood and, eventually, emulated in artificial brains. Therefore, I think AGI is possible in principle.
However, to make it a reality we will first need to understand how "meat makes mind". Once we understand that, and can emulate it, the prospects for AGI are endless. And there is work being done right now on understanding how our brains do what they do.
When we can build it, AGI will be no more of a simulation than our own intelligence and consciousness. Artificial brains will build models of the world just as we do, and they will be flexible and capable of learning in a similar way to our own brains.
The current crop of AIs are nowhere near being able to do this. But, eventually, brains built on the same principles as organic brains will be able to achieve AGI. We have no reason to think that this is impossible.
A third camp could be opened: that of the realists, who see that current AI technology, based on the computational theory of mind, cannot achieve real intelligence or agency. Even worse, the tech companies and their engineers are not looking anywhere else, among other things because they are only interested in what is most profitable in the short run. They are selling snake oil.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 12th, 2024, 8:38 pm
by Lagayascienza
Count Lucanor wrote: ↑November 12th, 2024, 3:06 pm
Lagayascienza wrote: ↑November 12th, 2024, 1:17 am
A lot has been said in this thread so far and it's difficult to keep track of who agrees or disagrees with what and why.
There seem to be two camps. Firstly, there are those who think AGI is possible. Then there are those who think it is impossible. The impossibilists think there will always be something mysterious or spooky about consciousness and intelligence. I don't agree with that because intelligence and consciousness emerge from physical processes in physical brains and these physical processes can be understood and, eventually, emulated in artificial brains. Therefore, I think AGI is possible in principle.
However, to make it a reality we will first need to understand how "meat makes mind". Once we understand that, and can emulate it, the prospects for AGI are endless. And there is work being done right now on understanding how our brains do what they do.
When we can build it, AGI will be no more of a simulation than our own intelligence and consciousness. Artificial brains will build models of the world just as we do, and they will be flexible and capable of learning in a similar way to our own brains.
The current crop of AIs are nowhere near being able to do this. But, eventually, brains built on the same principles as organic brains will be able to achieve AGI. We have no reason to think that this is impossible.
A third camp could be opened: that of the realists, who see that current AI technology, based on the computational theory of mind, cannot achieve real intelligence or agency. Even worse, the tech companies and their engineers are not looking anywhere else, among other things because they are only interested in what is most profitable in the short run. They are selling snake oil.
I am a realist. AGI will be different from current AI, which is inflexible, can do only one or a few things, cannot learn new things, and has no “mental” model of the world and no awareness. AGI will be able to sense the world around it, form a mental model of the world, be flexible, be able to learn new things and have awareness.
Whether we call what it and biological brains do “computation” is irrelevant. AGI will operate in a similar way to a biological brain.
There is no reason to think that this is impossible. It will be difficult, and will take time, but I’m confident that it will happen because intelligence and consciousness emerge from physical processes in physical brains and these physical processes can be understood.
There is some very interesting work being done by neuroscientists that is yielding insights into how our brains and those of other animals do what they do. Understanding brains is what is needed so that they can be emulated to eventually produce AGI.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 12th, 2024, 11:16 pm
by Count Lucanor
Lagayascienza wrote: ↑November 12th, 2024, 8:38 pm
Count Lucanor wrote: ↑November 12th, 2024, 3:06 pm
Lagayascienza wrote: ↑November 12th, 2024, 1:17 am
A lot has been said in this thread so far and it's difficult to keep track of who agrees or disagrees with what and why.
There seem to be two camps. Firstly, there are those who think AGI is possible. Then there are those who think it is impossible. The impossibilists think there will always be something mysterious or spooky about consciousness and intelligence. I don't agree with that because intelligence and consciousness emerge from physical processes in physical brains and these physical processes can be understood and, eventually, emulated in artificial brains. Therefore, I think AGI is possible in principle.
However, to make it a reality we will first need to understand how "meat makes mind". Once we understand that, and can emulate it, the prospects for AGI are endless. And there is work being done right now on understanding how our brains do what they do.
When we can build it, AGI will be no more of a simulation than our own intelligence and consciousness. Artificial brains will build models of the world just as we do, and they will be flexible and capable of learning in a similar way to our own brains.
The current crop of AIs are nowhere near being able to do this. But, eventually, brains built on the same principles as organic brains will be able to achieve AGI. We have no reason to think that this is impossible.
A third camp could be opened: that of the realists, who see that current AI technology, based on the computational theory of mind, cannot achieve real intelligence or agency. Even worse, the tech companies and their engineers are not looking anywhere else, among other things because they are only interested in what is most profitable in the short run. They are selling snake oil.
I am a realist. AGI will be different from current AI, which is inflexible, can do only one or a few things, cannot learn new things, and has no “mental” model of the world and no awareness. AGI will be able to sense the world around it, form a mental model of the world, be flexible, be able to learn new things and have awareness.
Using your own terminology, you seem like a possibilist, not a realist. Realists look at what is in front of them and make the best assessment of it. A realist does not speculate about the future without a firm foot in the present.
Lagayascienza wrote: ↑November 12th, 2024, 8:38 pm
Whether we call what it and biological brains do “computation” is irrelevant. AGI will operate in a similar way to a biological brain.
Once again, predictions without any basis in the present.
Lagayascienza wrote: ↑November 12th, 2024, 8:38 pm
There is no reason to think that this is impossible.
Assuming that it were possible, that would not imply that it will necessarily happen. How do we jump from speculating about something to asserting it?
Lagayascienza wrote: ↑November 12th, 2024, 8:38 pm
It will be difficult, and will take time, but I’m confident that it will happen because intelligence and consciousness emerge from physical processes in physical brains and these physical processes can be understood.
Just being confident does not guarantee that something will happen or will be achieved. The reason you give for being confident doesn’t work either, because technical achievements depend on actual human capabilities, resources, etc., not on the physical nature of the problem. If you believe humans will necessarily solve all physical problems some day, that’s equivalent to a belief in the infinite powers of humans.
Lagayascienza wrote: ↑November 12th, 2024, 8:38 pm
There is some very interesting work being done by neuroscientists that is yielding insights into how our brains and those of other animals do what they do. Understanding brains is what is needed so that they can be emulated to eventually produce AGI.
Of course, a first step in trying to replicate intelligence is to understand how it is produced by living beings. We are still babies in diapers in that field and we don’t know what walls we will hit. I see one of them in the study of brains in isolation. But in any case, even if that problem is solved, being technically able to replicate intelligence is another issue.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 13th, 2024, 1:41 am
by Lagayascienza
Right. I am an AGI possibilist based on work that has been done, and is currently being done, by neuroscientists and computer scientists. Of course, there is much still to do, but the literature leads me to believe that a start has been made on a new approach to understanding intelligence and consciousness. Yes, there will be hurdles. They will be overcome. It has taken over 70 years to go from ENIAC to the smart gadgets we have today (none of them truly intelligent and certainly not conscious) and I suspect it will be at least another 70 years before we get anywhere near AGI.
Intelligence and consciousness emerge from physical processes in physical brains, and these physical processes can be understood; when they are, we will be able to emulate them and build AGI. Many once believed that fast, heavier-than-air flight was impossible because it had never been done. But it got done, because it was possible and people were willing to do the work to make it happen. AGI will get done because it is possible and enough smart people believe it can be done and are working towards it.
Current AI is to AGI what hot air balloons were to the 5th generation fighter jets of today.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 13th, 2024, 4:55 am
by Sy Borg
Over time, things change, thresholds are breached, and emergence happens. There's no reason to think that everything else in the world is subject to emergence but AI is not.
AI is about information processing. We already know that certain levels of interconnectedness, in certain configurations, result in the emergence of consciousness, because that is our own story. It's pretty clear that, over time, the level of complex interconnectedness will be sufficient to trigger conscious experience. Whether or not self-improving AI finds an advantage in experiencing will determine the configurations it chooses.
As per the above, these are very early days.
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 13th, 2024, 11:07 am
by Count Lucanor
Lagayascienza wrote: ↑November 13th, 2024, 1:41 am
Right. I am an AGI possibilist based on work that has been done, and is currently being done, by neuroscientists and computer scientists.
Which computer scientists (ones not endorsing the computational theory of mind) are working on AGI?
Lagayascienza wrote: ↑November 13th, 2024, 1:41 am
Of course, there is much still to do, but the literature leads me to believe that a start has been made on a new approach to understanding intelligence and consciousness.
I’m eager to know what this new, unprecedented approach is. I hope it is aligned with my general conviction that consciousness (or intelligence) cannot be understood as the function of an organ, but rather of the organism as a whole. That also means that at least part of the research program must necessarily focus on the interplay between the organic processes of the body as a whole and the qualitative experience of the self.
Lagayascienza wrote: ↑November 13th, 2024, 1:41 am
Intelligence and consciousness emerge from physical processes in physical brains, and these physical processes can be understood; when they are, we will be able to emulate them and build AGI.
There’s no guarantee we will understand it. That’s problem #1. Even if we did (and since there’s no guarantee, any prediction is wishful thinking), having the technical capabilities and resources to replicate it is problem #2. It is theoretically possible that I will travel to the Amazon with friends one day and capture a huge anaconda. Does that guarantee that it will happen? Of course not.
Lagayascienza wrote: ↑November 13th, 2024, 1:41 am
Many once believed that fast, heavier-than-air flight was impossible because it had never been done. But it got done, because it was possible and people were willing to do the work to make it happen. AGI will get done because it is possible and enough smart people believe it can be done and are working towards it.
Current AI is to AGI what hot air balloons were to the 5th generation fighter jets of today.
I have said before that such an argument implies an untempered belief in the infinite technical capabilities of humans, as if there were only one forward path of unstoppable technical progress that will lead, given time, to everything being solved. The truth is that we can tell what we have achieved, but that gives no clue to what else will be achieved. Current AI is to AGI what bird-imitating flapping wings were in the first days of artificial flight. To this day we have not been able to replicate the mechanics involved in the flight of birds, even though we understand the aerodynamics, for the very simple reason that scale becomes a factor. We pursued other strategies that worked for human flight. One could argue that we could do the same with intelligence, but here is the problem: there are many ways to fly, but is there any other way of being conscious? One could argue in that sense that we have already achieved THAT OTHER WAY of intelligence with our pocket calculators and the first ENIAC, but then why the unnecessary analogies with human intelligence and with agency driven by natural intelligence?
Re: Is AI ‘intelligent’ and so what is intelligence anyway?
Posted: November 13th, 2024, 8:53 pm
by Lagayascienza
I am unable to post links here but these provide a taste of what I have been reading lately:
Jeff Hawkins, A Thousand Brains: A New Theory of Intelligence, Basic Books, 2 March 2021
“These Living Computers Are Made from Human Neurons”, Scientific American, 8 August 2024
“How (and why) to think that the brain is literally a computer”, Frontiers in Computer Science, 9 September 2022
“Neural tuning instantiates prior expectations in the human visual system”, Nature Communications, 1 September 2023
“The computational power of the human brain”, Frontiers in Cellular Neuroscience, 7 August 2023
Maybe if you broadened your conception of "computation", which you seem to associate only with present-day computers, you would not be so dogmatically impossibilist. What computers currently do is a very limited form of computation which, I agree, is never likely to achieve intelligence or consciousness. Neuroscientist and computer scientist Jeff Hawkins explains that what is needed is a better understanding of the brain and the processes which occur therein, so that those processes can be emulated.