Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 28th, 2024, 11:38 am
by Pattern-chaser
Sy Borg wrote: October 26th, 2024, 6:52 pm Emergence happens over time, and I think it will again when it comes to AI.
Pattern-chaser wrote: October 27th, 2024, 7:23 am Emergence can only occur, I think, when the relevant skill, or the potential for it, already exists. Do AIs have the potential for intelligence or understanding? Not today, no. Tomorrow remains to be seen.
Sy Borg wrote: October 27th, 2024, 4:38 pm How does your claim about emergence stack up with the emergence of biology?
I'm not quite sure what you're asking. I think emergence can only occur in something capable of change, yes? Current AI is incapable of changing. Humans can impose change upon them, but that's not the same. AIs are currently unable to evolve because they can't change. In the future, I suppose that may change...

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 28th, 2024, 3:11 pm
by Sy Borg
Pattern-chaser wrote: October 28th, 2024, 11:38 am
Sy Borg wrote: October 26th, 2024, 6:52 pm Emergence happens over time, and I think it will again when it comes to AI.
Pattern-chaser wrote: October 27th, 2024, 7:23 am Emergence can only occur, I think, when the relevant skill, or the potential for it, already exists. Do AIs have the potential for intelligence or understanding? Not today, no. Tomorrow remains to be seen.
Sy Borg wrote: October 27th, 2024, 4:38 pm How does your claim about emergence stack up with the emergence of biology?
I'm not quite sure what you're asking. I think emergence can only occur in something capable of change, yes? Current AI is incapable of changing. Humans can impose change upon them, but that's not the same. AIs are currently unable to evolve because they can't change. In the future, I suppose that may change...
Pay attention! You are a programmer and speaking as if you are unaware of the approaching singularity - the point where AI designs better AI than humans can.

I will assume you had a brain glitch and that you are well aware that AI will reach a level where the AI it builds will be more sophisticated than the AI that humans can build. Once that happens, exponential development would seem possible.

Are you confident that you can predict the outcome of that exponential process? Autonomous self-improving AI seems likely to start a whole new round of evolution. Rocks and chemistry evolved to the point that made biology possible, and biology has evolved to a point that has made whatever AI will become possible.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 28th, 2024, 3:38 pm
by Sy Borg
AI is obviously intelligent - you ask it questions and it answers appropriately. Currently that intelligence is very limited. Every year, it will become less limited.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 28th, 2024, 10:49 pm
by Lagayascienza
Count Lucanor, nowhere did I say that AI is currently intelligent, conscious or capable of feeling anything. I have said that current AI exhibits some of the processes and behaviours commonly associated with intelligence. I said further that there is no reason to think that building AIs housed in sensate bodies and capable of intelligence and consciousness is, in principle, impossible.

Sculptor 1, you did mention simulation. You said
Sculptor 1 wrote:The evolution which led to more sophisticated forms of intelligence is driven and moderated by feelings. Feelings cannot be simulated in a machine.
You brought up "simulated" feelings. Not me.

I get the impression that some people are just opposed to the very idea of artificial intelligence. They just flatly state that it is impossible. I think this is incorrect. And I think that the computational theory of mind is more likely to be true than other proposals. If I am right, then all mental states and properties will eventually be explained by scientific accounts of physiological processes and states. In which case, further progress towards AI that is truly intelligent can be made. If it is untrue that intelligence and mind have a physiological basis, then they will be forever mysterious. However, we already know a lot about the physiological basis of intelligence and mind and so I do not think the mysterians are right.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 29th, 2024, 3:29 am
by Sy Borg
Lagayascienza wrote: October 28th, 2024, 10:49 pmI get the impression that some people are just opposed to the very idea of artificial intelligence. They just flatly state that it is impossible. I think this is incorrect.
Yes, it seems that some want to make a point against AI boosterism. Trouble is, those who are interested and curious seem to be wrongly assumed to be following technocratic sci-fi wet dreams.

As you know, and relate to, I'm just interested in the story - starting from simple basalt and obsidian on the early volcanic Earth and evolving to today. There have been a few pivotal events - the first oceans, abiogenesis, multicellularity, sentience, humanity and now, it seems, AI ... or whatever may emerge from AI.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 29th, 2024, 8:58 am
by Pattern-chaser
Sy Borg wrote: October 27th, 2024, 4:38 pm How does your claim about emergence stack up with the emergence of biology?
Pattern-chaser wrote: October 28th, 2024, 11:38 am I'm not quite sure what you're asking. I think emergence can only occur in something capable of change, yes? Current AI is incapable of changing. Humans can impose change upon them, but that's not the same. AIs are currently unable to evolve because they can't change. In the future, I suppose that may change...
Sy Borg wrote: October 28th, 2024, 3:11 pm Pay attention! You are a programmer and speaking as if you are unaware of the approaching singularity - the point where AI designs better AI than humans can.
And you are not a software designer, so I understand your ignorance. AI cannot currently design anything. It can be used as a design tool, just as a compiler can, but that is a figurative light-year away from AIs themselves doing the 'designing'.

But you continue to ignore the point I have made many times about AI:
Sy Borg wrote: October 28th, 2024, 3:11 pm I will assume you had a brain glitch and that you are well aware that AI will reach a level where the AI it builds will be more sophisticated than the AI that humans can build. Once that happens, exponential development would seem possible.

Are you confident that you can predict the outcome of that exponential process? Autonomous self-improving AI seems likely to start a whole new round of evolution. Rocks and chemistry evolved to the point that made biology possible, and biology has evolved to a point that has made whatever AI will become possible.
AI has not yet been designed to be able to "self-improve". It has not yet been programmed in such a way that it can update its own executable code. If and when AI is given that ability, all bets are off. But it will be a very brave, stupid, and reckless designer who gives AI that ability. For then the AI is truly beyond human control. Once Pandora's box is open, it cannot then be closed again...

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 29th, 2024, 9:00 am
by Pattern-chaser
Sy Borg wrote: October 28th, 2024, 3:38 pm AI is obviously intelligent - you ask it questions and it answers appropriately.
AI is obviously not intelligent - you ask it questions, and it answers as it has been programmed to. It is impossible for it to do otherwise.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 29th, 2024, 9:11 am
by Steve3007
It's interesting to skim through the replies in this topic. A common theme seems to be the difference between the state of affairs right now and the likely state in the future, given current trends. Perhaps it is because of the present-tense wording of the original question that some people seem to take the view that no, AI is not intelligent now and that it's impossible to say what will happen in the future. That seems to me an odd attitude. Of course it's impossible to know with certainty what will happen in the future, but it's always possible to make predictions based on currently existing trends. We couldn't live without the ability to do that.

Of course, my view of what will happen with AI in the future depends on the continued existence of human life with the continued development of the relevant technologies towards ever greater complexity. If that stops then clearly nothing happens. But if it doesn't stop I think the development of manufactured objects with genuine intelligence, emotions, feelings, consciousness, sentience, etc is highly probable. My views there appear to be similar to those of Sy Borg and Lagayascienza.

I think the only way to consistently hold the view that those features could never exist in a manufactured object is to be some form of philosophical dualist. Or at least, to not be a philosophical materialist. As far as I can see, that is the only way to rationally hold the view that there is something in the structure of things like human brains that is forever beyond the reach of manufactured structures. You'd have to believe in the existence of some kind of non-material spirit or soul or whatever and you'd have to decree that this spirit/soul stuff cannot ever exist in manufactured structures but can only exist in naturally evolved biological structures. Possibly you might believe, as many do, that it only exists specifically in humans.

Also, unless you believe that the human brain is literally infinitely complex (a very non-materialistic view to hold, since infinity is an abstract concept) then I think, if you're rational, you must take the view that a manufactured object could be equally complex at a finite amount of time in the future. The human brain is, for sure, extremely complex. But if it has a large but finite amount of complexity, and if manufactured objects can be made to increase in complexity with time, then I see no logical way to deny that such objects could be as complex as human brains a finite time into the future.


Incidentally, for the past year I've been doing a masters degree in AI. That's one reason why I didn't post here for a while, as I was quite busy juggling that with my job. A lot of what you learn on the course is a more rigorous and in-depth look at aspects of AI that most interested casual readers are already aware of. I think most people who are interested are already aware of the general principles on which artificial neural networks operate. So I don't think the course necessarily gives the student much greater philosophical insight into the subject. But it is fun to play with different ANN architectures. If you're willing to learn a bit of the Python programming language, you can create a Google Colab account and use the AI library Keras to start designing neural networks pretty quickly. I recommend giving it a try!
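To give a flavour of how quickly you can get going, here's roughly what a first script looks like (a minimal sketch of my own, assuming the TensorFlow 2.x runtime that Colab provides; the layer sizes are arbitrary choices, not anything canonical):

import tensorflow as tf

# Load the classic MNIST handwritten-digit dataset; scale pixels to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small feed-forward network: 784 inputs -> 128 hidden neurons -> 10 outputs
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)  # training adjusts the weights
model.evaluate(x_test, y_test)         # accuracy on digits it has never seen

Twenty-odd lines, and it typically classifies handwritten digits with around 97% accuracy after a few minutes of training.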

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 29th, 2024, 10:08 am
by Steve3007
Pattern-chaser wrote: AI has not yet been designed to be able to "self-improve". It has not yet been programmed in such a way that it can update its own executable code. If and when AI is given that ability, all bets are off. But it will be a very brave, stupid, and reckless designer who gives AI that ability. For then the AI is truly beyond human control. Once Pandora's box is open, it cannot then be closed again...
A computer program that can modify its own code is a pretty trivial thing to write and I'm sure it's been done many times before. I don't see anything about self-modifying code per se that makes it revolutionary or dangerous.
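To illustrate the trivial sense I mean, here's a toy of my own devising (nothing to do with any real AI system): a script that rewrites its own source file every time it runs.

import re

RUN_COUNT = 0  # this literal is rewritten by the program itself on each run

def bump_run_count():
    with open(__file__) as f:
        src = f.read()
    # Replace the first occurrence of the literal, i.e. the assignment above
    src = re.sub(r"RUN_COUNT = \d+", f"RUN_COUNT = {RUN_COUNT + 1}", src, count=1)
    with open(__file__, "w") as f:
        f.write(src)

print(f"This program has modified itself {RUN_COUNT} time(s) so far.")
bump_run_count()

Each run leaves behind a slightly different program from the one that started. Trivial, as I say, and not obviously any more dangerous than a program that writes to a config file.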

On the subject of self-modification in AI: I'd say that modifying the weights in the neurons is a similar idea, and of course neural networks modify their own neurons in order to learn. You might say that so long as the code which describes the design of the neurons themselves is not self-modifying then the NN can't do anything genuinely creative, or something like that. But to me that's like saying that so long as a human being can't modify the operation of the laws of physics which describe the way our bodies and brains work, we can't do anything creative. I'd disagree.
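To make the weight-modification point concrete, here's about the simplest possible case (a sketch using the classic perceptron learning rule on the logical AND function; all the numbers are arbitrary choices of mine):

import random

# Training data for logical AND: inputs and desired outputs
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # the neuron's weights
bias = random.uniform(-1, 1)
rate = 0.1  # learning rate

for epoch in range(100):
    for (x1, x2), target in data:
        output = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0  # step activation
        error = target - output
        # The "self-modification": the neuron adjusts its own weights
        w[0] += rate * error * x1
        w[1] += rate * error * x2
        bias += rate * error

for (x1, x2), _ in data:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0)

Nothing in the executable code changes; only the numbers w and bias do. Scale that idea up to millions of weights and you have a modern network's learning.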

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 29th, 2024, 10:32 am
by Pattern-chaser
Steve3007 wrote: October 29th, 2024, 9:11 am Incidentally, for the past year I've been doing a masters degree in AI.
That's handy for us, then! 😃 I have 40 years of experience designing software, and a professional awareness of AI and of the progress of AI software. But nothing specific or detailed.

What do you think about allowing AI to modify its own programming? Do you think that would be wise?

Ooo, it seems you've replied while I was writing this post:
Pattern-chaser wrote: AI has not yet been designed to be able to "self-improve". It has not yet been programmed in such a way that it can update its own executable code. If and when AI is given that ability, all bets are off. But it will be a very brave, stupid, and reckless designer who gives AI that ability. For then the AI is truly beyond human control. Once Pandora's box is open, it cannot then be closed again...
Steve3007 wrote: October 29th, 2024, 10:08 am A computer program that can modify its own code is a pretty trivial thing to write and I'm sure it's been done many times before. I don't see anything about self-modifying code per se that makes it revolutionary or dangerous.
Agreed. But the programs you are speaking of do not have the potential to achieve what AI might in the future. They are simple, contained programs; they have to be. Otherwise, the consequences of their existence could be as serious as those of autonomous AI. Self-modifying code is not testable: not all possibilities can be tested, and their operation confirmed, because there are too many possibilities to test. So we would have to release some potentially world-rocking code without a clue as to what might happen.

If we were unlucky, an undiscovered bug might upset the apple-cart. And that is nothing (directly) to do with AI or self-modifying code.


Steve3007 wrote: October 29th, 2024, 10:08 am On the subject of self-modification in AI: I'd say that modifying the weights in the neurons is a similar idea, and of course neural networks modify their own neurons in order to learn. You might say that so long as the code which describes the design of the neurons themselves is not self-modifying then the NN can't do anything genuinely creative, or something like that. But to me that's like saying that so long as a human being can't modify the operation of the laws of physics which describe the way our bodies and brains work, we can't do anything creative. I'd disagree.
Neural networks do approach some sort of autonomy, I think. As you say, they can 'learn', and modify their "neurons" accordingly. If that kind of autonomy was programmed into AI, with connections to the internet, and (e.g.) power distribution infrastructure, and so on, then the possibilities are... endless. And not all of those possibilities benefit humanity. Autonomous AI is no longer under human control. This opens the way for a sci-fi horror, "We've built a monster!!!" 😱😭

It may not turn out that way, of course. But we released dingoes into Australia's ecosystem, and we exploded nuclear fission bombs without a clue as to the consequences of releasing all that radiation, and the deadly-poisonous radioactive by-products, into our environment. Our history gives good reason to be nervous, and cautious too, I think.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 29th, 2024, 12:05 pm
by Steve3007
Pattern-chaser wrote:That's handy for us, then! 😃 I have 40 years of experience designing software, and a professional awareness of AI, and the progress of AI software. But nothing specific or detailed.
Maybe a bit handy, but perhaps not quite as handy as you might think. As I said, one of the things I've taken away from studying the subject formally is that it doesn't necessarily give you much more insight into the philosophical issues around the subject than you'd get from reading about it informally, if you already have some programming background.

I learnt a bit about the electro-chemical workings of biological neurons, then learnt about the way that artificial neurons are designed, how they're put together in networks, numerous different kinds of networks and different uses of AI. Learnt how to create a simple neural network from scratch, then how to use existing libraries (Keras, Tensorflow) to build more complex NNs, to do the various kinds of things that they're currently used for and the various aspects of real brain function that some of them seek to emulate. Chose a dissertation subject to research and got quite deeply into that. etc.

But despite all that, the deepest philosophical question ("Is it possible in principle to manufacture a conscious entity?") is not something that you cover much in an AI MSc course. At least not this one. There was a "philosophy of AI" module on offer, and I would have chosen it, but they withdrew it. Possibly because the university (of Kent) decided to shut down their philosophy department this year! (Don't get me started on that one!)
Pattern-chaser wrote: October 29th, 2024, 10:32 am Agreed. But the programs you are speaking of do not have the potential to achieve what AI might in the future. They are simple, contained programs; they have to be. Otherwise, the consequences of their existence could be as serious as those of autonomous AI. Self-modifying code is not testable: not all possibilities can be tested, and their operation confirmed, because there are too many possibilities to test. So we would have to release some potentially world-rocking code without a clue as to what might happen.
The existence of too many possibilities to practicably test is not a particular characteristic of software that can modify its own executable code. It applies to any software of extreme complexity. Complex deep neural networks (large networks with one or more hidden layers of neurons between the input and output layers) take in vast quantities of data and perform vast numbers of calculations on that data, in order to propagate it forward through the network according to the weights on the inputs to the neurons, and to propagate adjustments to those weights back through the network. They are "black boxes" - non-deterministic for all practical purposes - because of that complexity, not because they modify their own executable code. And there is randomness at the heart of the whole system. All adjustments to weights, and decisions as to whether a given neuron should fire a signal to the next neuron, are probabilistic. Of course, those probabilistic calculations are based on the generation of pseudo-random numbers. But, as I said in a previous post, there's no reason why those pseudo-random number algorithms couldn't be replaced by the output from some kind of quantum event - making them as genuinely random as anything in the universe can be said to be.
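As an aside, that substitution is easy to picture, because Python's standard library already offers a non-deterministic source: random.SystemRandom draws from the operating system's entropy pool rather than running the Mersenne Twister algorithm. A hardware quantum source would slot in the same way (a sketch; SystemRandom is real, the quantum device is the hypothetical part):

import random

prng = random.Random(42)       # Mersenne Twister: deterministic, seedable
osrng = random.SystemRandom()  # OS entropy pool: seeding is ignored

print([round(prng.random(), 3) for _ in range(3)])   # identical on every run
print([round(osrng.random(), 3) for _ in range(3)])  # different on every run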

So I don't think it's self-modifying code that's the issue when it comes to trying to predict what these things will do. It's extreme complexity mixed with a big dose of randomness. That's the main takeaway from studying the subject - the vast complex arrays of data being processed, with huge quantities of computing power. Hence the sudden huge market for GPUs and the seven-fold increase in Nvidia's share price.

Running large neural networks on Google Colab is interesting because it makes it easy to compare running on a CPU with running on various kinds of GPU. The speed increase is amazing. One of the neural networks I designed for my dissertation project took about 40 minutes to train when run on a CPU and something like 10 seconds on one of the GPUs. I guess it would have been difficult to predict many years ago that the mathematics of 3D graphics (matrix transformations), and the public's love of 3D games, would result in hardware that would benefit AI, because it relies on the same matrix/tensor mathematics. But I guess the history of scientific/technological advances is filled with these unforeseen crossovers.
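The comparison is easy to reproduce if anyone is curious (a rough sketch, assuming a Colab runtime with a GPU attached; the exact numbers will vary a lot, and the first GPU call includes some warm-up cost):

import time
import tensorflow as tf

a = tf.random.normal((4000, 4000))
b = tf.random.normal((4000, 4000))

for device in ["/CPU:0", "/GPU:0"]:
    with tf.device(device):
        start = time.time()
        product = tf.matmul(a, b)
        _ = product.numpy()  # force the computation to actually finish
    print(device, round(time.time() - start, 3), "seconds")

Matrix multiplication is exactly the operation that both 3D graphics and neural networks hammer, which is why the same silicon serves both.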

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 29th, 2024, 12:31 pm
by Steve3007
Pattern-chaser wrote:So we would have to release some potentially world-rocking code without a clue as to what might happen.
Putting aside my quibbling with you about the importance or otherwise of self-modifying code, we could talk generally about software whose behaviour is, for all practical purposes, unpredictable, whether that's due to self-modification or extreme complexity mixed with randomness or whatever. And yes, that seems on the face of it like a disturbing thing. A large part of my day job (and I think it used to be part of yours too) is trying to design software that is predictable, because it's a tool for doing a job, and we want tools to behave in the same way each time we use them in the same way. But when you're seeking to design something that emulates some aspects of the way creative beings like humans act, you don't necessarily want complete predictability. Human behaviour isn't entirely predictable. But it isn't entirely random and unpredictable either. It's complex.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 29th, 2024, 2:05 pm
by Sculptor1
Count Lucanor wrote: October 28th, 2024, 10:44 am
Lagayascienza wrote: October 26th, 2024, 1:23 am None of the above is to say that there are not important architectural and processing differences between biological computers and non-biological computers. For a good article and commentary about these differences see "10 Important Differences Between Brains and Computers" at Science Blogs.

There definitely are some important differences in size and complexity and processing but, as one commentator said, none of those differences prove that computers cannot eventually be built that could house sentience. We are certainly nowhere near being able to build computers with brain-like complexity housed in a sensate body which could do everything a human could do. But the differences in our current, comparatively simple, non-biological computers do not demonstrate that it is impossible to eventually construct sentient, intelligent computers.
The expression of a common fallacy: “if something has not been proven to be false, then there’s a hint that it is true”. OTOH, if something has not been proven to be true, then it has not been proven to be true. And if something has been proven to be false, then it is false. To my understanding, it has been proven that the statement “AI is intelligent” is false. Also, “the mind is a digital computer” is false.
I basically concur, except to say that the truth level of the last two statements is mitigated by a sort of convenience of usage.
1) Clearly AI uses the word "intelligent". So the idea that artificial intelligence is not intelligent might be somewhat incongruous until you actually think about what we mean by the term "intelligent", and
2) The idea that you can employ the analogy of a digital computer to help describe the workings of intelligence has its uses.
So in the same way that energy balance, calorie intake and storage can employ the analogy of a fridge (glycogen) and freezer (body fat), so too can we talk about software/hardware, RAM and ROM as proxies for long-term and short-term memory - even though the human system of consciousness has nothing of the kind.

In that all language is metaphor, such devices are necessary, though not sufficient, to get our full understanding.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 29th, 2024, 3:28 pm
by Sy Borg
Pattern-chaser wrote: October 29th, 2024, 8:58 am
Sy Borg wrote: October 27th, 2024, 4:38 pm How does your claim about emergence stack up with the emergence of biology?
Pattern-chaser wrote: October 28th, 2024, 11:38 am I'm not quite sure what you're asking. I think emergence can only occur in something capable of change, yes? Current AI is incapable of changing. Humans can impose change upon them, but that's not the same. AIs are currently unable to evolve because they can't change. In the future, I suppose that may change...
Sy Borg wrote: October 28th, 2024, 3:11 pm Pay attention! You are a programmer and speaking as if you are unaware of the approaching singularity - the point where AI designs better AI than humans can.
And you are not a software designer, so I understand your ignorance. AI cannot currently design anything. It can be used as a design tool, just as a compiler can, but that is a figurative light-year away from AIs themselves doing the 'designing'.
Actually, I have coded (at an elementary level) in machine language, BASIC and JavaScript, and I have also worked in UAT, trying to fix an absolute beast of a legal application, designed by lawyers, with all the unnecessary detail that that situation entails. So I am not unfamiliar with the concepts, and your claim about my "ignorance" was both unwarranted and incorrect.

Further, your claim is wrong.
Pattern-chaser wrote: October 29th, 2024, 8:58 am But you continue to ignore the point I have made many times about AI:
Sy Borg wrote: October 28th, 2024, 3:11 pm I will assume you had a brain glitch and that you are well aware that AI will reach a level where the AI it builds will be more sophisticated than the AI that humans can build. Once that happens, exponential development would seem possible.

Are you confident that you can predict the outcome of that exponential process? Autonomous self-improving AI seems likely to start a whole new round of evolution. Rocks and chemistry evolved to the point that made biology possible, and biology has evolved to a point that has made whatever AI will become possible.
AI has not yet been designed to be able to "self-improve". It has not yet been programmed in such a way that it can update its own executable code. If and when AI is given that ability, all bets are off. But it will be a very brave, stupid, and reckless designer who gives AI that ability. For then the AI is truly beyond human control. Once Pandora's box is open, it cannot then be closed again...
If we are to do any serious work in space, autonomous self-improving robots will be essential. As Steve said, work is already being done to that end:
In recent developments that are nothing short of groundbreaking, Google DeepMind has unveiled a revolutionary advancement known as "Promptbreeder (PB): Self-referential Self-Improvement through Accelerated Evolution." This innovation represents a significant leap in the world of Artificial Intelligence (AI), as it enables AI models to evolve and improve themselves at a pace billions of times faster than human evolution.
https://newo.ai/the-evolution-of-self-i ... -learning/

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: October 29th, 2024, 3:39 pm
by Count Lucanor
Steve3007 wrote: October 29th, 2024, 9:11 am
I think the only way to consistently hold the view that those features could never exist in a manufactured object is to be some form of philosophical dualist. Or at least, to not be a philosophical materialist. As far as I can see, that is the only way to rationally hold the view that there is something in the structure of things like human brains that is forever beyond the reach of manufactured structures. You'd have to believe in the existence of some kind of non-material spirit or soul or whatever and you'd have to decree that this spirit/soul stuff cannot ever exist in manufactured structures but can only exist in naturally evolved biological structures. Possibly you might believe, as many do, that it only exists specifically in humans.
First, that's a false dilemma. As Searle once noted, this argument (considering its context) implies that the question of whether the brain is a physical mechanism that determines mental states is exactly the same question as whether the brain is a digital computer. But they are not the same question, so while the latter should be answered with a NO, the former should be answered with a YES. That means one can deny that computational theory solves the problem of intelligence, while at the same time keeping the door closed to any dualism of the sort you're talking about. Secondly, even though trying to emulate brain operation stays within the problem of emulating a physical system, human technical capabilities are not infinite, so we can't predict that it will happen. Now, if researchers committed to achieving that result were focused on that goal, even if they had to discard trending approaches that do not actually work so as to try other technologies, we could at least hope that they will achieve it some day. But the fact is that they're only trying the path set by Turing and others, that is, the path of the computational theory of mind. That path is a dead end; it doesn't take us where it promises.