Steve3007 wrote: ↑October 29th, 2024, 9:11 am
Incidentally, for the past year I've been doing a masters degree in AI.
That's handy for us, then!
I have 40 years of experience designing software, and a professional awareness of AI and the progress of AI software, but nothing specific or detailed.
What do you think about allowing AI to modify its own programming? Do you think that would be wise?
Ooo, it seems you've replied while I was writing this post:
Pattern-chaser wrote: AI has not yet been designed to be able to "self-improve". It has not yet been programmed in such a way that it can update its own executable code. If and when AI is given that ability, all bets are off. But it will be a very brave, stupid, and reckless designer who gives AI that ability. For then they are truly beyond human control. Once Pandora's box is open, it cannot then be closed again...
Steve3007 wrote: ↑October 29th, 2024, 10:08 am
A computer program that can modify its own code is a pretty trivial thing to write and I'm sure it's been done many times before. I don't see anything about self-modifying code per se that makes it revolutionary or dangerous.
Agreed. But the programs you are speaking of don't have the potential to achieve what AI might in the future. They are simple, contained programs; they have to be. Otherwise, the consequences of their existence could be as serious as those of autonomous AI. Self-modifying code is not exhaustively testable: there are far too many possible modifications for all of them to be tested and their operation confirmed. So we would have to release some potentially world-rocking code without a clue as to what might happen. If we were unlucky, an undiscovered bug might upset the apple-cart. And that has nothing (directly) to do with AI or self-modifying code.
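(As an aside: Steve is quite right that the bare mechanism is trivial. Here is a minimal sketch in Python, entirely my own toy example and nothing from this thread, of a script that rewrites its own source file each time it runs. The RUN_COUNT constant is my own invention, purely for illustration.)

```python
# A minimal sketch of "trivial" self-modifying code: a script that
# rewrites its own source file each time it runs.
import re

RUN_COUNT = 0  # rewritten in place by the script itself on each run

def bump_own_counter() -> None:
    with open(__file__, encoding="utf-8") as f:
        source = f.read()
    # Substitute the literal assignment above with an incremented one.
    source = re.sub(r"RUN_COUNT = \d+", f"RUN_COUNT = {RUN_COUNT + 1}",
                    source, count=1)
    with open(__file__, "w", encoding="utf-8") as f:
        f.write(source)

if __name__ == "__main__":
    print(f"This script has rewritten itself {RUN_COUNT} time(s).")
    bump_own_counter()
```

Run it a few times and the constant embedded in the file ticks upward. Trivial, contained, and testable. The worry, as above, is what happens when none of those three adjectives apply.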
Steve3007 wrote: ↑October 29th, 2024, 10:08 am
On the subject of self-modification in AI: I'd say that modifying the weights in the neurons is a similar idea, and of course neural networks modify their own neurons in order to learn. You might say that so long as the code which describes the design of the neurons themselves is not self-modifying then the NN can't do anything genuinely creative, or something like that. But to me that's like saying that so long as a human being can't modify the operation of the laws of physics which describe the way our bodies and brains work, we can't do anything creative. I'd disagree.
Neural networks do approach some sort of autonomy, I think. As you say, they can 'learn', and modify their "neurons" accordingly. If that kind of autonomy were programmed into AI, with connections to the internet, and (e.g.) power distribution infrastructure, and so on, then the possibilities are... endless. And not all of those possibilities benefit humanity. Autonomous AI is no longer under human control. This opens the way for a sci-fi horror: "We've built a monster!!!"
☠
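(A footnote to the above: the weight-modification Steve describes is the bog-standard learning update, nothing exotic in itself. A minimal sketch, my own toy example in plain NumPy with made-up numbers, of a single neuron "modifying its own weights" by gradient descent:)

```python
# A toy single neuron adjusting its own weights by gradient descent
# on a mean-squared error. Illustrative only; all values are made up.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))           # 100 samples, 3 inputs each
true_w = np.array([1.5, -2.0, 0.5])     # the relationship to learn
y = x @ true_w                          # target outputs

w = np.zeros(3)                         # the neuron's weights
lr = 0.1                                # learning rate
for _ in range(200):
    pred = x @ w
    grad = 2 * x.T @ (pred - y) / len(y)   # gradient of the MSE
    w -= lr * grad                      # the "self-modification" step

print(np.round(w, 3))                   # approaches [1.5, -2.0, 0.5]
```

The danger isn't in the arithmetic; it's in what a system wired to real infrastructure might do with it.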
The sci-fi horror may never materialise, of course. But we released dingoes into Australia's ecosystem, and we exploded nuclear fission bombs without a clue as to the consequences of releasing all that radiation, and those deadly poisonous radioactive by-products, into our environment. Our history gives good reason to be nervous, and cautious too, I think.