
Re: How would you Design a Humanoid ?

Posted: May 21st, 2022, 11:29 am
by Pattern-chaser
SteveKlinko wrote: May 21st, 2022, 10:19 am
Pattern-chaser wrote: May 21st, 2022, 9:16 am
SteveKlinko wrote: May 21st, 2022, 9:05 am Notice how they have to put the word "Scary" into the title.
If they are referring to machines that can modify their own code, and can therefore modify themselves, perhaps beyond recognition, then yes, that is scary. It's scary because it's unpredictable, and could result in things that we did not intend, and do not find desirable: scary.

N.B. This scariness assumes that the machines in question are sufficiently connected, and thereby influential in the world, that they could possibly take actions that we would consider scary.
The machines are not modifying their own code so much as just modifying Numbers in Memory for weighting the Neural Net. The fact that they can do it much faster now for the initial Configuration of the Neural Net is the Cool thing. The fact that Neural Nets can be set up to continuously Reconfigure with new data is nothing new. Recursive error-reducing Algorithms have been around for Hundreds of years. It used to be done through manual calculations. Nothing Scary, nothing New, just better Pattern Matching. All that Neural Nets do is perform a Pattern Matching function. No big Scary Intelligence is Emerging from any Technological or Software Singularity as popularized by the Snake Oil book writers.
I'm not a snake-oil book-writer, I'm a retired software designer. If the network is sharply and clearly defined, and the network itself can only modify coefficients, there should be no appreciable risk. But if the network itself can/could be reconfigured, the consequences are unpredictable ... and scary.
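
For illustration only, a minimal Python sketch (the class names and structure here are assumptions of this sketch, not anything from the thread) of the distinction being drawn: a network whose topology is fixed at design time and which may only adjust its coefficients, versus one that is also allowed to change its own structure at runtime.

    class FixedTopologyNet:
        """Topology decided at design time; learning only changes the numbers."""
        def __init__(self, weights):
            self.weights = list(weights)      # coefficients: free to change

        def learn(self, adjustments):
            # weight-only learning stays inside the envelope the designer analysed
            for i, delta in enumerate(adjustments):
                self.weights[i] += delta


    class SelfReconfiguringNet(FixedTopologyNet):
        """Additionally allowed to grow new structure while running."""
        def add_nodes(self, new_weights):
            # the structure itself changes, so design-time analysis no longer bounds it
            self.weights.extend(new_weights)

In the first case the set of possible behaviours is fixed when the net is designed; in the second it is not, which is the source of the claimed unpredictability.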

Re: How would you Design a Humanoid ?

Posted: May 21st, 2022, 2:46 pm
by SteveKlinko
Pattern-chaser wrote: May 21st, 2022, 11:29 am
SteveKlinko wrote: May 21st, 2022, 10:19 am
Pattern-chaser wrote: May 21st, 2022, 9:16 am
SteveKlinko wrote: May 21st, 2022, 9:05 am Notice how they have to put the word "Scary" into the title.
If they are referring to machines that can modify their own code, and can therefore modify themselves, perhaps beyond recognition, then yes, that is scary. It's scary because it's unpredictable, and could result in things that we did not intend, and do not find desirable: scary.

N.B. This scariness assumes that the machines in question are sufficiently connected, and thereby influential in the world, that they could possibly take actions that we would consider scary.
The machines are not modifying their own code so much as just modifying Numbers in Memory for weighting the Neural Net. The fact that they can do it much faster now for the initial Configuration of the Neural Net is the Cool thing. The fact that Neural Nets can be set up to continuously Reconfigure with new data is nothing new. Recursive error-reducing Algorithms have been around for Hundreds of years. It used to be done through manual calculations. Nothing Scary, nothing New, just better Pattern Matching. All that Neural Nets do is perform a Pattern Matching function. No big Scary Intelligence is Emerging from any Technological or Software Singularity as popularized by the Snake Oil book writers.
I'm not a snake-oil book-writer, I'm a retired software designer. If the network is sharply and clearly defined, and the network itself can only modify coefficients, there should be no appreciable risk. But if the network itself can/could be reconfigured, the consequences are unpredictable ... and scary.
You mean by adding more layers or nodes to the Network? That would just entail more Numbers to Configure. Nothing Scary. It would be new Data coming in that will Reconfigure the Net. The effects of this new Data would be completely knowable and predictable if computed by hand. I think it is the Speed, and that it probably could not really be computed by hand in any kind of usable timeframe, that is freaking people out. Computers have always been using new Data that is not known at Design Time and then setting Numbers in Memory to perform some kind of Adaptive Algorithm. There is nothing Scary about the fact that Programmers have never known what Data will be input to the Machine at Runtime. It is impossible to test for every sequence and value of every number that it will have to deal with. But it is pretty remarkable how much can be done with just better and better Pattern Matching.
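
As a purely illustrative sketch of the "recursive error-reducing" weight updates described above (the toy data, learning rate and epoch count are arbitrary choices for this example, not anything from the thread), in Python:

    import random

    # toy "pattern" to be matched: y = 2*x1 + 3*x2
    data = [((1.0, 2.0), 8.0), ((2.0, 1.0), 7.0),
            ((0.5, 0.5), 2.5), ((3.0, 2.0), 12.0)]

    w = [random.uniform(-1, 1), random.uniform(-1, 1)]   # the Numbers in Memory
    lr = 0.01                                            # learning rate

    for epoch in range(2000):
        for (x1, x2), target in data:
            pred = w[0] * x1 + w[1] * x2     # forward pass
            err = pred - target              # error to be reduced
            w[0] -= lr * err * x1            # nudge the stored numbers
            w[1] -= lr * err * x2

    print(w)   # ends up near [2.0, 3.0]

Each pass is ordinary arithmetic on stored numbers and could, in principle, be done by hand; the practical difference is only speed.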

Re: How would you Design a Humanoid ?

Posted: May 21st, 2022, 3:06 pm
by EricPH
SteveKlinko wrote: May 21st, 2022, 2:46 pm The effects of this new Data would be completely knowable and predictable if computed by hand.
Monkeys have hands, so how would that work?
I think it is the Speed, and that it probably could not really be computed by hand in any kind of usable timeframe, that is freaking people out.
How many billions of years would monkeys need if they are doing it by hand?

Re: How would you Design a Humanoid ?

Posted: May 21st, 2022, 5:49 pm
by AverageBozo
EricPH wrote: May 21st, 2022, 3:06 pm
SteveKlinko wrote: May 21st, 2022, 2:46 pm The effects of this new Data would be completely knowable and predictable if computed by hand.
Monkeys have hands, so how would that work?
I think it is the Speed, and that it probably could not really be computed by hand in any kind of usable timeframe, that is freaking people out.
How many billions of years would monkeys need if they are doing it by hand?
C’mon. Where Steve wrote “by hand” read “by human hand”. Or were you joking?

Re: How would you Design a Humanoid ?

Posted: May 21st, 2022, 6:40 pm
by UniversalAlien
SteveKlinko wrote: May 21st, 2022, 2:46 pm
Pattern-chaser wrote: May 21st, 2022, 11:29 am
SteveKlinko wrote: May 21st, 2022, 10:19 am
Pattern-chaser wrote: May 21st, 2022, 9:16 am

If they are referring to machines that can modify their own code, and can therefore modify themselves, perhaps beyond recognition, then yes, that is scary. It's scary because it's unpredictable, and could result in things that we did not intend, and do not find desirable: scary.

N.B. This scariness assumes that the machines in question are sufficiently connected, and thereby influential in the world, that they could possibly take actions that we would consider scary.
The machines are not modifying their own code so much as just modifying Numbers in Memory for weighting the Neural Net. The fact that they can do it much faster now for the initial Configuration of the Neural Net is the Cool thing. The fact that Neural Nets can be set up to continuously Reconfigure with new data is nothing new. Recursive error-reducing Algorithms have been around for Hundreds of years. It used to be done through manual calculations. Nothing Scary, nothing New, just better Pattern Matching. All that Neural Nets do is perform a Pattern Matching function. No big Scary Intelligence is Emerging from any Technological or Software Singularity as popularized by the Snake Oil book writers.
I'm not a snake-oil book-writer, I'm a retired software designer. If the network is sharply and clearly defined, and the network itself can only modify coefficients, there should be no appreciable risk. But if the network itself can/could be reconfigured, the consequences are unpredictable ... and scary.
You mean by adding more layers or nodes to the Network? That would just entail more Numbers to Configure. Nothing Scary. It would be new Data coming in that will Reconfigure the Net. The effects of this new Data would be completely knowable and predictable if computed by hand. I think it is the Speed, and that it probably could not really be computed by hand in any kind of usable timeframe, that is freaking people out. Computers have always been using new Data that is not known at Design Time and then setting Numbers in Memory to perform some kind of Adaptive Algorithm. There is nothing Scary about the fact that Programmers have never known what Data will be input to the Machine at Runtime. It is impossible to test for every sequence and value of every number that it will have to deal with. But it is pretty remarkable how much can be done with just better and better Pattern Matching.

Could Artificial Intelligence ever Surpass Humans?

Apr 5, 2022

See video here:
https://youtu.be/nIHGoJ3kYJE?list=UUI8g ... ILLPBrExMA
AI News
30.7K subscribers
The battle between artificial intelligence and human intelligence has been going on for a while now, and AI is clearly coming very close to beating humans in many areas, partly due to improvements in neural network hardware and also in machine learning algorithms. This video goes over whether and how humans could soon be surpassed by artificial general intelligence.
-----
Every day is a day closer to the Technological Singularity. Experience Robots learning to walk & think, humans flying to Mars and us finally merging with technology itself. And as all of that happens, we at AI News cover the absolute cutting edge best technology inventions of Humanity.
-----
TIMESTAMPS:
00:00 Is AGI actually possible?
01:11 What is Artificial General Intelligence?
03:34 What are the problems with AGI?
05:43 The Ethics behind Artificial Intelligence
08:03 Last Words

Re: How would you Design a Humanoid ?

Posted: May 21st, 2022, 8:53 pm
by Sy Borg
Pattern-chaser wrote: May 21st, 2022, 7:12 am
Sy Borg wrote: May 16th, 2022, 8:04 pm Once reason is abandoned, there can be only war - be it physical, political or social. When emotion conquers reason, there can be no discussion, no working through issues, only hostility and the destruction of one's enemies. I like to think that reflexive, mindless lunacy can be overcome.

...

Pattern-chaser wrote: May 19th, 2022, 10:46 am Is black 'better' than white? Is up 'better' than down? Is yin actually meaningful without yang?

Rationality and emotion work together to do what we do. Neither is 'superior' - both are essential. Imbalance between the two, though, is ... not optimal. I believe this latter sentence to express Sy Borg's position? 🤔
Sy Borg wrote: May 19th, 2022, 8:12 pm I have been telling you for days now that balance is optimal - and that society is a long way from balancing intellect and emotions at present. If you re-read my posts - in context - you will see that, right from the start, I was obviously seeking balance. It's obvious that emotions are what lies behind motivation, at least at this stage.
Yes, balance is optimal. But your presentation seems to have been more extreme than that. You refer to reason being "abandoned", not 'out of balance'; you describe how emotion "conquers" reason; you trash emotion, describing it as "reflexive, mindless lunacy". This is neither balanced nor optimal. But yes, balance is optimal, and a worthwhile aim.
No, I pointed out the lunacy to bring into sharp relief the tendency towards extreme emotionality that conquers reason. I obviously would not present emotion per se in that way. Why would I? My intent was always clear. Without emotion there is no motivation, thus no functionality. With too much emotion there is chaotic motivation, thus poor functionality.

Re: How would you Design a Humanoid ?

Posted: May 22nd, 2022, 5:43 am
by UniversalAlien
Sy Borg wrote: May 21st, 2022, 8:53 pm
Pattern-chaser wrote: May 21st, 2022, 7:12 am
Sy Borg wrote: May 16th, 2022, 8:04 pm Once reason is abandoned, there can be only war - be it physical, political or social. When emotion conquers reason, there can be no discussion, no working through issues, only hostility and the destruction of one's enemies. I like to think that reflexive, mindless lunacy can be overcome.

...

Pattern-chaser wrote: May 19th, 2022, 10:46 am Is black 'better' than white? Is up 'better' than down? Is yin actually meaningful without yang?

Rationality and emotion work together to do what we do. Neither is 'superior' - both are essential. Imbalance between the two, though, is ... not optimal. I believe this latter sentence to express Sy Borg's position? 🤔
Sy Borg wrote: May 19th, 2022, 8:12 pm I have been telling you for days now that balance is optimal - and that society is a long way from balancing intellect and emotions at present. If you re-read my posts - in context - you will see that, right from the start, I was obviously seeking balance. It's obvious that emotions are what lies behind motivation, at least at this stage.
Yes, balance is optimal. But your presentation seems to have been more extreme than that. You refer to reason being "abandoned", not 'out of balance'; you describe how emotion "conquers" reason; you trash emotion, describing it as "reflexive, mindless lunacy". This is neither balanced nor optimal. But yes, balance is optimal, and a worthwhile aim.
No, I pointed out the lunacy to bring into sharp relief the tendency towards extreme emotionality that conquers reason. I obviously would not present emotion per se in that way. Why would I? My intent was always clear. Without emotion there is no motivation, thus no functionality. With too much emotion there is chaotic motivation, thus poor functionality.
What I have to ask, as you debate the purpose and functions of Human emotion, is whether emotion is in fact a prime, a necessity?

Remember we are talking about "How would you Design a Humanoid ?" - Does a Humanoid, defined here as an artificial {completely created by Man} life form, really need emotion? Even in normal biological Humans, is emotion necessary for the organism to function?

If I am going to design a Humanoid, do I really want it to have emotions? Ethics and a sense of right and wrong, maybe, but why make it emotional?

Sometimes I think that emotions, like religious fanaticism, are a biological defect, an evolutionary mistake that leads otherwise sane Humans to kill, destroy, and strangle the lives of otherwise innocent Humans - such as women in the United States being hounded by 'womb sniffers' who believe, and do so quite emotionally, that it is God's will that they save each and every unborn, and often unwanted, baby!

Of course there are other cases - but would this emotional regression occur if Humans could free themselves from their often sick emotions? And before Man can evolve, either biologically or through advanced AI, doesn't the emotional baggage need to be removed?

Re: How would you Design a Humanoid ?

Posted: May 22nd, 2022, 8:00 am
by SteveKlinko
UniversalAlien wrote: May 21st, 2022, 6:40 pm
SteveKlinko wrote: May 21st, 2022, 2:46 pm
Pattern-chaser wrote: May 21st, 2022, 11:29 am
SteveKlinko wrote: May 21st, 2022, 10:19 am
The machines are not modifying their own code so much as just modifying Numbers in Memory for weighting the Neural Net. The fact that they can do it much faster now for the initial Configuration of the Neural Net is the Cool thing. The fact that Neural Nets can be set up to continuously Reconfigure with new data is nothing new. Recursive error-reducing Algorithms have been around for Hundreds of years. It used to be done through manual calculations. Nothing Scary, nothing New, just better Pattern Matching. All that Neural Nets do is perform a Pattern Matching function. No big Scary Intelligence is Emerging from any Technological or Software Singularity as popularized by the Snake Oil book writers.
I'm not a snake-oil book-writer, I'm a retired software designer. If the network is sharply and clearly defined, and the network itself can only modify coefficients, there should be no appreciable risk. But if the network itself can/could be reconfigured, the consequences are unpredictable ... and scary.
You mean by adding more layers or nodes to the Network? That would just entail more Numbers to Configure. Nothing Scary. It would be new Data coming in that will Reconfigure the Net. The effects of this new Data would be completely knowable and predictable if computed by hand. I think it is the Speed, and that it probably could not really be computed by hand in any kind of usable timeframe, that is freaking people out. Computers have always been using new Data that is not known at Design Time and then setting Numbers in Memory to perform some kind of Adaptive Algorithm. There is nothing Scary about the fact that Programmers have never known what Data will be input to the Machine at Runtime. It is impossible to test for every sequence and value of every number that it will have to deal with. But it is pretty remarkable how much can be done with just better and better Pattern Matching.

Could Artificial Intelligence ever Surpass Humans?

Apr 5, 2022

See video here:
https://youtu.be/nIHGoJ3kYJE?list=UUI8g ... ILLPBrExMA
AI News
30.7K subscribers
The battle between artificial intelligence and human intelligence has been going on for a while now, and AI is clearly coming very close to beating humans in many areas, partly due to improvements in neural network hardware and also in machine learning algorithms. This video goes over whether and how humans could soon be surpassed by artificial general intelligence.
-----
Every day is a day closer to the Technological Singularity. Experience Robots learning to walk & think, humans flying to Mars and us finally merging with technology itself. And as all of that happens, we at AI News cover the absolute cutting edge best technology inventions of Humanity.
-----
TIMESTAMPS:
00:00 Is AGI actually possible?
01:11 What is Artificial General Intelligence?
03:34 What are the problems with AGI?
05:43 The Ethics behind Artificial Intelligence
08:03 Last Words
A good video. But it gives the impression that Consciousness itself can be programmed into Machines. They actually don't know what they are talking about, and I think they add the Consciousness thing to keep people interested. They don't know what they are talking about because nobody knows what Consciousness actually is. I think they claimed that Consciousness (an Unknown) will be programmed into Machines if only the right Algorithms could be developed for the Neural Nets. The deceptive part is that they hint there is actually an approach to even doing this. That is Pure Science Fiction Fantasy. Ok, let's just say it: They are Lying.

Re: How would you Design a Humanoid ?

Posted: May 22nd, 2022, 9:01 am
by Pattern-chaser
Pattern-chaser wrote: May 21st, 2022, 11:29 am I'm not a snake-oil book-writer, I'm a retired software designer. If the network is sharply and clearly defined, and the network itself can only modify coefficients, there should be no appreciable risk. But if the network itself can/could be reconfigured, the consequences are unpredictable ... and scary.

SteveKlinko wrote: May 21st, 2022, 2:46 pm You mean by adding more layers or nodes to the Network? That would just entail more Numbers to Configure. Nothing Scary. It would be new Data coming in that will Reconfigure the Net. The effects of this new Data would be completely knowable and predictable if computed by hand.
It doesn't work that way. By adding new "layers" or "nodes" to the network, you change its function.

By the way, if you add nodes to a network, you also add connections (between them). The function/operation of the network is a sort of synthesis of the two. Adding to the function of the network - i.e. changing the function of the network - is much, much more than just "more Numbers to Configure". And the main problem with AI networks, and such matters, is that sometimes we can't see or understand the pitfalls in our suppositions. If a network is able to change its own function, it is unpredictable and, potentially, Very Scary Indeed.
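
A minimal sketch of that point about added nodes (Python, hand-picked weights, purely illustrative): a single threshold unit cannot compute XOR for any choice of weights, but adding one hidden layer of two nodes, plus their connections, makes XOR computable, so the extra nodes change what the network can do rather than just adding more numbers to configure.

    def step(x):
        return 1 if x >= 0 else 0

    def single_layer(x1, x2, w1, w2, b):
        # one output node, no hidden layer
        return step(w1 * x1 + w2 * x2 + b)

    def with_hidden_layer(x1, x2):
        a = step(x1 + x2 - 0.5)     # hidden node A: OR
        b = step(x1 + x2 - 1.5)     # hidden node B: AND
        return step(a - b - 0.5)    # output: A AND NOT B  ->  XOR

    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print((x1, x2), with_hidden_layer(x1, x2))   # 0, 1, 1, 0

No choice of (w1, w2, b) makes single_layer reproduce that table, because XOR is not linearly separable; the capability appears only once the extra nodes and connections exist.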

Re: How would you Design a Humanoid ?

Posted: May 22nd, 2022, 9:09 am
by SteveKlinko
Pattern-chaser wrote: May 22nd, 2022, 9:01 am
Pattern-chaser wrote: May 21st, 2022, 11:29 am I'm not a snake-oil book-writer, I'm a retired software designer. If the network is sharply and clearly defined, and the network itself can only modify coefficients, there should be no appreciable risk. But if the network itself can/could be reconfigured, the consequences are unpredictable ... and scary.

SteveKlinko wrote: May 21st, 2022, 2:46 pm You mean by adding more layers or nodes to the Network? That would just entail more Numbers to Configure. Nothing Scary. It would be new Data coming in that will Reconfigure the Net. The effects of this new Data would be completely knowable and predictable if computed by hand.
It doesn't work that way. By adding new "layers" or "nodes" to the network, you change its function.

By the way, if you add nodes to a network, you also add connections (between them). The function/operation of the network is a sort of synthesis of the two. Adding to the function of the network - i.e. changing the function of the network - is much, much more than just "more Numbers to Configure". And the main problem with AI networks, and such matters, is that sometimes we can't see or understand the pitfalls in our suppositions. If a network is able to change its own function, it is unpredictable and, potentially, Very Scary Indeed.
If we are talking about Neural Nets then adding more nodes does not change any functionality other than making the Pattern Matching more efficient or more precise. What functions are you talking about? Neural Nets only do Pattern Matching. Whatever higher functionality the Neural Nets are used for, the fact remains that the essence of what the Neural Net does is always Pattern Matching. Maybe you can change the higher-level code to change the higher-level functionality, but adding nodes to the Neural Net only affects Pattern Matching resolution and efficiency.

Re: How would you Design a Humanoid ?

Posted: May 22nd, 2022, 9:43 am
by Pattern-chaser
SteveKlinko wrote: May 22nd, 2022, 9:09 am If we are talking about Neural Nets then adding more nodes does not change any functionality...
Forgive me for asking, but do you have any familiarity at all with networks in general, or of AI neural nets in particular? Have you ever programmed such a thing? Have you ever been involved in the design of such a thing? You write like an informed hobbyist, although I truly intend no insult when I say that. Your understanding of networks seems ... incomplete, am I right?

Re: How would you Design a Humanoid ?

Posted: May 22nd, 2022, 1:15 pm
by SteveKlinko
Pattern-chaser wrote: May 22nd, 2022, 9:43 am
SteveKlinko wrote: May 22nd, 2022, 9:09 am If we are talking about Neural Nets then adding more nodes does not change any functionality...
Forgive me for asking, but do you have any familiarity at all with networks in general, or of AI neural nets in particular? Have you ever programmed such a thing? Have you ever been involved in the design of such a thing? You write like an informed hobbyist, although I truly intend no insult when I say that. Your understanding of networks seems ... incomplete, am I right?
You are wrong. I actually was involved in configuring Neural Nets at one point in my career as a Multi Disciplinary Research Engineer, Philosopher, and Adventurer. Nice try to invalidate me. I don't know everything about everything, so what is it that you think I don't know?

Re: How would you Design a Humanoid ?

Posted: May 22nd, 2022, 1:32 pm
by Pattern-chaser
SteveKlinko wrote: May 22nd, 2022, 9:09 am If we are talking about Neural Nets then adding more nodes does not change any functionality...
Pattern-chaser wrote: May 22nd, 2022, 9:43 am Forgive me for asking, but do you have any familiarity at all with networks in general, or of AI neural nets in particular? Have you ever programmed such a thing? Have you ever been involved in the design of such a thing? You write like an informed hobbyist, although I truly intend no insult when I say that. Your understanding of networks seems ... incomplete, am I right?
SteveKlinko wrote: May 22nd, 2022, 1:15 pm You are wrong. ... Nice try to invalidate me.
Wow, you live in a dark world. I wasn't trying to prove you wrong, I was wondering where your understanding of networks, and their operation, came from.


SteveKlinko wrote: May 22nd, 2022, 1:15 pm I actually was involved in configuring Neural Nets at one point in my career as a Multi Disciplinary Research Engineer, Philosopher, and Adventurer. ... what is it that you think I don't know?
How networks function. It sounds like you have configured networks that someone else has designed and built, just as someone might create a spreadsheet using the program (Excel or similar) that someone else designed and implemented.

When you add nodes to a network, you aren't just making it 'the same but bigger'. You are introducing the possibility of computation(s) that were not possible without your additions. The change is large and radical, not just a magnification of what was there before.



For clarity: I am not an expert in networks, neural or otherwise. But I did spend a lifetime designing software of all types, and also kept up-to-date with my peers and my profession. So I have read articles in the trade press, and conversed with colleagues (at trade events, and the like) who did/do have this expertise.

Re: How would you Design a Humanoid ?

Posted: May 22nd, 2022, 2:43 pm
by SteveKlinko
Pattern-chaser wrote: May 22nd, 2022, 1:32 pm
SteveKlinko wrote: May 22nd, 2022, 9:09 am If we are talking about Neural Nets then adding more nodes does not change any functionality...
Pattern-chaser wrote: May 22nd, 2022, 9:43 am Forgive me for asking, but do you have any familiarity at all with networks in general, or of AI neural nets in particular? Have you ever programmed such a thing? Have you ever been involved in the design of such a thing? You write like an informed hobbyist, although I truly intend no insult when I say that. Your understanding of networks seems ... incomplete, am I right?
SteveKlinko wrote: May 22nd, 2022, 1:15 pm You are wrong. ... Nice try to invalidate me.
Wow, you live in a dark world. I wasn't trying to prove you wrong, I was wondering where your understanding of networks, and their operation, came from.


SteveKlinko wrote: May 22nd, 2022, 1:15 pm I actually was involved in configuring Neural Nets at one point in my career as a Multi Disciplinary Research Engineer, Philosopher, and Adventurer. ... what is it that you think I don't know?
How networks function. It sounds like you have configured networks that someone else has designed and built, just as someone might create a spreadsheet using the program (Excel or similar) that someone else designed and implemented.

When you add nodes to a network, you aren't just making it 'the same but bigger'. You are introducing the possibility of computation(s) that were not possible without your additions. The change is large and radical, not just a magnification of what was there before.



For clarity: I am not an expert in networks, neural or otherwise. But I did spend a lifetime designing software of all types, and also kept up-to-date with my peers and my profession. So I have read articles in the trade press, and conversed with colleagues (at trade events, and the like) who did/do have this expertise.
Very funny. You say I write like an informed hobbyist. In the context of the discussion, you are accusing me of being a Hobbyist when it comes to Engineering. That sounds like an attempt at Invalidation to me. I don't think that was a compliment. If it was, then Thank You for the compliment. I thought my title would give you a laugh, but I guess you missed the absurdity of the whole thing. But I was once bid on a contract as a Multidisciplinary Research Engineer.

Re: How would you Design a Humanoid ?

Posted: May 22nd, 2022, 9:08 pm
by Sy Borg
UniversalAlien wrote: May 22nd, 2022, 5:43 am What I have to ask, as you debate the purpose and functions of Human emotion, is whether emotion is in fact a prime, a necessity?
That is a question for Pattern-Chaser, not me.