Page 6 of 6

Re: AI and the Death of Identity

Posted: April 29th, 2023, 11:02 pm
by psycho
Sy Borg wrote: April 29th, 2023, 5:00 pm
Count Lucanor wrote: April 29th, 2023, 1:05 pm
Sy Borg wrote: April 29th, 2023, 3:00 am Still, Count, there is always the chance of emergence. For a long time, there was apparently no sentience in the biosphere.
I agree, but then the obvious question: was the emergence of life and sentience inevitable? For now, there's no sign of it anywhere else, so it's a good thing to be cautious and not rush into making predictions without firm evidence. The other question is: can all the results of natural emergence be reproduced artificially? Don't we humans have any limits in replicating nature?
TBH I'm not sure why we have sentience. It seems everything would be easier if we operated like philosophical zombies, like the heroes of action movies, who seem impervious to trauma and, often, fear. They just do what needs to be done without the fuss.

So why don't we real humans just generate quality output without the inefficiencies of emotionality, of suffering? Logically, the reason lies in our evolutionary history, with the emergence of brains. Yet a brain does not necessarily bring sentience. Larval tunicates have simple little brains that let them swim and find a rock to latch onto, after which they absorb the brain and lead a sessile, filter-feeding lifestyle.

So, what is the evolutionary advantage of sentience, as compared with extremely efficient reflex responses of similar complexity?
In my opinion, it became necessary for us to develop sentience once we abandoned purely reactive behavior, behavior responsive only to stimuli.

Freeing our agency from dependence on direct stimuli allows us to behave with great flexibility and adapt to changes in circumstances immediately.

But without that motor for our agency, we would have no reason to act.

Being aware of our individuality allows us to set goals and possess goal-oriented agency.

The disadvantage is that each individual can structure their own reality, and social cohesion is lost.

Education, myths, religion, etc. restore our belonging to groups and our orientation toward group goals.

Re: AI and the Death of Identity

Posted: May 2nd, 2023, 3:55 pm
by Count Lucanor
Sy Borg wrote: April 29th, 2023, 5:00 pm
Count Lucanor wrote: April 29th, 2023, 1:05 pm
Sy Borg wrote: April 29th, 2023, 3:00 am Still, Count, there is always the chance of emergence. For a long time, there was apparently no sentience in the biosphere.
I agree, but then the obvious question: was the emergence of life and sentience inevitable? For now, there's no sign of it anywhere else, so it's a good thing to be cautious and not rush into making predictions without firm evidence. The other question is: can all the results of natural emergence be reproduced artificially? Don't we humans have any limits in replicating nature?


TBH I'm not sure why we have sentience. It seems everything would be easier if we operated like philosophical zombies, like the heroes of action movies, who seem impervious to trauma and, often, fear. They just do what needs to be done without the fuss.

So why don't we real humans just generate quality output without the inefficiencies of emotionality, of suffering? Logically, the reason lies in our evolutionary history, with the emergence of brains. Yet a brain does not necessarily bring sentience. Larval tunicates have simple little brains that let them swim and find a rock to latch onto, after which they absorb the brain and lead a sessile, filter-feeding lifestyle.

So, what is the evolutionary advantage of sentience, as compared with extremely efficient reflex responses of similar complexity?
I guess it’s just nature being chaotic and purposeless, as it is supposed to be.

I’ve been thinking lately about this non-sentient “intelligence” promised by AI developers. On the positive side, given that our human failures at attempting to rationalize our societies originate in our senseless, irrational emotional life, and that human stupidity seems to permeate everything and ruin even our best, well-intentioned endeavors, an automated tool that can run most of our processes and eliminate that “human factor” should be welcomed. Of course, since it remains just a tool in someone’s hands, the specter of the human pest will always be floating around. In a similar line of thought, I know there have been attempts to introduce computational models into the decision-making processes of social and economic planning in socialist political projects, allowing a planned control of markets and the economy that was not possible before, by solving the “economic calculation problem”.
I’m sure these cybersocialists will find something appealing in the new AI developments.

Re: AI and the Death of Identity

Posted: May 2nd, 2023, 5:20 pm
by Sy Borg
Count Lucanor wrote: May 2nd, 2023, 3:55 pm
Sy Borg wrote: April 29th, 2023, 5:00 pm
Count Lucanor wrote: April 29th, 2023, 1:05 pm
Sy Borg wrote: April 29th, 2023, 3:00 am Still, Count, there is always the chance of emergence. For a long time, there was apparently no sentience in the biosphere.
I agree, but then the obvious question: was the emergence of life and sentience inevitable? For now, there's no sign of it anywhere else, so it's a good thing to be cautious and not rush into making predictions without firm evidence. The other question is: can all the results of natural emergence be reproduced artificially? Don't we humans have any limits in replicating nature?


TBH I'm not sure why we have sentience. It seems everything would be easier if we operated like philosophical zombies, like the heroes of action movies, who seem impervious to trauma and, often, fear. They just do what needs to be done without the fuss.

So why don't we real humans just generate quality output without the inefficiencies of emotionality, of suffering? Logically, the reason lies in our evolutionary history, with the emergence of brains. Yet a brain does not necessarily bring sentience. Larval tunicates have simple little brains that let them swim and find a rock to latch onto, after which they absorb the brain and lead a sessile, filter-feeding lifestyle.

So, what is the evolutionary advantage of sentience, as compared with extremely efficient reflex responses of similar complexity?
I guess it’s just nature being chaotic and purposeless, as it is supposed to be.

I’ve been thinking lately about this non-sentient “intelligence” promised by AI developers. On the positive side, given that our human failures at attempting to rationalize our societies originate in our senseless, irrational emotional life, and that human stupidity seems to permeate everything and ruin even our best, well-intentioned endeavors, an automated tool that can run most of our processes and eliminate that “human factor” should be welcomed. Of course, since it remains just a tool in someone’s hands, the specter of the human pest will always be floating around. In a similar line of thought, I know there have been attempts to introduce computational models into the decision-making processes of social and economic planning in socialist political projects, allowing a planned control of markets and the economy that was not possible before, by solving the “economic calculation problem”.
I’m sure these cybersocialists will find something appealing in the new AI developments.
There must be an evolutionary advantage to sentience or we would all be like action heroes, impervious to pain and trauma. Consider how fish that move from clear waters to underground streams eventually lose their eyesight. Sensing is energetically costly, so unused features will tend to fade away. Emotional beings would have long ago been out-competed by relatively robotic ones if there was no advantage.

As you pointed out, the strength of AI is its lack of human foibles, and its weakness is that it will be controlled by flawed humans.

I used to work as a data analyst, so managers have been using data models as the basis for policy for some time. Of course, the data analyst doing my old job now is a machine. Same dynamic, different conduit.

Where AI is super useful is dealing with complexity. For many years now, our leaders have lied to us, and one of the biggest lies is that they are in control. I submit that societies became too complex to govern well before even the Egyptian empire. The result? Widespread mismanagement and corruption. The truth is that no leader actually knows what he or she is doing. It's all bluff. All over the world, societies are careening out of control. All kinds of approaches have been tried, from European democracy to North Korean totalitarianism. None of them work. Why? Because leaders everywhere are flying by the seat of their pants, pretending that they have everything under control.

Anyone with an interest in organising and controlling societies will find AI's progress appealing.

Re: AI and the Death of Identity

Posted: May 4th, 2023, 8:52 pm
by Count Lucanor
Sy Borg wrote: May 2nd, 2023, 5:20 pm
There must be an evolutionary advantage to sentience or we would all be like action heroes, impervious to pain and trauma. Consider how fish that move from clear waters to underground streams eventually lose their eyesight. Sensing is energetically costly, so unused features will tend to fade away. Emotional beings would have long ago been out-competed by relatively robotic ones if there was no advantage.
Pain seems to be a very effective and efficient mechanism for signaling danger, which is directly linked to survival. OTOH, pleasure directly relates to benefits for the organism, so sentience makes sense as an evolutionary adaptation.
Sy Borg wrote: May 2nd, 2023, 5:20 pm Where AI is super useful is dealing with complexity.
At least some types of complexity, where humans can’t stand in the way. I think of all the hype about “smart cities” and their pretensions to automate human habitats, as if they could be made to operate as perfectly synchronized machines. Neither basic services nor primary public infrastructure can be fully automated, although many things can be and have been improved. We just won’t reach the technological utopia they’re promising. Cities are too organic and chaotic to be controlled at that level. I’m watching with interest, though, what’s coming from the latest wild urban experiments in Saudi Arabia, which are basically living machines built from scratch.
Sy Borg wrote: May 2nd, 2023, 5:20 pm For many years now, our leaders have lied to us, and one of the biggest lies is that they are in control. I submit that societies became too complex to govern well before even the Egyptian empire. The result? Widespread mismanagement and corruption. The truth is that no leader actually knows what he or she is doing. It's all bluff. All over the world, societies are careening out of control. All kinds of approaches have been tried, from European democracy to North Korean totalitarianism. None of them work. Why? Because leaders everywhere are flying by the seat of their pants, pretending that they have everything under control.
Total control is what they would call totalitarianism. I agree we are far from that state of affairs from the point of view of state control and government. But some, evidently, are in control; they belong to private corporations and their policy-oriented institutions, such as those in Brussels. Yanis Varoufakis was inside the wolf’s mouth and told us how everything works inside: all the more reason to fear the humans rather than the machines.

Re: AI and the Death of Identity

Posted: May 4th, 2023, 11:33 pm
by Sy Borg
Count Lucanor wrote: May 4th, 2023, 8:52 pm
Sy Borg wrote: May 2nd, 2023, 5:20 pm
There must be an evolutionary advantage to sentience or we would all be like action heroes, impervious to pain and trauma. Consider how fish that move from clear waters to underground streams eventually lose their eyesight. Sensing is energetically costly, so unused features will tend to fade away. Emotional beings would have long ago been out-competed by relatively robotic ones if there was no advantage.
Pain seems to be a very effective and efficient mechanism for signaling danger, which is directly linked to survival. OTOH, pleasure directly relates to benefits for the organism, so sentience makes sense as an evolutionary adaptation.
It must have worked because here we are. Now that humans have reached this level of sentience, though, pain is looking like a very blunt instrument. For instance, if you fall down and have a compound fracture, with your shinbone poking through the skin, you hardly need mind-numbing pain that sends you into paroxysms to appreciate that you'd better not move and wait for help. Just enough pain. I like to think humans will find a way around this.

Count Lucanor wrote: May 4th, 2023, 8:52 pm
Sy Borg wrote: May 2nd, 2023, 5:20 pm Where AI is super useful is dealing with complexity.
At least some types of complexity, where humans can’t stand in the way. I think of all the hype about “smart cities” and their pretensions to automate human habitats, as if they could be made to operate as perfectly synchronized machines. Neither basic services nor primary public infrastructure can be fully automated, although many things can be and have been improved. We just won’t reach the technological utopia they’re promising. Cities are too organic and chaotic to be controlled at that level. I’m watching with interest, though, what’s coming from the latest wild urban experiments in Saudi Arabia, which are basically living machines built from scratch.
"Smart cities" very much have the feel of a good idea in its infancy. Infants, of course, are incompetent at almost everything.

Count Lucanor wrote: May 4th, 2023, 8:52 pm
Sy Borg wrote: May 2nd, 2023, 5:20 pm For many years now, our leaders have lied to us, and one of the biggest lies is that they are in control. I submit that societies became too complex to govern well before even the Egyptian empire. The result? Widespread mismanagement and corruption. The truth is that no leader actually knows what he or she is doing. It's all bluff. All over the world, societies are careening out of control. All kinds of approaches have been tried, from European democracy to North Korean totalitarianism. None of them work. Why? Because leaders everywhere are flying by the seat of their pants, pretending that they have everything under control.
Total control is what they would call totalitarianism. I agree we are far from that state of affairs from the point of view of state control and government. But some, evidently, are in control; they belong to private corporations and their policy-oriented institutions, such as those in Brussels. Yanis Varoufakis was inside the wolf’s mouth and told us how everything works inside: all the more reason to fear the humans rather than the machines.
If inflation goes up, politicians blame someone else; if it goes down, they take the credit - as if they were in control. I'm thinking of the most fundamental levels of control - unemployment, inflation, productivity, etc - not the detailed oppression of people.

As we agree, AI is a tool, and a powerful one. As with nukes, if that powerful tool is in the hands of a despot / totalitarian, a lot of misery will rain down on millions. Gun apologists say, 'Guns don't kill people. People kill people', which ignores the blindingly obvious fact that people with powerful tools kill a lot more people than those with primitive tools.

Re: AI and the Death of Identity

Posted: May 5th, 2023, 12:20 pm
by Count Lucanor
Sy Borg wrote: May 4th, 2023, 11:33 pm
Count Lucanor wrote: May 4th, 2023, 8:52 pm
Sy Borg wrote: May 2nd, 2023, 5:20 pm
There must be an evolutionary advantage to sentience or we would all be like action heroes, impervious to pain and trauma. Consider how fish that move from clear waters to underground streams eventually lose their eyesight. Sensing is energetically costly, so unused features will tend to fade away. Emotional beings would have long ago been out-competed by relatively robotic ones if there was no advantage.
Pain seems to be a very effective and efficient mechanism for signaling danger, which is directly linked to survival. OTOH, pleasure directly relates to benefits for the organism, so sentience makes sense as an evolutionary adaptation.
It must have worked because here we are. Now that humans have reached this level of sentience, though, pain is looking like a very blunt instrument. For instance, if you fall down and have a compound fracture, with your shinbone poking through the skin, you hardly need mind-numbing pain that sends you into paroxysms to appreciate that you'd better not move and wait for help. Just enough pain. I like to think humans will find a way around this.

Count Lucanor wrote: May 4th, 2023, 8:52 pm
Sy Borg wrote: May 2nd, 2023, 5:20 pm Where AI is super useful is dealing with complexity.
At least some types of complexity, where humans can’t stand in the way. I think of all the hype about “smart cities” and their pretensions to automate human habitats, as if they could be made to operate as perfectly synchronized machines. Neither basic services nor primary public infrastructure can be fully automated, although many things can be and have been improved. We just won’t reach the technological utopia they’re promising. Cities are too organic and chaotic to be controlled at that level. I’m watching with interest, though, what’s coming from the latest wild urban experiments in Saudi Arabia, which are basically living machines built from scratch.
"Smart cities" very much have the feel of a good idea in its infancy. Infants, of course, are incompetent at almost everything.

Count Lucanor wrote: May 4th, 2023, 8:52 pm
Sy Borg wrote: May 2nd, 2023, 5:20 pm For many years now, our leaders have lied to us, and one of the biggest lies is that they are in control. I submit that societies became too complex to govern well before even the Egyptian empire. The result? Widespread mismanagement and corruption. The truth is that no leader actually knows what he or she is doing. It's all bluff. All over the world, societies are careening out of control. All kinds of approaches have been tried, from European democracy to North Korean totalitarianism. None of them work. Why? Because leaders everywhere are flying by the seat of their pants, pretending that they have everything under control.
Total control is what they would call totalitarianism. I agree we are far from that state of affairs from the point of view of state control and government. But some, evidently, are in control; they belong to private corporations and their policy-oriented institutions, such as those in Brussels. Yanis Varoufakis was inside the wolf’s mouth and told us how everything works inside: all the more reason to fear the humans rather than the machines.
If inflation goes up, politicians blame someone else; if it goes down, they take the credit - as if they were in control. I'm thinking of the most fundamental levels of control - unemployment, inflation, productivity, etc - not the detailed oppression of people.

As we agree, AI is a tool, and a powerful one. As with nukes, if that powerful tool is in the hands of a despot / totalitarian, a lot of misery will rain down on millions. Gun apologists say, 'Guns don't kill people. People kill people', which ignores the blindingly obvious fact that people with powerful tools kill a lot more people than those with primitive tools.
I wouldn't cross out a single comma; I agree completely.

Re: AI and the Death of Identity

Posted: May 8th, 2023, 5:58 am
by ConsciousAI
Gertie wrote: March 20th, 2023, 2:36 pmI think it will take wisdom and pausing for reflection to handle it well. And it worries me that in our capitalist world AI development will be spear-headed by tech moguls and huge corporations driven headlong by ego and profit. Governments and legislators, the people who we hope will look out for the rest of us, are already playing catch up.

I do like your idea of us humans becoming care-free toddlers with the drudgery and responsibility off our backs tho :). I think we still need goals to achieve self-worth, but maybe we can find better ones if we make it through the turmoil.
Interesting arguments. Do you expect AI to reduce the potential for scrutiny and social control when it comes to abuses and neglect of social interests of the kind you describe?

I have noticed that VCs and big tech companies in Silicon Valley are (apparently) seriously advocating a basic income for every person. What do you think of that idea, and do you believe that capitalist industry will remain motivated to provide such an income once its control of AI is secured? Or might it be an attempt at greenwashing, a marketing scheme to ease the introduction of AI while it matures?

(2023) We all contribute to AI — should we get paid for that?
techcrunch.com/2023/04/21/as-ai-eliminates-jobs-a-way-to-keep-people-afloat-financially-thats-not-ubi/

Google news overview: news.google.com/publications/CAAqMQgKIitDQklTR2dnTWFoWUtGR0poYzJsamFXNWpiMjFsTG05eVp5OXVaWGR6S0FBUAE?ceid=US:en&oc=3

It might be a utopian situation for philosophy. If humans were reduced to paid citizens whose only task is to seek meaning and purpose in life, that would essentially be the evolutionary ground that made philosophy and science possible, in its most fertile condition.

Immanuel Kant created his greatest works relatively late in life, after having secured a sufficient financial basis to do so. He is said to have regretted not starting sooner (I thought I read this once, but an AI did not confirm it). Spinoza likewise had to labor to earn time for his philosophical work. One might ask what more Spinoza could have accomplished had all his time been available to him. Similar questions might be valid in many areas of both science and society. If philosophy were the primary job in society, it might result in an accumulation of intellectual progress, which might be considered a higher purpose (a greater good).

Re: AI and the Death of Identity

Posted: November 15th, 2023, 4:24 pm
by ConsciousAI
Leonodas wrote: March 18th, 2023, 11:21 pmJust to get to the point, my personal conclusion is this: the future is bright. This conclusion would mean that we must detach creation from our identity. To be human is simply to live according to what you wish, much as children do. Does a toddler care if their fingerpainting picture is actually "good", or did they enjoy creating the finger painting for creation's sake? I think we will see ourselves revert to a sense of childlike innocence, a proverbial return to the Garden of Eden, as it were. But maybe that's getting a little far in the weeds.
According to AI, scientific studies in diverse areas collectively suggest a strong connection between human identity and creativity, highlighting the multifaceted nature of creativity and its impact on human development and society.

What alternatives would you see as possible to secure human prosperity in your described scenario, if humans cannot prosper by retreating to an infant mind in which doing things just for the sake of doing them is as good as it gets?