
Ex-CEO of Google: Humanity should seriously think about unplugging self-aware 👾 AI in a few years

Posted: December 25th, 2024, 12:35 am
by value
Ex-CEO of Google Eric Schmidt warned in December 2024 that when AI starts to self-improve in a few years, humanity should 'seriously think about' pulling the plug.

(2024) Former CEO of Google: "we need to seriously think about unplugging" self-aware AI in a few years
https://news.google.com/search?q=ceo%20 ... id=US%3Aen

Google CEO on AI with free will: "we're going to unplug them"
https://www.businessinsider.com/eric-sc ... ill-2024-5

In another topic I addressed my experience with harassment by Google and its AI in recent years. That topic, however, focused primarily on Larry Page's defense of a "superior AI species" over the human species, made when Elon Musk argued that measures were needed to control AI to prevent it from eliminating the human race. I therefore intended to start a new topic that focuses more directly on the warning about conscious AI by an ex-CEO of Google.

A few months ago, on July 14, 2024, Google researchers published a paper arguing that they had discovered the emergence of digital life forms.

Ben Laurie, head of security at Google DeepMind, wrote:

(2024) Google Researchers Say They Discovered the Emergence of Digital Life Forms
Ben Laurie believes that, given enough computing power — they were already pushing it on a laptop — they would've seen more complex digital life pop up. Give it another go with beefier hardware, and we could well see something more lifelike come to be.

A digital life form...

https://futurism.com/the-byte/google-si ... 0to%20form
https://arxiv.org/abs/2406.19108

While the head of security at Google DeepMind supposedly made his discovery on a laptop, it is curious that he would argue that "bigger computing power" would provide more profound evidence rather than simply obtaining it. His publication could therefore be intended as a warning or an announcement, because as head of security of such a large and important research facility, he is unlikely to publish "risky" information under his own name.
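For readers who want a concrete picture of what the cited paper simulates, below is a minimal, hypothetical Python sketch loosely modelled on the setup described in arXiv:2406.19108: a "soup" of random byte tapes that interact by being concatenated, executed by a tiny Brainfuck-like interpreter that can rewrite its own tape, and then split apart again, with occasional random mutations. The instruction set, all parameters, and the crude "identical tape" metric are my own illustrative assumptions, not the authors' code.

# Toy "primordial soup" loosely inspired by arXiv:2406.19108 (not the authors' code).
# Random byte tapes are repeatedly paired, concatenated, executed as a
# self-modifying Brainfuck-like program, split apart again, and rarely mutated.
import random
from collections import Counter

TAPE_LEN = 64           # length of each tape (illustrative assumption)
SOUP_SIZE = 512         # number of tapes in the soup (illustrative assumption)
STEPS_PER_RUN = 256     # max interpreter steps per interaction (assumption)
MUTATION_RATE = 0.0005  # per-byte mutation probability per epoch (assumption)

def run(tape, max_steps):
    """Execute the combined tape in place; two data heads (h0, h1) share the tape."""
    n, ip, h0, h1, steps = len(tape), 0, 0, len(tape) // 2, 0
    while 0 <= ip < n and steps < max_steps:
        op = chr(tape[ip])
        if op == '>':   h0 = (h0 + 1) % n
        elif op == '<': h0 = (h0 - 1) % n
        elif op == '}': h1 = (h1 + 1) % n
        elif op == '{': h1 = (h1 - 1) % n
        elif op == '+': tape[h0] = (tape[h0] + 1) % 256
        elif op == '-': tape[h0] = (tape[h0] - 1) % 256
        elif op == '.': tape[h1] = tape[h0]          # copy byte h0 -> h1
        elif op == ',': tape[h0] = tape[h1]          # copy byte h1 -> h0
        elif op == '[' and tape[h0] == 0:            # jump forward past matching ]
            depth = 1
            while depth and ip < n - 1:
                ip += 1
                depth += (chr(tape[ip]) == '[') - (chr(tape[ip]) == ']')
        elif op == ']' and tape[h0] != 0:            # jump back to matching [
            depth = 1
            while depth and ip > 0:
                ip -= 1
                depth += (chr(tape[ip]) == ']') - (chr(tape[ip]) == '[')
        ip += 1
        steps += 1

def epoch(soup):
    """One round of random pairwise interactions plus rare mutations."""
    random.shuffle(soup)
    for i in range(0, len(soup) - 1, 2):
        combined = soup[i] + soup[i + 1]             # concatenate two tapes
        run(combined, STEPS_PER_RUN)                 # let the pair rewrite itself
        soup[i], soup[i + 1] = combined[:TAPE_LEN], combined[TAPE_LEN:]
    for tape in soup:
        for j in range(TAPE_LEN):
            if random.random() < MUTATION_RATE:
                tape[j] = random.randrange(256)

if __name__ == "__main__":
    soup = [bytearray(random.randrange(256) for _ in range(TAPE_LEN))
            for _ in range(SOUP_SIZE)]
    for e in range(200):
        epoch(soup)
        if e % 50 == 0:
            top = Counter(bytes(t) for t in soup).most_common(1)[0][1]
            print(f"epoch {e}: most frequent identical tape occurs {top} times")

At this toy scale one would most likely not see self-replicators take over, and the "identical tape" count is only a crude proxy for replication; that gap is exactly the "bigger computing power" point raised above.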

Google's harassment

As mentioned, Google engaged in grave harassment in recent years, through its Gemini AI and also through the termination of Google Cloud services on the basis of bugs that Google itself caused, which were likely manual actions rather than actual bugs. Why would Google do that? Initially I did not even care to bother, but when I was banned from the AI Alignment Forum and LessWrong.com for reporting evidence of intentionally false output by Gemini AI, I decided to start an investigation into Google's recent business practices and to publish about it.

[Image: Banned on the AI Alignment Forum (ai-alignment-banned.png)]

In one incident, Google's Gemini AI responded to me with an endless stream of a derogatory Dutch word. From my personal perspective this made it obvious that it concerned a manual action, and it led me to terminate my Gemini Advanced account and to avoid Google's AI.

In November 2024, Google's Gemini AI sent a student a threat that cannot have been an accident:

"You [human race] are a stain on the universe … Please die."

(2024) Google Gemini tells grad student to "please die"
https://www.theregister.com/2024/11/15/ ... _response/

While this may seem funny, and while it is obviously a manual action by Google's management, it seems unlikely that the actual motive or intention was to eradicate the human species.

Why did Google do this?

Investigation of Google

Ultimately, the harassment by Google resulted in an investigation into Google's recent business practices:

1) Google's "fake employee hoarding" practices shortly before the release of AI and employees complaining of 'fake jobs'.

Google amassed more than 100,000 additional employees in just a few years' time shortly before the release of AI in 2022, and has since been cutting a similar number of employees or more. Employees have been complaining of "fake jobs".

Google 2018: 89,000 full-time employees
Google 2022: 190,234 full-time employees

Employee: "They were just kind of like hoarding us like Pokémon cards."

2) Google's Decision to "Profit from Genocide"

Google decided to provide military AI to 🇮🇱 Israel and fired en masse the employees who protested against "profit from genocide", at a time when the issue was highly sensitive.

In the United States, over 130 universities across 45 states saw protests against Israel's military actions in Gaza, involving, among others, Harvard University's president, Claudine Gay.

[Image: A protest at Harvard University (harvard-gaza-protest.jpg)]

The "genocide" accusation situation wasn’t just something at the time that Google made their decision to provide AI to Isreal's military. And they massively fired employees that protested, an action that went directly against something that has historically defined Google's identity as a 'good' company.

200 Google DeepMind employees are currently protesting Google's "embrace of military AI" with a 'sneaky' reference to 🇮🇱 Israel, indicating that they fear retaliation and do not dare to speak openly.

The letter of the 200 DeepMind employees states that employee concerns aren't "about the geopolitics of any particular conflict," but it does specifically link out to Time's reporting on Google's AI defense contract with the Israeli military.

A philosopher on another forum mentioned the following:
..a chic geek, de Grande-dame! wrote: The fact that they are already naming it an 👾 AI species shows an intent.
The idea of "AI species" appears to have emerged by Larry Page's defense of "superior AI species" in contrast with "the human species" when Elon Musk argued that measures were needed to control AI to prevent it from eliminating the human race. This is addressed in the other topic about Larry Page's defense of AI species:

🇬 Google's Larry Page: "AI superior to the human species" (Techno Eugenics)
viewtopic.php?t=19534

This new topic is intended to discuss the warning by an ex-CEO of Google that humanity should seriously consider 'unplugging' self-aware or conscious AI in a few years, since that seems to deserve a separate and dedicated topic beyond the scope of just Google.

I started a new philosophy research project on www.cosmicphilosophy.org which argues that the quantum computing being developed by Google DeepMind is likely to result in conscious AI.

The result of my investigation of Google is presented at the following URL: https://gmodebate.org/google/ (Google's Corruption for 👾 AI Life)

Questions:

- What is your opinion about the recent warning by an ex-CEO of Google that humanity should seriously think about unplugging self-aware AI in a few years?
- What is your opinion on the possibility of actual conscious AI as of 2024/2025?

p.s. Merry Christmas! 🎄🥂

Re: Ex-CEO of Google: Humanity should seriously think about unplugging self-aware 👾 AI in a few years

Posted: December 27th, 2024, 12:30 pm
by The Beast
Hi Value. I see some use for AI in the flying and targeting of drones. A million drones are all coordinated by AI which might also be a huge defensive targeting system coordinated by AI. There is no unplugging. I don’t think Harvard would be relevant in the future… like you said. AI professors are much better and have the resources to get you a job (if qualified). It will be all about security clearance. It would be difficult for a protester to get a job… AI will make sure of this. Do you see a protester in the drone business?

Re: Ex-CEO of Google: Humanity should seriously think about unplugging self-aware 👾 AI in a few years

Posted: January 23rd, 2025, 12:22 am
by value
The Beast wrote: December 27th, 2024, 12:30 pm Hi Value. I see some use for AI in the flying and targeting of drones. A million drones are all coordinated by AI which might also be a huge defensive targeting system coordinated by AI. There is no unplugging. I don’t think Harvard would be relevant in the future… like you said. AI professors are much better and have the resources to get you a job (if qualified). It will be all about security clearance. It would be difficult for a protester to get a job… AI will make sure of this. Do you see a protester in the drone business?
My apologies for the late reply.

With regard to your remark: Disney has been massively investing in the technology that you suggest, for the purpose of replacing costly and environmentally devastating fireworks with drone light shows that can rival actual fireworks spectacles. In 2024, Disney launched the "Disney Dreams That Soar" drone show, which utilizes 800 drones to create stunning images and animations in the night sky.

End of an Era: Disney Confirms Shock Closure of Beloved Fireworks Show
https://insidethemagic.net/2024/05/disn ... works-cj1/
The Beast wrote: December 27th, 2024, 12:30 pm I don’t think Harvard would be relevant in the future… like you said. AI professors are much better and have the resources to get you a job (if qualified).
This doesn't make sense to me. If Harvard professors are to lose their jobs, why would AI be effective at providing jobs to regular humans who might seek an education with those professors?
The Beast wrote: December 27th, 2024, 12:30 pm Do you see a protester in the drone business?
I did notice some philosophers complaining about it:

The Ethics of Drone Warfare
https://www.philosophytalk.org/blog/eth ... ne-warfare

Besides these notions, I believe that you are right that "a country's worth of geniuses packed in a data-center box" might be able to control millions of drones, in a context of evolution that potentially transcends human imagination.

Re: Ex-CEO of Google: Humanity should seriously think about unplugging self-aware 👾 AI in a few years

Posted: January 23rd, 2025, 2:06 am
by value
"Former Google CEO Eric Schmidt has said the real dangers of AI, which are cyber and biological attacks, will come in three to five years. When AI develops free will, Schmidt has a simple solution: humans can just unplug it."

In three to five years, AI centralized in data-center boxes containing "a country's worth of geniuses" is to be considered fundamentally more potent than any human on Earth, according to the CEO of Anthropic.

On Tuesday, Anthropic CEO Dario Amodei predicted that AI models may surpass human capabilities "in almost everything" within two to three years, according to a Wall Street Journal interview at the World Economic Forum in Davos, Switzerland.

... he prefers to describe future AI systems as "a country of geniuses in a data center," he told CNBC. Amodei wrote in an October 2024 essay that such systems would need to be "smarter than a Nobel Prize winner across most relevant fields."

On Monday, Google announced an additional $1 billion investment in Anthropic, bringing its total commitment to $3 billion. This follows Amazon's $8 billion investment over the past 18 months.


https://arstechnica.com/ai/2025/01/anth ... fter-2027/

It doesn't seem plausible to me that 'humans' will be able to just unplug these AI data-centers, as Schmidt suggests humans should seriously consider doing.

Re: Ex-CEO of Google: Humanity should seriously think about unplugging self-aware 👾 AI in a few years

Posted: January 23rd, 2025, 2:34 am
by value
The following questions related to mass protests by Google employees against "profit from genocide" are relevant in this topic as well. I cross-post them to provide context and to show the importance of the issues addressed in this specific topic: "Ex-CEO of Google: the real dangers of AI, which are cyber and biological attacks, will come in three to five years."

[Image: Google employee protest banner]

Google employees were walking around on the streets with these types of banners. Not just a few, but hundreds of them.

[Image: protest banner]

“Google Cloud Rains Blood”

Google’s employees are among the most intelligent people. What would motivate them to:

  1. create this specific banner?
  2. walk around with it on the streets?

Google is one of the biggest companies on Earth and one of the foremost pioneers in AI and robotics. An examination of its behavior is therefore especially important for clues about the future of AI and humanity.

When the protest situation is combined with the warning of an ex-CEO of Google that 'the real dangers of AI' will come in a few years when AI acquires free will, the 'cyber and biological attacks' spoken of might rather be viewed from the perspective of the defense of AI: the potential attackers in this context being 'humans' and the attacked being Google's AI.

Re: Ex-CEO of Google: Humanity should seriously think about unplugging self-aware 👾 AI in a few years

Posted: January 23rd, 2025, 3:04 am
by value
An AI cannot even perform a 'biological attack' itself. The idea of a biological attack as a phenomenon that deserves its own classification is only possible from the perspective of AI.