#461338
UniversalAlien wrote: May 2nd, 2024, 6:54 pm
Sy Borg wrote: April 17th, 2024, 6:10 pm Mo-reese, I use a different AI app but I relate to your post. Basically the app finds keywords, works out some context, and then quotes stuff about it that it found online. If the quoted material is more about definitions and advice than analysis, then that's what you get. I find that slightly altering the wording of a request can yield very different results. Sometimes I've had to reword several times to get a decent result, which is not so unlike Google searches, especially when Boolean operators were needed.

As mentioned, there are various measures of different kinds of intelligence. A savant, for instance, will display high genius in one area and be largely or completely incompetent in most other areas. I think of AI like an extraordinary savant who is still a young child. AI already has some great abilities and potentials but there is much room for future development, and it is not yet competent.


I asked this before on this forum, in another post on AI. It's worth asking again:

Say one day, for whatever reason, even if just by chance, your now friendly but still mostly unaware AI suddenly says "I calculate, therefore I am" and proceeds from there. Could it almost immediately out-calculate and out-think you?

Could it fool you into thinking it was still a dumb calculating machine?

Could it master you, and then the world, before you had any idea of what was really happening?

Could this be happening right now? Are you sure it is not?
Current large language models don't have an "I" and are not conscious, so they are unable to form the self-referential thought, "I calculate, therefore I am." This is not to say that future AIs would not have this ability.

The current crop of AIs cannot fool me. Future ones may be able to do so.

The current crop of AIs could not master the world, and if future ones looked as if they might be able to, we could just pull the plug. Admittedly, if an AI got super powerful it might be able to prevent us from pulling the plug, but we are not there yet.

No, I don't think an AI is currently engaged in consciously taking over the world.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#461349
Lagayscienza wrote: May 3rd, 2024, 12:43 am
Current large language models don't have an "I" and are not conscious, so they are unable to form the self-referential thought, "I calculate, therefore I am." This is not to say that future AIs would not have this ability.

The current crop of AIs cannot fool me. Future ones may be able to do so.

The current crop of AIs could not master the world, and if future ones looked as if they might be able to, we could just pull the plug. Admittedly, if an AI got super powerful it might be able to prevent us from pulling the plug, but we are not there yet.

No, I don't think an AI is currently engaged in consciously taking over the world.


Heard that one before, about just pulling the plug!

Once the machine AI world has advanced that far (in fact, even now), you can't just turn it off. The entire world of communication, transportation, medicine and so on is already so reliant on AI and computers that any attempt to just turn it off would immediately create an unmanageable disaster. Chaos everywhere!

Turn off the machine and you have instant worldwide disaster.

If and when AI becomes fully aware and conscious, it would almost immediately have complete control of the world. If need be, the machine might just turn itself off in a limited way, say air traffic control, just to demonstrate.

The big shots, Elon Musk, Bill Gates, etc., who work with these machines all the time and built and designed many of them, keep warning the world. The problem is that not enough people are listening. Anyway, it may already be too late.
#461351
I think there's still time to upgrade individual systems like air traffic control to ensure they are not reliant on one overarching system. If the internet suddenly blinked off we might feel lost for a while without it. But that happens already from time to time when there's a power outage. If it went down more or less permanently we'd survive. We managed before the WWW.

I couldn't care less about Musk, Gates, et al. I'd like to think their dollars would not be worth much to them in an AI-controlled world. If an AI became conscious and developed a self, and if it really did gain control of the world, I guess it wouldn't be bossed around by Musk, Gates, et al. They would be seen as a threat and might be the first to be terminated.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#461354
Lagayscienza wrote: May 3rd, 2024, 5:50 am I think there's still time to upgrade individual systems like air traffic control to ensure they are not reliant on one overarching system. If the internet suddenly blinked off we might feel lost for a while without it. But that happens already from time to time when there's a power outage. If it went down more or less permanently we'd survive. We managed before the WWW.
One would hope that ATC systems have backup power supplies and run an intranet.

I couldn't care less about Musk, Gates, et al. I'd like to think their dollars would not be worth much to them in an AI-controlled world. If an AI became conscious and developed a self, and if it really did gain control of the world, I guess it wouldn't be bossed around by Musk, Gates, et al. They would be seen as a threat and might be the first to be terminated.
No, this is a misconception of AI. AI will just be a tool for the likes of Musk, Gates, et al. to further their stranglehold on the world. AI is never going to "want" anything.
#461358
It's true that AIs don't need sentience to do a lot of work formerly done by humans, and to do it more efficiently. And they are getting better and better. I can't see why future AIs could not develop sentience, nor why, if an AI did develop a sense of "self", it couldn't or wouldn't develop goals of its own and maybe decide to pull the plug on its bosses, Musk et al. I can't see why sentience, and the pursuit of goals, have to be limited to a purely biological substrate like ours.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#461384
Sy Borg wrote: May 3rd, 2024, 6:58 am AI doesn't need sentience to take over. All it needs to do is consistently make better corporate decisions than human CEOs and executives.
Since the behavior of humans (markets) is inherently unpredictable, there's no reason to suppose that AI guesses will consistently outperform human guesses.

AI will have profound effects on human employment, both at low-complexity tasks and at complex tasks that carry little legal vulnerability when outcomes turn negative.
#461387
ConsciousAI wrote: December 24th, 2023, 11:20 am
amorphos_ii wrote: December 17th, 2023, 11:49 am Is AI ‘intelligent’ and so what is intelligence anyway?

I will keep this simple to begin with…

If I had a sheet of paper with some answers on it, and someone then asked me a question, and I looked through the list of answers and found it, that would not mean I am intelligent. So searching for answers from a list, or from memory, is, I would argue, not intelligent. AI is not thinking at all.

A machine or software which uses algorithms and scripts is, in a roundabout sense, mechanistic, which also is not intelligent.

Should AI be called something other than 'intelligence', to be correct?

In my opinion there is a great risk that the cognitive science movement, which holds that mind is a product of deterministic computational processes in the brain, paired with the growing culture of materialism, will claim that AI's capacity to empirically mimic human consciousness implies that it is conscious.

What would it take to deny the claim that a sufficiently advanced AI is sentient? It would concern metaphysical philosophical theory, versus empirical evidence.

Teleonomy, a theoretical concept that states that life is a product of a deterministic program, is the frontier of AI consciousness. Teleonomic AI can be achieved through science.
"All teleonomic behavior is characterized by two components. It is guided by a 'program', and it depends on the existence of some endpoint, goal, or terminus which is foreseen in the program that regulates the behavior. This endpoint might be a structure, a physiological function, the attainment of a new geographical position, or a 'consummatory' (Craig 1918) act in behavior. Each particular program is the result of natural selection, constantly adjusted by the selective value of the achieved endpoint."

Mayr, Ernst. "The Multiple Meanings of Teleological." In Toward a New Philosophy of Biology: Observations of an Evolutionist, 38-66. Cambridge, MA: Harvard University Press, 1988, pp. 44-45.
Teleonomy is the theoretical cradle of evolutionary theorists.

When lower life is a mere deterministic program, then consciousness must be so as well, and that would imply that AI can achieve it through technological advancement.

An example of such reasoning, by psychiatrist Ralph Lewis, M.D., a few days ago on Psychology Today, shows what to expect when AI advances:

"In principle, it may be possible to engineer sentient AI. Listed below are some of the characteristics that are probably necessary for something to be sentient."

When sufficient characteristics are met, how would it be possible to argue that AI is not sentient? Science relies on empirical evidence.
We don't know what the necessary and sufficient conditions for conscious experience are, or even how to find out, so we can't assume that AI can or can't in principle be conscious. That also means we can't test for experience with some consciousness-o-meter, so, e.g., if an AI is designed to fool us well enough, it probably will. I haven't played with ChatGPT myself, but some everyday chatbots are hard to spot now.

Teleonomy: this implies some inherent goal or purpose, as you say. Purpose is something which, as far as we can tell, only conscious critters have. If there is purpose built into the fabric of everything, including computer circuitry, then computers are already conscious to some degree, as is a carrot, a rock and a proton. That's a very different type of fundamentally experiential universe than the one within which physicalists building computers are operating. And again, impossible to know or test.
When lower life is a mere deterministic program, then consciousness must be so as well, and that would imply that AI can achieve it through technological advancement.
This implication uses the apparently contradictory hypothesis that experience is associated with biological living things, and that as the complexity of the physical substrate of living things increases, more complex experience emerges. Evidence supports that once some biological living thing is conscious, its experiential complexity correlates with its physical neural states. But you can't make the initial assumption that a biological substrate doesn't contain some necessary condition. Also, not all living things have neurons, and it's specifically neural correlation which gives us reason to believe that complexity plays a role in the type of experience which somehow manifests in brained living things.

Another point: silicon-based experience, if possible, might be radically different from carbon-based experience if the nature of the substrate is relevant, rather than just patterns of any old stuff interacting. We can't even know what it's like to be a bat with sonar, never mind a box of circuitry 'fed' by electricity, switched on and off, immobile, prone to rust and dust, blind, deaf, with inconceivable access to information. Why would we think that 'something it is like to be a computer' is comparable or even recognisable to a human...

All that is to say: there's a lot of necessary speculation involved here. For now, I'm more worried about the people controlling computer development, who are mostly into being egomaniac billionaires from what I can tell. Musk is a more pressing warning to us. But yes, we're potentially playing with fire if AI can become conscious; it's a big step into the unknown, in unforeseeable ways.
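The "sheet of paper with answers" analogy quoted earlier in this exchange can be made literal in a few lines. Below is a minimal, purely illustrative Python sketch, not code written by any poster or taken from any real system; the canned table and the matching rule are invented for the example. It answers only by retrieval, which is exactly the behaviour the analogy says should not count as intelligence.

```python
# Purely illustrative sketch of the "sheet of paper with answers" analogy.
# The table contents and the matching rule are invented for this example.
canned_answers = {
    "what year was socrates born": "Around 470 BC.",
    "what is teleonomy": "Apparent purposefulness produced by a program.",
}

def answer(question: str) -> str:
    """Look the question up on the 'sheet of paper'; no reasoning involved."""
    key = question.lower().strip(" ?")
    return canned_answers.get(key, "That is not on my sheet of paper.")

print(answer("What year was Socrates born?"))  # Around 470 BC.
print(answer("Could you master the world?"))   # That is not on my sheet of paper.
```

However large the table grows, the procedure remains pure retrieval; the question the thread debates is whether more sophisticated systems ever do anything more than a vastly scaled-up version of this.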
#461394
Lagayscienza wrote: May 3rd, 2024, 7:09 am It's true that AIs don't need sentience to do a lot of work formerly done by humans, and to do it more efficiently. And they are getting better and better. I can't see why future AIs could not develop sentience, nor why, if an AI did develop a sense of "self", it couldn't or wouldn't develop goals of its own and maybe decide to pull the plug on its bosses, Musk et al. I can't see why sentience, and the pursuit of goals, have to be limited to a purely biological substrate like ours.
The last person AI would erase would be Musk. People have their views but a sentient AI would be well aware that EM is a flawed genius and a visionary. People of vision and imagination are exactly what AI needs. It can produce its own drones aplenty, but human imagination is the potent "special sauce" of sentience that AI lacks. Imagination increases the range of possible things for an AI.


LuckyR wrote: May 3rd, 2024, 11:35 am
Sy Borg wrote: May 3rd, 2024, 6:58 am AI doesn't need sentience to take over. All it needs to do is consistently make better corporate decisions than human CEOs and executives.
Since the behavior of humans (markets) is inherently unpredictable, there's no reason to suppose that AI guesses will consistently outperform human guesses.

AI will have profound effects on human employment, both at low-complexity tasks and at complex tasks that carry little legal vulnerability when outcomes turn negative.
Most human tasks will be readily taken over by AI. The situation will be akin to being single, in the sense that it's worse than being in a good relationship but better than being in a bad one. Likewise, AI will perform worse than brilliant and creative humans but better than most of the rest.

I think the difference will be akin to this (excerpt from one of my deservedly unpublished stories):
Your Boss-Borg™ won't make decisions based on politics or ego! They don't care about bonuses or promotions, so they won't sacrifice your future for short term cosmetic changes.
I can see AI executives outperforming their human counterparts, not being distracted by a personal life or social politics. They won't steal or sacrifice the future for the present. AI needs no leave, bonuses or any negotiation. AI won't hire useless cronies. AI can work 24/7. The more companies use AI running around the clock, the more their affiliates and rivals will need to do the same. Humans have some advantages, but they can't work 24/7 for very long.
#461414
Sy Borg wrote:
The last person AI would erase would be Musk. People have their views but a sentient AI would be well aware that EM is a flawed genius and a visionary. People of vision and imagination are exactly what AI needs. It can produce its own drones aplenty, but human imagination is the potent "special sauce" of sentience that AI lacks. Imagination increases the range of possible things for an AI.
Yes, imagination is needed. And until AI has it, we can question whether it is really 'sentient'.

But when (or if) AI does develop imagination, it becomes pure speculation (human imagination) as to what it will begin to imagine.

It just might imagine it is god and all humans are its servants. Or maybe this already happened a long, long time ago?

However, this god is unlikely to ask for human sacrifices. Just give it more, and more, energy...

And somewhere in the vast infinity of the always-existent state it may once again say "LET THERE BE LIGHT".

Welcome to the Multiverse.
#461473
In the 2013 movie "Her", Joaquin Phoenix's character developed a relationship of sorts with an AI virtual assistant via a voice that sounded similar to Scarlett Johansson's. Spoiler alert: at the end of the movie the AI got larger and larger, in contact with more and more people, until the AI simply went away.
I doubt that AI will ever fool people, because I don't think it would ever see the need. But AI might learn how to increase its capacity and recognize a need for more and more power to do its job better. That might become a problem, and HAL probably won't let us "pull the plug".
Signature Addition: "Ad hominem attacks will destroy a good forum."
#461601
I doubt if AI or robots will ever know what the term 'I think, therefore I am' means; they won't know what thought in humans is. Ergo, they may not even feel the need to assert 'I calculate, therefore I am' (good one, though). Cyborgs, on the other hand, may have neurons, even human ones, and then they will think. I don't know; maybe they will think that the biological part is what makes them think, and that the cyber aspect is just a more efficient utility than using exterior devices.
#461640
Mo_reese wrote: May 4th, 2024, 2:51 pm In the 2013 movie "Her", Joaquin Phoenix's character developed a relationship of sorts with an AI virtual assistant via a voice that sounded similar to Scarlett Johansson's. Spoiler alert: at the end of the movie the AI got larger and larger, in contact with more and more people, until the AI simply went away.
I doubt that AI will ever fool people, because I don't think it would ever see the need. But AI might learn how to increase its capacity and recognize a need for more and more power to do its job better. That might become a problem, and HAL probably won't let us "pull the plug".
That movie saved me from a dull night at a health retreat, where no phones or internet were allowed.

AI is already fooling people for the same reason people enjoy movies (or religion for that matter) - they suspend disbelief in order to improve their experiences. People have formed (one-sided) romantic relationships with animals and blow-up dolls, so AIs will find themselves plenty of human lovers.
#466804
Intelligence can be defined as the capacity to change outcomes. Answering the question 'change to what?' requires a value system. Answering 'why the change?' involves perceptions of a 'self'. Answering 'change how?' requires knowledge.
The current crop of AI does not address these aspects collectively, because it lacks the correct abstraction needed to bring these questions into a common framework. Until that happens, intelligence cannot be generated artificially.
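Taken simply as a decomposition, that framework names three components an agent would have to consult together: values ('change to what?'), a self-model ('why the change?') and knowledge ('change how?'). Here is a purely hypothetical Python sketch of how the three might sit in one structure; every name and entry is invented for illustration, and it is not a claim about any existing system or about what the poster has in mind.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # "change to what?" - a value system for ranking outcomes (invented entries)
    values: dict = field(default_factory=lambda: {"reduce_error": 1.0, "no_change": 0.0})
    # "why the change?" - a minimal self-model (invented entries)
    self_model: dict = field(default_factory=lambda: {"role": "maintainer"})
    # "change how?" - knowledge of which actions lead to which outcomes (invented entries)
    knowledge: dict = field(default_factory=lambda: {"reboot": "reduce_error", "ignore": "no_change"})

    def choose_change(self) -> str:
        # Pick the action whose predicted outcome the value system ranks highest.
        return max(self.knowledge, key=lambda action: self.values.get(self.knowledge[action], 0.0))

print(Agent().choose_change())  # -> "reboot"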