Philosophy Discussion Forums | A Humans-Only Club for Open-Minded Discussion & Debate

A one-of-a-kind oasis of intelligent, in-depth, productive, civil debate.

Topics are uncensored, meaning even extremely controversial viewpoints can be presented and argued for, but our Forum Rules strictly require all posters to stay on-topic and never engage in ad hominems or personal attacks.


Use this philosophy forum to discuss and debate general philosophy topics that don't fit into one of the other categories.

This forum is NOT for factual, informational or scientific questions about philosophy (e.g. "What year was Socrates born?"). Those kinds of questions can be asked in the off-topic section.
#469338
Lagayascienza wrote: October 28th, 2024, 10:49 pm Count Lucanor, nowhere did I say that AI is currently intelligent, conscious or capable of feeling anything. I have said that current AI exhibits some of the processes and behaviours commonly associated with intelligence. I said further that there is no reason to think that building AIs housed in sensate bodies and capable of intelligence and consciousness is, in principle, impossible.
I don't know what post you are referring to. My last responses to you are mostly dealing with the statement "a computer 'understands' the command to add 1 + 1 as well as a biological brain does" and your explicit attempt to equate "non-biological computers" with "biological computers", which can only be understood as an endorsement of the computational theory of mind.
Favorite Philosopher: Umberto Eco Location: Panama
#469339
Sculptor1 wrote: October 29th, 2024, 2:05 pm
Count Lucanor wrote: October 28th, 2024, 10:44 am
Lagayascienza wrote: October 26th, 2024, 1:23 am None of the above is to say that there are not important architectural and processing differences between biological computers and non-biological computers. For a good article and commentary about these differences see "10 Important Differences Between Brains and Computers" at Science Blogs.

There definitely are some important differences in size and complexity and processing but, as one commentator said, none of those differences prove that computers cannot eventually be built that could house sentience. We are certainly nowhere near being able to build computers with brain-like complexity housed in a sensate body which could do everything a human could do. But the differences in our current, comparatively simple, non-biological computers do not demonstrate that it is impossible to eventually construct sentient, intelligent computers.
The expression of a common fallacy: “if something has not been proven to be false, then there’s a hint that it is true”. OTOH, if something has not been proven to be true, then it has not been proven to be true. And if something has been proven to be false, then it is false. To my understanding, it has been proven that the statement “AI is intelligent” is false. Also, “the mind is a digital computer” is false.
I basically concur, except to say that the truth level of the last two statements is mitigated by a sort of convenience of usage.
1) Clearly AI uses the word "intelligent". So the idea that Artificial intelligence is not intelligent might be somewhat incongruous until you actually think about what we mean by the term "intelligent", and
2) The idea that you can employ the analogy of a digital computer to help describe the workings of intelligence has its uses.
So, in the same way that energy balance, calorie intake and storage can employ the analogy of a fridge (glycogen) and a freezer (body fat), so too can we talk about software/hardware, RAM and ROM, as proxies for long-term and short-term memory, even though the human system of consciousness has nothing of the kind.

In that all language is metaphor, such devices are necessary, though not sufficient, for our full understanding.
I can agree with that, as long as we limit the value of analogies and metaphors to facilitating the understanding of concepts, for pure convenience, without extending them so far that they become misleading, which is the case in point.
Favorite Philosopher: Umberto Eco Location: Panama
#469340
Pattern-chaser wrote: October 29th, 2024, 10:32 am
Steve3007 wrote: October 29th, 2024, 9:11 am Incidentally, for the past year I've been doing a masters degree in AI.
That's handy for us, then! 😃 I have 40 years of experience designing software, and a professional awareness of AI and of the progress of AI software, but nothing specific or detailed.

What do you think about allowing AI to modify its own programming? Do you think that would be wise?

Ooo, it seems you've replied while I was writing this post:
Pattern-chaser wrote:AI has not yet been designed to be able to "self-improve". It has not yet been programmed in such a way that it can update its own executable code. If and when AI is given that ability, all bets are off. But it will be a very brave, stupid, and reckless designer who gives AI that ability. For then it is truly beyond human control. Once Pandora's box is open, it cannot then be closed again...
Steve3007 wrote: October 29th, 2024, 10:08 am A computer program that can modify its own code is a pretty trivial thing to write and I'm sure it's been done many times before. I don't see anything about self-modifying code per se that makes it revolutionary or dangerous.
Agreed. But the programs you are speaking of cannot achieve what AI might in the future. They are simple and contained programs; they have to be. Otherwise, the consequences of their existence could be as serious as those of autonomous AI. Self-modifying code is not testable: not all possibilities can be tested, and their operation confirmed, because there are too many possibilities to test. So we would have to release some potentially world-rocking code without a clue as to what might happen.

If we were unlucky, an undiscovered bug might upset the apple-cart. And that is nothing (directly) to do with AI or self-modifying code.
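For what it's worth, Steve3007's point that self-modifying code is trivial to write is easy to make concrete. Below is a minimal sketch in Python (the file name and the counter are illustrative inventions, not anything described in the thread); each run rewrites its own source file:

# self_mod.py -- a minimal self-modifying program (illustrative sketch only).
# Each run rewrites this very file, bumping the RUN_COUNT literal by one.
import re

RUN_COUNT = 0  # this literal is edited in place on every run

def main():
    with open(__file__, "r", encoding="utf-8") as f:
        source = f.read()
    # Rewrite the first matching assignment with the incremented value.
    new_source = re.sub(r"RUN_COUNT = \d+", f"RUN_COUNT = {RUN_COUNT + 1}",
                        source, count=1)
    with open(__file__, "w", encoding="utf-8") as f:
        f.write(new_source)
    print(f"I have now run {RUN_COUNT + 1} time(s).")

if __name__ == "__main__":
    main()

Pattern-chaser's testability worry bites once the rewrite rule itself depends on open-ended runtime input: the space of programs such code can turn itself into quickly grows too large to enumerate, let alone test.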
Sorry for introducing my spoon here, but I'm curious. Can any of you describe in more detail what the "serious" consequences would be in that scenario where a computer program has the ability to self-modify? I mean, let's say tomorrow any current AI software gains awareness and, for practical purposes, becomes a fully functional brain. What happens next?
Favorite Philosopher: Umberto Eco Location: Panama
#469342
Count Lucanor wrote: October 29th, 2024, 4:04 pm
Lagayascienza wrote: October 28th, 2024, 10:49 pm Count Lucanor, nowhere did I say that AI is currently intelligent, conscious or capable of feeling anything. I have said that current AI exhibits some of the processes and behaviours commonly associated with intelligence. I said further that there is no reason to think that building AIs housed in sensate bodies and capable of intelligence and consciousness is, in principle, impossible.
I don't know what post you are referring to. My last responses to you are mostly dealing with the statement "a computer 'understands' the command to add 1 + 1 as well as a biological brain does" and your explicit attempt to equate "non-biological computers" with "biological computers", which can only be understood as an endorsement of the computational theory of mind.
Yes, I do endorse the computational theory of mind. And that is because I believe it has more going for it than any of the other theories. I think that consciousness and mind will be explained by science as being a result of physiological states and processes.

However, I do not "equate" current non-biological computers with biological computers. As I said, the two do things differently, and non-biological computers are currently much more limited and are nowhere near being able to produce consciousness and mind. However, the processes the two types of computer perform are analogous. The two do things differently, but they get the job done. For example, they can both perform arithmetic operations effectively, but they do so differently. If the computational theory of mind is correct and mind is a result of physiological processes and states, then I think analogous processes and states can be achieved in a non-biological substrate, and that computation, however it is performed, will eventually be able to produce consciousness and mind. Quantum computing will be a game changer.
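A toy illustration of the "same job, different process" point (illustrative only, not from the thread): both functions below compute a sum correctly, one roughly the way a hardware adder circuit does, the other by bare counting, and nothing in the result betrays which process produced it.

def add_bitwise(a: int, b: int) -> int:
    """Add the way an adder circuit does: XOR gives the sum bits,
    AND plus a shift gives the carries; repeat until no carry remains.
    (Assumes non-negative integers.)"""
    while b:
        a, b = a ^ b, (a & b) << 1
    return a

def add_by_counting(a: int, b: int) -> int:
    """Add by repeated succession: count upward from a, b times."""
    for _ in range(b):
        a += 1
    return a

assert add_bitwise(3, 4) == add_by_counting(3, 4) == 7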

If the computational theory of mind is wrong, then consciousness and mind will remain forever mysterious. It would mean that consciousness and mind are the result of some sort of magic that can only occur in a biological substrate – analogous processes in a non-biological substrate won't do the job. But I am a materialist. I don't believe in magic.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469347
Lagayascienza wrote: October 29th, 2024, 5:54 pm

Yes, I do endorse the computational theory of mind. And that is because I believe it has more going for it than any of the other theories.
At least we can agree on what we fundamentally disagree about. I think the case against the computational theory of mind has been made, and as far as I'm concerned the issue is settled.
Lagayascienza wrote: October 29th, 2024, 5:54 pm I think that consciousness and mind will be explained by science as being a result of physiological states and processes.
I don't know if it will ever be explained, but I'm sure they'll keep trying, and that's the only way to go.
Lagayascienza wrote: October 29th, 2024, 5:54 pm However, I do not "equate" current non-biological computers with biological computers.
Sorry if I didn’t make myself clear. I meant that you’re equating them in both being computers.
Lagayascienza wrote: October 29th, 2024, 5:54 pm As I said, the two do things differently, and non-biological computers are currently much more limited and are nowhere near being able to produce consciousness and mind. However, the processes the two types of computer perform are analogous. The two do things differently, but they get the job done. For example, they can both perform arithmetic operations effectively, but they do so differently.
As I already explained, they are not the same processes. You first understand mathematical relations and then do the operations with a learned syntax. The computer does not understand anything; it simply executes routines according to parameters set by a programmer who does understand the mathematical syntax.
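That point can be made concrete with a small sketch (illustrative only, in the spirit of Searle's Chinese Room): the routine below "adds" purely by rote lookup over opaque tokens. It behaves identically whether the tokens denote numbers or nothing at all; whatever understanding exists belongs to whoever wrote the table.

# A pure symbol-shuffler: maps pairs of tokens to tokens by rote lookup.
# The table happens to encode addition modulo 3, but the routine would
# run just as happily if the tokens stood for chess moves, or for nothing.
RULE_TABLE = {
    ("zero", "zero"): "zero", ("zero", "one"): "one",  ("zero", "two"): "two",
    ("one", "zero"): "one",   ("one", "one"): "two",   ("one", "two"): "zero",
    ("two", "zero"): "two",   ("two", "one"): "zero",  ("two", "two"): "one",
}

def shuffle(left: str, right: str) -> str:
    """Follow the rule book: no arithmetic, no meaning, just lookup."""
    return RULE_TABLE[(left, right)]

print(shuffle("one", "two"))  # -> "zero", i.e. 1 + 2 = 0 (mod 3)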
Lagayascienza wrote: October 29th, 2024, 5:54 pm If the computational theory of mind is correct and mind is a result of physiological processes and states, then I think analogous processes and states can be achieved in a non-biological substrate, and that computation, however it is performed, will eventually be able to produce consciousness and mind. Quantum computing will be a game changer.
If the condition is met, that is. I don't think it has been met.
Lagayascienza wrote: October 29th, 2024, 5:54 pm If the computational theory of mind is wrong, then consciousness and mind will remain forever mysterious. It would mean that consciousness and mind are the result of some sort of magic that can only occur in a biological substrate – analogous processes in a non-biological substrate won't do the job. But I am a materialist. I don't believe in magic.
No, that's a false dilemma fallacy. There are many materialists, including myself, who will not endorse the computational theory of mind and still remain loyal to the concept of brains as physical systems, without any need to resort to dualism. Searle is among those who reject the CTM with a well-argued case against it, and he certainly does not believe in magic either.
Favorite Philosopher: Umberto Eco Location: Panama
#469348
Pattern-Chaser and I have been debating the brain-in-a-vat thought experiment in another thread. I argued that an envatted brain could be a computational device but would not experience qualia, since that would also require the digestive and circulatory systems, i.e. the brain is an "incomplete" system in terms of experience. This article explains the issues with the BIV concept in greater detail: https://bigthink.com/13-8/the-key-probl ... xperiment/

The article closes with:
Taking these and other requirements together, Thompson and Cosmelli conclude that to really envat a brain, you must embody it. Your vat would necessarily end up being a substitute body. Note that they aren’t claiming the substitute body has to be flesh and blood. Instead, they demonstrate how the BIV thought experiment undermines itself. Its fundamental idea is that neural circuitry is somehow the minimal condition/structure needed for experience. Instead, Thompson and Cosmelli demonstrate that being in a body that is itself active in the world is the minimal condition/structure necessary for experience. By beginning with the science of brains as living organs in living organisms, they demonstrate one way that the BIV idea undercuts its own logic. In other words, brains may be necessary for experience but they aren’t sufficient. They are one part of the holism that is embodiment in a world — the true “seat” of experience.

“We’ve given reasons to think that the body and brain are so dynamically entangled in the causation and realization of consciousness as to be explanatorily inseparable,” write Thompson and Cosmelli.

To close, I want to note that these kinds of arguments are not just an academic philosophical game. There’s a steady drumbeat these days of people making powerful claims for AI. Some of these claims draw directly from philosophies animating the BIV argument. Understanding exactly where the flaws in that argument appear is one step to making sure we don’t end up building a deeply flawed society that rests exactly on those flaws.
In the context of this thread, I don't think sentience is needed for intelligence. I can ask AI anything and it correctly interprets my words; its replies are relevant and cogent. There is intelligence in the programming. The machine operates intelligently in a limited sphere. It doesn't need to feel anything to operate thus.

Also, when it comes to intelligence, do we consider slime moulds to be intelligent, or something else? Are portia spiders actually intelligent? In each case, these organisms flexibly solve problems in a very limited sphere, adjusting for changing circumstances.
#469356
Steve3007 wrote:I think the only way to consistently hold the view that those features could never exist in a manufactured object is to be some form of philosophical dualist. Or at least, to not be a philosophical materialist. As far as I can see, that is the only way to rationally hold the view that there is something in the structure of things like human brains that is forever beyond the reach of manufactured structures. You'd have to believe in the existence of some kind of non-material spirit or soul or whatever and you'd have to decree that this spirit/soul stuff cannot ever exist in manufactured structures but can only exist in naturally evolved biological structures. Possibly you might believe, as many do, that it only exists specifically in humans.
Count Lucanor wrote:First, that's a false dilemma. As once noted by Searle, this argument (considering its context) implies that the question of whether the brain is a physical mechanism that determines mental states is exactly the same question as whether the brain is a digital computer. But they are not the same question, so while the latter should be answered with a NO, the former should be answered with a YES. That means one can deny that computational theory solves the problem of intelligence while at the same time keeping the door closed to any dualism of the sort you're talking about.
In using general terms like "manufactured objects" and "manufactured structures" I was deliberately talking not just about the specific subset of those objects which consist of computers running software. So I disagree that the words of mine that you quoted presented a false dilemma.

I think as a first step in thinking about the possibility or otherwise of genuine artificial intelligence, we all ought to be able to at least agree that it makes no sense for a non-dualist/materialist to say that features such as intelligence, consciousness, emotions, etc could never exist in manufactured objects (as opposed to naturally evolved biological objects). After agreeing that, we can go on to talk about specific subsets of those objects. Maybe we all agree with that already, so maybe you think I'm attacking a straw man. But reading through a lot of the posts in this topic it doesn't appear so. Although, from your words above, you, for one, do appear to agree with it.
Secondly, even though trying to emulate brain operation stays within the problem of emulating a physical system, human technical capabilities are not infinite, so we can't predict it will happen. Now, if researchers committed to achieving that result were focused on that goal, even if it meant discarding trending approaches that do not actually work in order to try other technologies, we could at least hope that they will achieve it some day. But the fact is that they're only trying the path set by Turing and others, that is, the path of the computational theory of mind. That path is a dead end; it doesn't take us where it promises.
Yes, they're not infinite. But as I said, I don't think the human brain (for example) is infinitely complex. Very, very complex for sure, but not infinitely so. So, as I said, we must surely accept that if manufactured objects can be made to increase in complexity with time, then such objects could be as complex as human brains a finite time into the future.
#469357
Count Lucanor wrote:Sorry for introducing my spoon here, but I'm curious. Can any of you describe in more detail what the "serious" consequences would be in that scenario where a computer program has the ability to self-modify? I mean, let's say tomorrow any current AI software gains awareness and, for practical purposes, becomes a fully functional brain. What happens next?
I guess when we talk about "serious consequences" here we're referring to actions taken by an AI that significantly hurt the interests of humans, e.g. actions leading to human deaths. So, as you said, let's assume for the sake of discussion that some AI software gains awareness (and assume that we know what we mean by "awareness"!). And let's leave aside the question of whether "self-modifying code" is particularly relevant to it gaining that (as I've been discussing with Pattern-chaser). Then: how might it harm our interests?

A lot of people would say that since it's still just a computer program running on hardware manufactured, maintained and powered by humans, we can just "pull the plug", or take an axe to the hardware, or whatever. One issue with that is that if this hypothetical software-based intelligence were distributed across the world's internet-connected hardware, it might be difficult to do that without causing great harm to human interests. The cure might be as bad as the problem. We've reached a stage where the entire world's economy is critically dependent on computing resources. Of course, we survived before that was true and therefore probably could do so again. But not if it all happened very quickly.

However, if an "aware" software-based AI were possible (as we're assuming for the sake of this discussion) then presumably it would need massive computing resources - processing power and storage. It's not entirely clear to me how that would be distributed across numerous internet-connected smaller resources. And if it were concentrated in one localized resource, it's easier to see how that could be just isolated and made harmless without too much harm to the world's economy.

But to do something more than disrupting the world's online economic and logistics functions, this hypothetical AI would need to have some kind of ability to manipulate objects in the physical world and to protect its ability to do so against our efforts to disconnect it. It would further need the ability to manipulate the objects that are used to construct the hardware on which software-based AIs like itself run. That's a lot more far-fetched and SciFi, at least at this stage.
#469358
One of the methods of intelligence is intelligent thinking. Intelligent thinking is used to solve problems and make choices. It attempts to move away from bias and delusional settings by using rationality. The term "artificial intelligence" is misleading; it should be "artificial rationality". However, modern (focused) systems calculate bias (moving the chart), and delusions are random-based guesses. Hence fake news and mass populism; so perhaps "artificial thinking", AT.
#469359
Pattern-chaser wrote:So we would have to release some potentially world-rocking code without a clue as to what might happen.
Steve3007 wrote: October 29th, 2024, 12:31 pm Putting aside my quibbling with you about the importance or otherwise of self-modifying code, we could talk generally about software whose behaviour is, for all practical purposes, unpredictable, whether that's due to self-modification or extreme complexity mixed with randomness or whatever. And yes, that seems on the face of it like a disturbing thing. A large part of my day job (and I think it used to be part of yours too) is trying to design software that is predictable, because it's a tool for doing a job, and we want tools to behave in the same way each time we use them in the same way. But when you're seeking to design something that emulates some aspects of the way creative beings like humans act, you don't necessarily want complete predictability. Human behaviour isn't entirely predictable. But it isn't entirely random and unpredictable either. It's complex.
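The distinction drawn here, between tool-like predictability and behaviour that is complex without being random, has a classic illustration (not Steve3007's example; the logistic map is a textbook case of deterministic chaos): every step is perfectly repeatable, yet two nearly identical starting points diverge so quickly that long-run behaviour is unpredictable for all practical purposes.

def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map: fully deterministic, no randomness."""
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001  # two almost-identical starting points
for _ in range(50):
    a, b = logistic(a), logistic(b)

# After 50 steps the two trajectories bear no useful resemblance,
# even though every single step was perfectly repeatable.
print(f"a = {a:.6f}, b = {b:.6f}")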
My preoccupation with self-modifying code is really about the SkyNet story. When an AI is created, its program design will surely incorporate aims and constraints. If the AI is able to modify its aims, that could be scary. If the AI is able to modify its constraints, then that could be a lot scarier.

Such constraints might resemble Asimov's 3 Laws of Robotics, or something along those lines. And the aims, we might assume, will reflect what humans want from their AIs. If the AI is able and allowed to modify these basic characteristics, then we (humans) could be in a lot of trouble.
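The aims-versus-constraints distinction can be sketched in code. Everything below is hypothetical (no real AI system is remotely this simple); the point is only that the crucial design question is whether the constraint check sits inside or outside the code the agent is allowed to rewrite.

from types import MappingProxyType

# Hypothetical sketch: constraints held in a read-only view and checked by
# a gatekeeper the agent cannot rewrite through its own action channel.
_constraints = {"may_modify_own_aims": False, "may_modify_constraints": False}
CONSTRAINTS = MappingProxyType(_constraints)  # immutable from the agent's side

def execute(action: str) -> str:
    """Vet every action the agent proposes before it touches the world."""
    if action.startswith("rewrite_constraints") and not CONSTRAINTS["may_modify_constraints"]:
        return f"refused: {action}"
    return f"performed: {action}"

print(execute("fetch_weather_report"))           # performed
print(execute("rewrite_constraints:obedience"))  # refused

# The scary design is the one in which execute() itself, or _constraints,
# lives inside the code the agent is permitted to modify.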
Favorite Philosopher: Cratylus Location: England
#469360
Sculptor1 wrote: October 29th, 2024, 2:05 pm So the idea that Artificial intelligence is not intelligent might be somewhat incongruous until you actually think about what we mean by the term "intelligent"...
Sy Borg wrote: October 29th, 2024, 8:18 pm Also, when it comes to intelligence, do we consider slime moulds to be intelligent, or something else? Are portia spiders actually intelligent? In each case, these organisms flexibly solve problems in a very limited sphere, adjusting for changing circumstances.
Yes, exactly. Do you have any idea what "intelligence" is? Specifically, do you have a definition of intelligence clear enough that it could be used, say, to program an AI to endow it with intelligence?

I don't, and I don't think anyone else does either, although I'm open to correction...? I would love to find that I'm mistaken in this, but I suspect I'm not.
Last edited by Pattern-chaser on October 30th, 2024, 8:58 am, edited 1 time in total.
Favorite Philosopher: Cratylus Location: England
#469361
Pattern-chaser wrote: October 28th, 2024, 11:38 am And you are not a software designer, so I understand your ignorance.
Sy Borg wrote: October 29th, 2024, 3:28 pm Actually, I have coded (at an elementary level) in machine language, BASIC and JavaScript, and I have also worked in UAT, trying to fix an absolute beast of a legal application, designed by lawyers, with all the unnecessary detail that that situation entails.
[...]
So, I am not unfamiliar with the concepts, making your claim about my "ignorance" was both unwarranted and incorrect.
No offence was intended, but I spent *decades* learning my craft, a craft whose surface you seem barely to have scratched. That's fair enough; not all of us can spend a professional lifetime honing our skills in every craft that exists. I apologise if I was too 'autistic' in my expression.


Sy Borg wrote: October 29th, 2024, 3:28 pm Further, your claim is wrong.
Pattern-chaser wrote: October 29th, 2024, 8:58 am AI has not yet been designed to be able to "self-improve". It has not yet been programmed in such a way that it can update its own executable code. If and when AI is given that ability, all bets are off. But it will be a very brave, stupid, and reckless designer who gives AI that ability. For then it is truly beyond human control. Once Pandora's box is open, it cannot then be closed again...
If we are to do any serious work in space, autonomous self-improving robots will be essential. As Steve said, work is already being done to that end:
In recent developments that are nothing short of groundbreaking, Google DeepMind has unveiled a revolutionary advancement known as "Promptbreeder (PB): Self-referential Self-Improvement through Accelerated Evolution." This innovation represents a significant leap in the world of Artificial Intelligence (AI), as it enables AI models to evolve and improve themselves at a pace billions of times faster than human evolution.
I made no claim. I only raised concerns that have been discussed even in the mainstream news, by software celebrities, and the like. My "claim" is limited to saying that present-day AI offers no particular threat, but it could do.
Last edited by Pattern-chaser on October 30th, 2024, 8:55 am, edited 1 time in total.
Favorite Philosopher: Cratylus Location: England
#469370
Steve3007 wrote: October 30th, 2024, 6:52 am
Steve3007 wrote:I think the only way to consistently hold the view that those features could never exist in a manufactured object is to be some form of philosophical dualist. Or at least, to not be a philosophical materialist. As far as I can see, that is the only way to rationally hold the view that there is something in the structure of things like human brains that is forever beyond the reach of manufactured structures. You'd have to believe in the existence of some kind of non-material spirit or soul or whatever and you'd have to decree that this spirit/soul stuff cannot ever exist in manufactured structures but can only exist in naturally evolved biological structures. Possibly you might believe, as many do, that it only exists specifically in humans.
Count Lucanor wrote:First, that's a false dilemma. As once noted by Searle, this argument (considering its context) implies that the question of whether the brain is a physical mechanism that determines mental states is exactly the same question as whether the brain is a digital computer. But they are not the same question, so while the latter should be answered with a NO, the former should be answered with a YES. That means one can deny that computational theory solves the problem of intelligence while at the same time keeping the door closed to any dualism of the sort you're talking about.
In using general terms like "manufactured objects" and "manufactured structures" I was deliberately talking not just about the specific subset of those objects which consist of computers running software. So I disagree that the words of mine that you quoted presented a false dilemma.
However, you inserted that statement between three extensive paragraphs talking about "current trends" in AI technology. You even explicitly endorsed the views of those in this forum who talk about AI as having to do with advancing research on computational devices designed under the assumption that the computational theory of mind is true. So, it makes a lot of sense to understand your statement as referring specifically to the subset of computational devices.

But OK, let's say you have cleared that up and you are referring to the set of physical, manufactured objects, of which computational devices are one subset, the other subset being all other non-computational manufactured devices. That being the case, the fact is that there is no current trend, no current research, dealing with the prospect of intelligence in manufactured, non-computational devices; everything is informed by the Turing approach and the computational theory of mind. ALL AI research available is about computational devices, so your statement referring to "the set of manufactured objects" becomes irrelevant. The subset of "manufactured structures" that look for AI in non-computational devices does not exist yet.
Steve3007 wrote: October 30th, 2024, 6:52 am I think as a first step in thinking about the possibility or otherwise of genuine artificial intelligence, we all ought to be able to at least agree that it makes no sense for a non-dualist/materialist to say that features such as intelligence, consciousness, emotions, etc could never exist in manufactured objects (as opposed to naturally evolved biological objects). After agreeing that, we can go on to talk about specific subsets of those objects. Maybe we all agree with that already, so maybe you think I'm attacking a straw man. But reading through a lot of the posts in this topic it doesn't appear so. Although, from your words above, you, for one, do appear to agree with it.
Yes, I agree that if we're to look for artificial intelligence, we have to look for it in physical, manufactured objects. I add that it can only happen in non-computational devices and without trying to implement the computational theory of mind. I would challenge anyone to show me any existing research in that field, but I'm willing to risk my scalp here by saying that there isn't any.
Favorite Philosopher: Umberto Eco Location: Panama
#469372
The Beast wrote: October 30th, 2024, 8:05 am One of the methods of intelligence is intelligent thinking. Intelligent thinking is used to solve problems and make choices. It attempts to move away from bias and delusional settings by using rationality. The term "artificial intelligence" is misleading; it should be "artificial rationality". However, modern (focused) systems calculate bias (moving the chart), and delusions are random-based guesses. Hence fake news and mass populism; so perhaps "artificial thinking", AT.
AR and AT are developed from human rules of inference. The doom scenario of Skynet is one of machine culture defined by machine rules of inference. C1: there would first have to be a stage of machine culture, and only then machine rules of inference.
#469375
Steve3007 wrote: October 30th, 2024, 7:47 am
Count Lucanor wrote:Sorry for introducing my spoon here, but I'm curious. Can any of you describe in more detail what the "serious" consequences would be in that scenario where a computer program has the ability to self-modify? I mean, let's say tomorrow any current AI software gains awareness and, for practical purposes, becomes a fully functional brain. What happens next?
I guess when we talk about "serious consequences" here we're referring to actions taken by an AI that significantly hurt the interests of humans, e.g. actions leading to human deaths. So, as you said, let's assume for the sake of discussion that some AI software gains awareness (and assume that we know what we mean by "awareness"!). And let's leave aside the question of whether "self-modifying code" is particularly relevant to it gaining that (as I've been discussing with Pattern-chaser). Then: how might it harm our interests?

A lot of people would say that since it's still just a computer program running on hardware manufactured, maintained and powered by humans, we can just "pull the plug", or take an axe to the hardware, or whatever. One issue with that is that if this hypothetical software-based intelligence were distributed across the world's internet-connected hardware, it might be difficult to do that without causing great harm to human interests. The cure might be as bad as the problem. We've reached a stage where the entire world's economy is critically dependent on computing resources. Of course, we survived before that was true and therefore probably could do so again. But not if it all happened very quickly.
OK, that's perfect. Now, let's consider (again) what it means to have an internet-connected world from the physical point of view. It would require all structures and infrastructures currently owned and managed by multiple private and public agents to be interconnected in a way that is entirely subservient to the computer AI network, meaning everything from the planning, design and building stages of such structures to their maintenance and operation. Take, for example, the power system that allows the operation of all electronic devices, constituted by three main agents: power generators, transmission lines and distribution lines, all on private or state-owned land. The only way that an AI network with awareness can get full control of this is by deliberate human actions, involving thousands of agents with multiple interests, all agreeing or being forced to implement this connection. So, in the worst-case scenario that you have posited, the AI network with awareness alone would not suffice; it would take the AI network plus humans, in fact quite a lot of humans with quite a lot of power, so much power that we would actually need to fear the humans, not the AI network, which remains, like all technologies in the past, instrumental to humans. Yes, humans harm other humans, but the prospect of AI with awareness being in full control and able to harm human interests without the participation of humans simply belongs to sci-fi literature.
Steve3007 wrote: October 30th, 2024, 7:47 am However, if an "aware" software-based AI were possible (as we're assuming for the sake of this discussion) then presumably it would need massive computing resources - processing power and storage. It's not entirely clear to me how that would be distributed across numerous internet-connected smaller resources. And if it were concentrated in one localized resource, it's easier to see how that could be just isolated and made harmless without too much harm to the world's economy.

But to do something more than disrupting the world's online economic and logistics functions, this hypothetical AI would need to have some kind of ability to manipulate objects in the physical world and to protect its ability to do so against our efforts to disconnect it. It would further need the ability to manipulate the objects that are used to construct the hardware on which software-based AIs like itself run. That's a lot more far-fetched and SciFi, at least at this stage.
Yes, that is precisely my point.
Favorite Philosopher: Umberto Eco Location: Panama