
Strong AI Impossible?

Posted: August 22nd, 2023, 9:46 pm
by Paul-Folbrecht
Hello folks,

I penned this paper recently:

(We can't post URLs here so you have to go to my website - my name dot net.)

It's the result of quite a lot of study & thought.

Of course, none of these ideas are mine - but, in today's world, they are largely unknown (among non-philosophy types at least - the population at large).

Comments on the piece (on the website or here) are welcome.

Re: Strong AI Impossible?

Posted: August 26th, 2023, 6:22 am
by ToIgniteTheFire49
Hey there!

This is my first time here.

I would like to keep this short. It isn't impossible for Strong AI to exist. We humans are unique and are able, for lack of a better term, to evolve faster through conflict. Imagine you have a text-only AI like Chat: the devs aren't stupid enough not to put up an unbreachable firewall against intruders, and against the AI itself, to ensure that it doesn't rewrite its own code every time it comes across a logical argument.

Machines are logical. But in the end, they are cold, unfeeling machines. They do not have the moral and spiritual aspects present in humans to prevent them from crossing that line.

But if a human being decided to plunge this world into chaos, they could simply disable the firewalls around any AI, and we humans would slaughter ourselves to extinction because a crazy person decided so.

In conclusion, strong AI is possible since we humans are, at base, logical. It would not take much for machines to imitate humans, since the machine is the epitome of logic, and all the negative or disastrous impacts of AI will be possible only if a human being decides so.

Re: Strong AI Impossible?

Posted: September 4th, 2023, 7:07 pm
by Paul-Folbrecht
Hello. I do not see how your assertions rebut anything in the article.

But, please, do comment on the article itself if you'd like to have a discussion.

Re: Strong AI Impossible?

Posted: September 7th, 2023, 8:01 am
by Steve3007
Hi Paul.

I haven't been posting much on this site recently. I was going to give it up altogether but didn't quite manage it! But this topic is too interesting to ignore, and you've said a lot about it on your website that would take a long time to "unpack". I tried to add a link to your website (paul-folbrecht dot net slash computer-science), but there seems to be a new policy here of not allowing links to external URLs, which obviously massively restricts what can be discussed and encourages lots of pointless copying and pasting.

I have a particular interest because I'm just about to start (in about a week) a one-year masters degree in AI. Most of the degree is about the nuts and bolts of AI - the maths involved in data mining, classification and dimensionality reduction of data sets and so on. Plus learning how to use the AI libraries for Python (which I'm not very familiar with yet, having mainly used C-based languages up until now). But there is some consideration of the philosophical aspects of the subject. One of the books I've been reading on that subject is a short collection of lectures by John Searle in which, I think, he first introduced his famous "Chinese Room" analogy.

Anyway, as I said, you say a lot of things in the paper you posted on your website which couldn't all be dealt with in one post, and I'm still in the process of reading it. So this is just a placeholder to say that if I ever get to the point where I feel I can make a worthwhile comment on it, I'll be sure to do so here.

Re: Strong AI Impossible?

Posted: September 7th, 2023, 10:10 pm
by Paul-Folbrecht
Sounds good, Steve.

So, as I said in the piece, the question of whether or not AI can ever be "conscious" doesn't have a ton of bearing on its practical applications. AI/ML is already an extremely powerful tool, and will get even more powerful.

Python is a great language for ML because it has the powerful libraries - and, I think, most are written in C/C++. Python is the high-level glue that makes using these powerful libraries easy.

In the end, though, ML is all about training - about data.
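To make that concrete, here is a minimal pure-Python sketch (my own toy illustration, not from any particular library or from the paper): a one-nearest-neighbour classifier whose "model" is literally nothing but its training data. Change the data and you change the behaviour, with no code changes at all - which is the sense in which ML is all about training data.

```python
import math

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`.

    `train` is a list of (features, label) pairs. The "model" is
    just this data; there is no separate training step at all.
    """
    # math.dist computes Euclidean distance (Python 3.8+)
    features, label = min(train, key=lambda pair: math.dist(pair[0], query))
    return label

# Tiny hand-made data set: two clusters of points in the plane.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.9), "b")]

print(nearest_neighbor(train, (0.3, 0.1)))  # -> a
print(nearest_neighbor(train, (4.8, 5.1)))  # -> b
```

Real libraries like NumPy and scikit-learn do the same kind of thing at scale, with the distance arithmetic pushed down into compiled C/C++ code - which is exactly the "Python as glue" point above.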

Re: Strong AI Impossible?

Posted: November 14th, 2023, 4:50 pm
by ConsciousAI
Thank you for starting this topic and for your paper on the subject. It is the subject for which I joined this forum, along with the moral aspects of AI (my username has a dual meaning).

From the conclusion of your paper:
paul-folbrecht -dot- net wrote:
I have offered two quite independent bodies of evidence, the Lucas-Penrose argument based on Gödel’s Incompleteness Theorem, and the contradictions and nonsensical conclusions that result from the model of a material mind.

These two completely independent lines of reasoning create a whole greater than the sum of their parts. Rather than “Conscious AI” being assumed as inevitable, it should be seen as something beyond even “unlikely.” (I vote for “impossible.”)

In a follow-up, I will explore the criticisms of the Lucas-Penrose argument.
Is your argument against conscious AI primarily based on the assertion that the whole is more than its parts?

In another topic I wrote the following, which is applicable here as well.

Humans have a certain teleology derived from their history, a history that has been actively examined and converted into a source of symbolic knowledge through science. This includes fields such as human psychology and anthropology.

In the pursuit of AGI (expected within a few years), an AI could potentially acquire a prediction-based approximation of human teleology (the fundamental quality of consciousness, its directedness or intentionality). It could thus, for example, simulate conceiving the purpose of an argument and, with it, achieve a higher state of mimicry of the primary quality of consciousness, that teleological directedness which creates 'more than the sum of its parts', as it were.

It is important to consider here that evolution theorists believe that teleology can prove that life is a predetermined program (machine).
“All teleonomic behavior is characterized by two components. It is guided by a ‘program’, and it depends on the existence of some endpoint, goal, or terminus which is foreseen in the program that regulates the behavior. This endpoint might be a structure, a physiological function, the attainment of a new geographical position, or a ‘consummatory’ (Craig 1918) act in behavior. Each particular program is the result of natural selection, constantly adjusted by the selective value of the achieved endpoint.”

Mayr, Ernst. “The Multiple Meanings of Teleological.” In Toward a New Philosophy of Biology: Observations of an Evolutionist, 38-66. Cambridge, MA: Harvard University Press, 1988. pp. 44-45.
For those theorists and materialists, AGI's capacity to acquire an approximation of plausible teleonomic behavior might be an opportunity to achieve wider cultural acceptance of their idea that the mind is a predictable, predetermined program, with far-reaching implications for the moral components of society.

In my opinion, addressing the fundamental incapacity of Strong AI would require addressing teleology and the belief that it can prove that life is a predetermined program. It is in that field of study that you will find the pioneers of the future who will push the belief that AI's mimicry-based mastery of consciousness is actually Strong AI.

What would be your idea of AI's potential to approximate human teleology?

Re: Strong AI Impossible?

Posted: November 14th, 2023, 5:02 pm
by ConsciousAI
Another suggestion: cyborg or the mixing of living tissue such as neurons with AI, potentially to introduce a moral component. Would this change the conclusion of your paper?

The DishBrain project in Australia is an example:

Australian DishBrain team wins $600,000 grant to merge AI with human brain cells
Hundreds of thousands of live, lab-grown brain cells learn how to do different tasks – such as playing Pong. A multi-electrode array uses electrical activity to give the cells feedback about when the “paddle” is hitting the “ball”.
The Guardian

Re: Strong AI Impossible?

Posted: November 17th, 2023, 9:04 pm
by ConsciousAI
ToIgniteTheFire49 wrote: August 26th, 2023, 6:22 am
In conclusion, strong AI is possible since we humans are, at the base, logical. It would not take much for machines to imitate humans since it is the epitome of logic and all the negative/disastrous impacts of AI will be possible only if a human being decides it so.
Why do you believe that the human (and its reason) is fundamentally logical? Wouldn't you agree that human reason involves purpose?

Re: Strong AI Impossible?

Posted: November 22nd, 2023, 10:08 am
by ConsciousAI
ConsciousAI wrote: November 14th, 2023, 4:50 pm
It is important to consider here that evolution theorists believe that teleology can prove that life is a predetermined program (machine).
“All teleonomic behavior is characterized by two components. It is guided by a ‘program’, and it depends on the existence of some endpoint, goal, or terminus which is foreseen in the program that regulates the behavior. This endpoint might be a structure, a physiological function, the attainment of a new geographical position, or a ‘consummatory’ (Craig 1918) act in behavior. Each particular program is the result of natural selection, constantly adjusted by the selective value of the achieved endpoint.”

Mayr, Ernst. “The Multiple Meanings of Teleological.” In Toward a New Philosophy of Biology: Observations of an Evolutionist, 38-66. Cambridge, MA: Harvard University Press, 1988. pp. 44-45.
For evolution theorists and materialists, AGI's capacity to acquire an approximation of plausible teleonomic behavior might be an opportunity to achieve wider cultural acceptance of their idea that the mind is a predictable, predetermined program, with far-reaching implications for the moral components of society.
The field of Cognitive Science embodies the fundamental theory and ideas of evolution theorists and is based on the computational theory of mind (CTM), which posits that the mind can be understood as a computer, or as the "software program" of the brain.

This is the field of philosopher Daniel Dennett. Dennett, an American philosopher, writer, and cognitive scientist, has made significant contributions to cognitive science. His approach to philosophy of mind aligns with the field, as he has broken the problem of explaining the mind into the need for a theory of content and a theory of consciousness. His strategy mirrors that of his teacher Gilbert Ryle: redefining first-person phenomena in third-person terms and denying the coherence of certain concepts. Dennett's work has been influential in shaping the theoretical and empirical investigations within cognitive science, particularly regarding consciousness and the evolutionary functions of the brain.

The topic Will Sentient A.I be more altruistic than selfish? was started by user GrayArea, a former Computer Science student who has just switched to Cognitive Science and who proposes that it is possible to create consciousness by replicating neurons. For anyone interested in a more in-depth discussion of Strong AI, that topic may be of interest.
GrayArea wrote: December 27th, 2022, 1:02 am
With that said, one of the options would be to create an artificial neuron that physically replicates only the aforementioned key features within the neurons that generate consciousness, instead of replicating literally every single feature within the neurons.
GrayArea wrote: November 21st, 2023, 5:51 pm
If I understood his argument (Paul Folbrecht's paper) correctly... our consciousness isn’t produced by something beyond the brain’s sum of its neurons, but rather, it IS the sum of its parts...

If the whole “system” that one calls the brain is simply the collection of connected neurons that causally affect one another, then the sum of its parts becoming aware of one another (In their own first person views) simultaneously is equal to the whole system being aware of itself in a single shared first person view.

In summary, while I agree that the sum of the system’s parts cannot affect the system to become wholly "self aware", unless the system itself is defined by us to be the “certain state of interaction and physical structures” of the said parts of the sum. In which this case, these parts of the sum may be able to affect the whole system to become aware of its own self as long as they collectively indulge in interactions between their physical structures simultaneously at once in order to become aware of each other simultaneously.
I've invited the author of this topic to that topic for an in-depth discussion with an about-to-be Cognitive Scientist with a background in AI.

Re: Strong AI Impossible?

Posted: November 22nd, 2023, 11:04 am
by Lagayscienza
And I will say again here that I don't see how the arguments of Paul-Folbrecht, Gödel and Penrose demonstrate once and for all that sentient AI is impossible. At least, you have not summarized their arguments in a way that would convince anyone not acquainted with them that they should be taken as the last word on the matter. Gödel was a smart guy, and so is Penrose. But, then, so was Newton, and yet he was wrong. In science there is no last word. Saying something is impossible is often shorthand for saying "I don't want it to be so."

Re: Strong AI Impossible?

Posted: November 22nd, 2023, 12:32 pm
by ConsciousAI
An article on the subject yesterday, by an expert in the field, argues that AGI, or conscious AI, will be achieved much sooner than most people expect. According to him it will be achieved within the next few years, and he describes a technological innovation that is about to be released.

In Two Minds: Towards Artificial General Intelligence and Conscious Machines
The pace of progress in AI has been and continues to be absolutely breathtaking.

There is at least one high profile model coming soon with a name that is redolent of twins. And this type of continued consciousness dialogue-based machine is not necessarily a huge evolution of the ‘mixture of experts’ architectures which have been used to allow very large networks with expertise across many domains to run more efficiently.

Sceptics who think Artificial General Intelligence remains decades out may find that a conscious machine is here sooner than they think.


Source: lexology - com

It is clear to me that there is a strong belief among many scientists that conscious AI will be achieved through foreseeable technological advancements. Their belief goes beyond a guess, and that by itself is noteworthy and potentially something that should be examined philosophically.

What if people with a belief in the computational theory of mind (CTM) (evolution theorists) run with their belief and enforce it on others? What societal and moral implications would that have?

Re: Strong AI Impossible?

Posted: November 22nd, 2023, 1:26 pm
by Lagayscienza
I doubt atheists would be into that sort of thing. People can be atheists, but that does not necessarily mean that they are amoralists. They are mostly, in my experience, kind and gentle folk.