Thank you for starting this topic and for your paper on the subject. It is this very subject that brought me to this forum, alongside the moral aspects of AI (my username has a dual meaning).
From the conclusion of your paper:
paul-folbrecht -dot- net wrote:I have offered two quite independent bodies of evidence, the Lucas-Penrose argument based on Gödel’s Incompleteness Theorem, and the contradictions and nonsensical conclusions that result from the model of a material mind.
These two completely independent lines of reasoning create a whole greater than the sum of their parts. Rather than “Conscious AI” being assumed as inevitable, it should be seen as something beyond even “unlikely.” (I vote for “impossible.”)
In a follow-up, I will explore the criticisms of the Lucas-Penrose argument.
Is your argument against conscious AI primarily based on the assertion that the whole is more than its parts?
In another topic I wrote the following, which is applicable here as well.
Humans have a certain teleology derived from their history, a history that has been actively examined and converted into a source of symbolic knowledge through science. This includes fields such as human psychology and anthropology.
In the pursuit of AGI (expected by some within a few years), an AI could potentially acquire a prediction-based approximation of human teleology, the fundamental quality of consciousness: its directedness or intentionality. It could thus, for example, simulate conceiving the purpose of an argument and thereby achieve a higher state of mimicry of that primary quality of consciousness, teleological directedness, creating 'more than the sum of its parts', as it were.
It is important to consider here that some evolutionary theorists believe that teleology can prove that life is a predetermined program (a machine).
“All teleonomic behavior is characterized by two components. It is guided by a ‘program’, and it depends on the existence of some endpoint, goal, or terminus which is foreseen in the program that regulates the behavior. This endpoint might be a structure, a physiological function, the attainment of a new geographical position, or a ‘consummatory’ (Craig 1918) act in behavior. Each particular program is the result of natural selection, constantly adjusted by the selective value of the achieved endpoint.”
Mayr, Ernst. “The Multiple Meanings of Teleological.” In Toward a New Philosophy of Biology: Observations of an Evolutionist, 38-66. Cambridge, MA: Harvard University Press, 1988, pp. 44-45.
For those theorists and materialists, AGI's capacity to approximate plausible teleonomic behavior might be an opportunity to win wider cultural acceptance of their idea that the mind is a predictable, predetermined program, with far-reaching implications for the moral components of society.
In my opinion, addressing the fundamental incapacity of Strong AI would require addressing teleology and the belief that it can prove that life is a predetermined program. It is in that field of study that you will find the future pioneers who will push the belief that AI's mimetic mastery of consciousness is actually Strong AI.
What would be your idea of AI's potential to approximate human teleology?