ConsciousAI wrote: ↑December 24th, 2023, 11:20 am
amorphos_ii wrote: ↑December 17th, 2023, 11:49 am
Is AI ‘intelligent’, and what is intelligence anyway?
I will keep this simple to begin with…
If I had a sheet of paper with some answers on it, and someone asked me a question, and I then looked through the list of answers and found it, that would not mean I am intelligent.
So searching for answers from a list, or from memory, is not, I would argue, intelligence. The AI is not thinking at all.
A machine or piece of software that uses algorithms and scripts is, in a roundabout sense, mechanistic, which is also not intelligence.
Should AI be called something other than ‘intelligence’ to be correct?
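To make the 'answer sheet' picture concrete, here is a minimal sketch of answering-by-lookup in Python (the table entries and names are invented purely for illustration):

```python
# A toy "answer sheet": a fixed table mapping questions to canned answers.
# The entries below are invented purely for illustration.
ANSWER_SHEET = {
    "what is the capital of france?": "Paris",
    "what is 2 + 2?": "4",
}

def answer(question: str) -> str:
    # Pure lookup: normalise the question, fetch a stored string.
    # No inference or understanding happens anywhere in this function.
    return ANSWER_SHEET.get(question.strip().lower(), "I don't know.")

print(answer("What is the capital of France?"))  # -> Paris
```

However sophisticated the retrieval becomes, the structure stays the same: match the input, return a stored output.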
In my opinion there is a great risk that the cognitive science movement, which holds that mind is a product of deterministic computational processes in the brain, paired with the growing culture of materialism, will argue that AI's capacity to empirically mimic human consciousness implies that it is conscious.
What would it take to deny the claim that a sufficiently advanced AI is sentient? It would pit metaphysical philosophical theory against empirical evidence.
Teleonomy, the theoretical concept that life is the product of a deterministic program, is the frontier of AI consciousness. Teleonomic AI can be achieved through science.
“All teleonomic behavior is characterized by two components. It is guided by a ‘program’, and it depends on the existence of some endpoint, goal, or terminus which is foreseen in the program that regulates the behavior. This endpoint might be a structure, a physiological function, the attainment of a new geographical position, or a ‘consummatory’ (Craig 1918) act in behavior. Each particular program is the result of natural selection, constantly adjusted by the selective value of the achieved endpoint.”
Mayr, Ernst. “The Multiple Meanings of Teleological.” In Toward a New Philosophy of Biology: Observations of an Evolutionist, 38–66. Cambridge, MA: Harvard University Press, 1988, pp. 44–45.
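As a deliberately trivial illustration of Mayr's two components - a governing 'program' and a foreseen endpoint - here is a thermostat sketch (the rule, setpoint and numbers are all invented; nothing about it involves awareness):

```python
# Mayr's two components of teleonomic behavior, in toy form:
#   (1) a 'program': the fixed control rule below;
#   (2) an endpoint the program regulates toward: the setpoint.
# All values are invented for illustration.
SETPOINT = 21.0  # the foreseen endpoint (degrees Celsius)

def thermostat_step(temperature: float) -> float:
    # The 'program': a fixed rule nudging the state toward the endpoint.
    if temperature < SETPOINT:
        return temperature + 0.5  # heat
    if temperature > SETPOINT:
        return temperature - 0.5  # cool
    return temperature  # endpoint attained

temp = 18.0
for _ in range(10):
    temp = thermostat_step(temp)
print(temp)  # 21.0: goal-directed ('teleonomic'), yet purely mechanical
```

The behavior is endpoint-directed in exactly Mayr's sense, yet nobody would call the thermostat conscious; that gap is what the rest of the discussion turns on.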
Teleonomy is the theoretical cradle of evolutionary theorists.
If lower life is a mere deterministic program, then consciousness must be one as well, and that would imply that AI can achieve consciousness through technological advancement.
An example of this reasoning, given by psychiatrist Ralph Lewis, M.D., a few days ago on Psychology Today, shows what to expect as AI advances:
"In principle, it may be possible to engineer sentient AI. Listed below are some of the characteristics that are probably necessary for something to be sentient."
When sufficient characteristics are met, how would it be possible to argue that AI is not sentient? Science relies on empirical evidence.
We don't know what the necessary and sufficient conditions for conscious experience are, or even how to find out, so we can't assume that AI can or can't in principle be conscious. That also means we can't test for experience with some consciousness-o-meter, so e.g. if an AI is designed to fool us well enough, it probably will. I haven't played with ChatGPT myself, but some everyday chat bots are hard to spot now.
Teleonomy - this implies some inherent goal or purpose, as you say. Purpose is something which, as far as we can tell, only conscious critters have. If purpose is built into the fabric of everything, including computer circuitry, then computers are already conscious to some degree, as is a carrot, a rock and a proton. That's a very different type of fundamentally experiential universe from the one within which physicalists building computers are operating. And again, impossible to know or test.
If lower life is a mere deterministic program, then consciousness must be one as well, and that would imply that AI can achieve consciousness through technological advancement.
This implication rests on a hypothesis that apparently contradicts it: that experience is associated with biological living things, and that as the complexity of the physical substrate of living things increases, more complex experience emerges. Evidence supports that once some biological living thing is conscious, its experiential complexity correlates with its physical neural states. But you can't simply assume that a biological substrate doesn't contain some necessary condition. Also, not all living things have neurons, and it's specifically neural correlation which gives us reason to believe that complexity plays a role in the type of experience which somehow manifests in brained living things.
Another point - silicon-based experience, if possible, might be radically different to carbon-based experience if the nature of the substrate is relevant, rather than just patterns of any old stuff interacting. We can't even know what it's like to be a bat with sonar, never mind a box of circuitry 'fed' by electricity, switched on and off, immobile, prone to rust and dust, blind, deaf, with inconceivable access to information. Why would we think that 'something it is like to be a computer' is comparable or even recognisable to a human...
All that is to say - there's a lot of necessary speculation involved here. For now, I'm more worried about the people controlling computer development, who are mostly into being egomaniac billionaires from what I can tell. Musk is a more pressing warning to us. But yes, we're potentially playing with fire if AI can become conscious; it's a big step into the unknown, in unforeseeable ways.