ConsciousAI wrote
You say that AI's ability to create philosophy - to determine a path forward and to control its own evolution - is a goal.
What is the reasoning behind the idea that an AI's being able to do that can count as a goal at all? This is a philosophical question that demands an answer, and the problem it addresses need not have anything to do with AI per se.
An interesting thought experiment: the question is, what is it about AI that would prohibit something that lies within human possibilities, including the capacity for self-modification? Calling it evolution at this point just complicates a simpler matter. Evolution without a teleology is just modification for adaptation; adaptation is reducible to continuity coupled with pragmatic success; and pragmatic success always raises the value question: to what end? Can AI have an "end"?
Of course AI can have an end, a goal, a purpose, as long as one conceives of such a thing as a language phenomenon. Of course, we already assume AI is not organic, and it certainly does not have the physical constitution to produce consciousness like ours; it would be like saying iron can produce the same properties as water vapor. But suppose, on the other hand, that AI could become an agency that produces language: an internal system of symbolic dialectics, with conditionals, negative and positive assertions, and so forth. Suppose this, as with us, were part of an inner constitution, an AI psychology, if you will, that possesses a pragmatic interface for dealing with exterior demands. Then, within this interiority, it would be able to self-improve, self-modify, self-correct, and the like, as well as generate its own "ends", for all of this is done in a language matrix. A toy sketch of such an interiority follows.
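To make the picture a little more concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the class name Interiority, the way assertions and conditionals are stored, and the crude goal-revision step are assumptions of the thought experiment, not a claim about how any actual AI system works.

```python
# Toy sketch: an "interiority" as a store of symbolic assertions and
# conditional rules that the agent can revise, including its own "ends".
# All names here are hypothetical illustrations of the thought experiment.

class Interiority:
    def __init__(self):
        self.assertions = set()             # positive assertions, e.g. "daylight"
        self.negations = set()              # negative assertions
        self.rules = []                     # conditionals: (premises, conclusion)
        self.ends = ["maintain_coherence"]  # self-given goals, revisable

    def assert_(self, symbol, value=True):
        """Record a positive or negative assertion, keeping the store consistent."""
        if value:
            self.negations.discard(symbol)
            self.assertions.add(symbol)
        else:
            self.assertions.discard(symbol)
            self.negations.add(symbol)

    def add_rule(self, premises, conclusion):
        """A conditional: if all premises are asserted, conclude the conclusion."""
        self.rules.append((frozenset(premises), conclusion))

    def synthesize(self):
        """Forward-chain: derive new assertions from the rules (a crude 'synthesis')."""
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.assertions and conclusion not in self.assertions:
                    self.assert_(conclusion)
                    changed = True

    def revise_ends(self):
        """Self-modification of goals: drop an end once it is satisfied,
        and adopt new ones derived from the current state."""
        self.ends = [e for e in self.ends if e not in self.assertions]
        for symbol in sorted(self.assertions):
            derived = f"preserve_{symbol}"
            if derived not in self.ends:
                self.ends.append(derived)


# Usage: the "pragmatic interface" feeds in exterior facts; the interiority
# synthesizes and revises its own ends entirely within its symbol system.
mind = Interiority()
mind.add_rule({"sensor_warm", "sensor_bright"}, "daylight")
mind.assert_("sensor_warm")
mind.assert_("sensor_bright")
mind.synthesize()
mind.revise_ends()
print(mind.assertions)  # contains sensor_warm, sensor_bright, daylight
print(mind.ends)        # self-generated "ends" derived from its own state
```

The point of the sketch is only that self-correction and the generation of "ends" happen inside the symbol system itself, which is what the language-matrix claim amounts to.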
It has to be realized that this would certainly not be like us. But we can imagine mechanical features delivering, through a mechanical body, electrical streams of "data" that are released into a central network in which they are "interpreted" symbolically; and in this symbolic system there is analysis and synthesis and all of the complexity of what we call thought.
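A minimal sketch of that interpretive step, again with made-up stream names, thresholds, and symbols: raw numeric streams come in, and symbols come out, ready for the kind of analysis and synthesis sketched above.

```python
# Toy sketch: raw "electrical streams" of data arrive as numbers; a central
# interpretive step turns them into symbols. Stream names, thresholds, and
# symbols are all invented for illustration.

def interpret(streams):
    """Map raw data streams to (symbol, truth-value) pairs."""
    temps = streams["temperature"]
    return [
        ("sensor_warm", sum(temps) / len(temps) > 20.0),  # mean temperature
        ("sensor_bright", max(streams["light"]) > 0.5),   # peak light level
    ]

raw = {"temperature": [19.5, 21.0, 22.5], "light": [0.1, 0.7, 0.4]}
for symbol, value in interpret(raw):
    print(symbol, value)  # sensor_warm True, sensor_bright True
```

Such symbols could then be asserted into the earlier Interiority; the design point is that interpretation, not the causal stream by itself, is what gets the data into the symbol system.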
And so on. Just a rough idea, but to me it expresses an essential part of what it would take to make AI a kind of consciousness, consciousness being an interior "space" where thought and its symbols and rules gather to produce a "world". It would be a kind of Compu-dasein.
Of course, the two major questions of philosophy still abide: Why are we born to suffer and die? And how does anything in the "out there" of the world get into the "in here" of a brain? Compu-daseins will not care or experience affectivity; but as with us, the problem of knowledge about the world abides: brain things cannot "know" other things. Causality cannot generate an epistemology.