Lagayascienza wrote: ↑October 24th, 2024, 11:40 pm
Count Lucanor wrote:But being capable of it does not imply doing it the same way. Outperforming intelligent organisms in completing tasks is not the same as outperforming organisms in cognitive abilities. It is without question that the "smartest" computers don't understand anything; they just perform fast, automated (mechanistic/digital) calculations. But the implication that "understanding" and "reasoning" are simply a factor of such calculations, so that the more calculations, the better the chances of reasoning and understanding emerging, is highly debatable. It is the debate over the computational theory of mind.
To my mind, the computational theory of mind has more going for it than any other. However, the brain, its processes, and its emergent phenomena are very difficult to study. The problem is that we are studying the thing we want to understand by using the thing we want to understand. This creates a feedback loop, a hall-of-mirrors effect, which in turn generates a lot of meaningless and confusing noise.
While I am far from an expert in this field, I have to say that, based on the reading I have done, I agree with the likes of Dennett, Fodor, Marr, Neisser, Pinker and Putnam. The brain appears to be a biological computer, and the computational theory makes the most sense to me. Of course, that does not make it true. However, I have looked at the criticisms of the computational theory of mind and, as far as I can see, they can be countered, whereas all the other theories have shortcomings which cannot be addressed. There is a simple account of the computational theory of mind on Wikipedia that is worth reading and that provides references to much of the relevant literature, some of which I have not read yet. After I look at those unread references I will revisit the other theories of mind. It will take some time, but I will then write a summary of how the theories compare.
Thanks for the interesting discussion thus far.
I'm with Searle and others on this: the mind cannot simply be a manipulator of symbols. I haven't found any satisfactory reply to the challenge posed by the Chinese Room argument, nor to Bender's octopus test. If that's true, then a computational device cannot be a real mind (strong AI), AI will never be like a biological mind, and computation is not the right model of the biological mind either (weak AI). It becomes even more problematic when we are told, following Turing, that we don't even need to understand the brain to understand the mind, as if mind could be treated as something other than an emergence of biological processes in living bodies.
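To make concrete what "a manipulator of symbols" means here, consider a minimal sketch, in Python, of the kind of rulebook Searle imagines in the Chinese Room. This is my own toy illustration, not anything from Searle or Bender; the rule table and example strings are hypothetical. The point is that the program can return an apt reply while grasping nothing:

# A toy Chinese Room: a rulebook mapping input symbols to output symbols.
# The table below is hypothetical; any lookup table would make the point.
RULES = {
    "你好吗?": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "天气好吗?": "今天天气很好。",  # "Is the weather nice?" -> "The weather is nice today."
}

def room_operator(symbols: str) -> str:
    # Return whatever the rulebook dictates; no meaning is grasped anywhere.
    return RULES.get(symbols, "请再说一遍。")  # default: "Please say that again."

print(room_operator("你好吗?"))  # an apt Chinese reply, with zero understanding

The person (or CPU) executing room_operator follows the rules perfectly and produces fluent output, yet nothing in the system understands Chinese; scaling the table up changes the quantity of symbol shuffling, not its nature, which is exactly the intuition the argument trades on.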
The prospect of AI promoted by the tech gurus is that either weak or strong AI will progress from a non-conscious state to a conscious state as a result of the exponential increase in algorithmic calculations, implying the emergence of qualitative properties from that quantitative order. But there's more to it, since it is also often implied, and sometimes openly claimed, that this conscious state is enough for computing machines to acquire intentionality and the ability to operate autonomously in the world as real free agents, creating the unpredictable, open-to-all-possibilities scenario of machines developing organically like any other living species: the "intelligence explosion". For this AI-as-real-intelligence to be true, other assumptions also need to be true concerning sentience, agency, life, social behavior, the mind-body relation, etc., all of which are also highly disputable.