Steve3007 wrote:I think the only way to consistently hold the view that those features could never exist in a manufactured object is to be some form of philosophical dualist. Or at least, to not be a philosophical materialist. As far as I can see, that is the only way to rationally hold the view that there is something in the structure of things like human brains that is forever beyond the reach of manufactured structures. You'd have to believe in the existence of some kind of non-material spirit or soul or whatever and you'd have to decree that this spirit/soul stuff cannot ever exist in manufactured structures but can only exist in naturally evolved biological structures. Possibly you might believe, as many do, that it only exists specifically in humans.
Count Lucanor wrote:First, that’s a false dilemma. As Searle once noted, this argument (considering its context) implies that the question of whether the brain is a physical mechanism that determines mental states is the same question as whether the brain is a digital computer. But they are not the same question, so while the latter should be answered with a NO, the former should be answered with a YES. That means one can deny that computational theory solves the problem of intelligence, while at the same time keeping the door closed to any dualism of the sort you’re talking about.
In using general terms like "manufactured objects" and "manufactured structures" I was deliberately referring to more than just the specific subset of those objects that consists of computers running software. So I disagree that the words of mine you quoted presented a false dilemma.
I think, as a first step in thinking about the possibility or otherwise of genuine artificial intelligence, we all ought to be able to at least agree that it makes no sense for a non-dualist/materialist to say that features such as intelligence, consciousness, emotions, etc.
could never exist in manufactured objects (as opposed to naturally evolved biological objects). Once we agree on that, we can go on to talk about specific subsets of those objects. Maybe we all agree with that already, so maybe you think I'm attacking a straw man. But reading through a lot of the posts in this topic, it doesn't appear so. Although, from your words above, you, for one, do appear to agree with it.
Count Lucanor wrote:Secondly, even though trying to emulate brain operation stays within the problem of emulating a physical system, human technical capabilities are not infinite, so we can’t predict that it will happen. Now, if researchers committed to achieving that result were focused on that goal, even discarding trending approaches that do not actually work in order to try other technologies, we could at least hope that they will achieve it some day. But the fact is that they are only following the path set by Turing and others, that is, the path of the computational theory of mind. That path is a dead end; it doesn’t take us where it promises.
Yes, they're not infinite. But as I said, I don't think the human brain (for example) is infinitely complex. Very, very complex for sure, but not infinitely so. So we must surely accept that if manufactured objects can be made to increase in complexity with time, then such objects could be as complex as human brains a finite time into the future.