#469378
Pattern-chaser wrote: October 30th, 2024, 8:40 am
Sculptor1 wrote: October 29th, 2024, 2:05 pm So the idea that Artificial intelligence is not intelligent might be somewhat incongruous until you actually think about what we mean by the term "intelligent"...
Sy Borg wrote: October 29th, 2024, 8:18 pm Also, when it comes to intelligence, do we consider slime moulds to be intelligent, or something else? Are portia spiders actually intelligent? In each case, these organisms flexibly solve problems in a very limited sphere, adjusting for changing circumstances.
Yes, exactly. Do you have any idea what "intelligence" is? Specifically, do you have a definition of intelligence clear enough that it could be used, say, to program an AI to endow it with intelligence?

I don't, and I don't think anyone else does either, although I'm open to correction...? I would love to find that I'm mistaken in this, but I suspect I'm not.
No, I have no idea what "intelligence" actually is. Words are what they are designed to describe, and since we have this limitation of metaphor, we can only say what it is "like". What I can say is that AI is language processing, and this does not involve what you might call "understanding". You can soon fool any AI after some effort, and when that happens you know that AI is not what it appears to be. When you point out its failings, it always apologises and tries to put things right, but it always defaults back to the same mistake it already made. You see, the "apology" is just a collection of words. The AI has not "realised" anything about its error, as it can never appreciate the damage that making errors can lead to. It literally and metaphorically has "no skin in the game".

After some time you realise that AI is a sophisticated encyclopaedia, rather than an intellect.
#469398
Pattern-chaser wrote:My preoccupation with self-modifying code is really about the SkyNet story. When an AI is created, its program design will surely incorporate aims and constraints. If the AI is able to modify its aims, that could be scary. If the AI is able to modify its constraints, then that could be a lot scarier.

Such constraints might resemble Asimov's 3 Laws of Robotics, or something along those lines. And the aims, we might assume, will reflect what humans want from their AIs. If the AI is able and allowed to modify these basic characteristics, then we (humans) could be in a lot of trouble.
OK, fair enough. I see your point.
#469399
Count Lucanor wrote:
Steve3007 wrote:In using general terms like "manufactured objects" and "manufactured structures" I was deliberately talking not just about the specific subset of those objects which consist of computers running software. So I disagree that the words of mine that you quoted presented a false dilemma.
However, you inserted that statement between 3 extensive paragraphs talking about "current trends" in AI technology. You even explicitly endorsed the views of those who talk in this forum about AI having to do with advancing research on computational devices designed under the assumption that the computational theory of mind is true. So, it makes a lot of sense to understand your statement as referring specifically to the subset of computational devices.
Yes, fair point. Current trends are, as far as I know, exclusively towards the use of computer hardware running software, as opposed to other types of manufactured devices. But I think, so long as humans continue to explore and research, all available avenues will probably eventually be explored, including such things as building physical devices inspired by neurons as well as trying to replicate the behaviour of interconnected neurons in software. I wouldn't be surprised if a trawl through Google Scholar finds a paper by someone already trying to do just that.

Regarding the possibility of genuine AI in the specific case of computers running software: I've read Searle's little book "Minds, Brains and Science" and he makes an interesting argument about the nature of algorithmic processes and syntax versus semantics. But I think I'll read it again to properly remind myself of the argument. I'm not sure of it yet.


As you no doubt know, artificial neurons represented in software simulate what are taken to be the essential properties of biological neurons. That is, weighted inputs whose signals are added together and which, when they reach a threshold value, cause the output to fire, sending a signal to an input of another neuron. And the adjustments of those weights to continually "re-wire" the system. ("Neurons that fire together, wire together" as they say in AI circles). Of course, this is a vast simplification of the way that real neurons work, with their complex electro-chemical properties. So there may well be some important missed properties. But there's no reason why, in principle, more and more of the biochemical properties of real neurons couldn't be included.
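To make that concrete, here is a minimal sketch of that neuron model in Python (my own illustration, not any particular library's API; all names and numbers are invented): a weighted sum of the inputs is compared against a threshold, and a Hebbian-style update strengthens the weights of inputs that were active when the neuron fired.

import numpy as np

def step_neuron(inputs, weights, threshold=1.0):
    # Fire (output 1) if the weighted sum of the inputs reaches the threshold.
    return 1 if np.dot(inputs, weights) >= threshold else 0

def hebbian_update(inputs, weights, output, lr=0.1):
    # "Fire together, wire together": strengthen the weights of inputs
    # that were active when the neuron fired.
    return weights + lr * output * np.asarray(inputs)

inputs = np.array([1.0, 0.0, 1.0])
weights = np.array([0.6, 0.2, 0.5])
out = step_neuron(inputs, weights)              # 0.6 + 0.5 = 1.1 >= 1.0, so it fires
weights = hebbian_update(inputs, weights, out)  # weights become [0.7, 0.2, 0.6]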

The bigger question is this: Is it the case that no matter how accurately we think we're replicating the behaviour of real-world neurons, the very fact that we're doing this using software will always mean that the replication cannot be entirely accurate? If we're materialists then we must (I take it) believe that in principle a physical replica of a biological network of neurons could be manufactured. But not necessarily a software replica. And this (I think I recall Searle arguing) is because of the algorithmic nature of software - the fact that it executes a set of instructions.

This is the idea that I need to revise by re-reading Searle's book. (I read most of it at the start of the AI course and was familiar with the Chinese Room analogy previously, but need to go back to it now.)
#469400
Sculptor1 wrote: October 30th, 2024, 1:22 pm After some time you realise that AI is a sophisticated encyclopaedia, rather than an intellect.
Yes, I think of them as super-Google, but that amounts to the same thing. And yet, as others have commented, that's just today. There is a lot of work going on, and we don't know what the future holds. Maybe it even holds intelligent AI? We'll have to wait and see.


But I wonder if we're implementing AI too widely already? Many major websites now use AI as their 'help' function. No matter how much you want to, you can't get past it, to reach a human. There *are* no humans involved any more. So if you have a significant help request, you will receive no help, and there is no way to get any. This is a trivial complaint, not world-breaking at all. But it *is* a bit annoying... 😐
#469401
Steve3007 wrote: October 31st, 2024, 8:04 am As you no doubt know, artificial neurons represented in software simulate what are taken to be the essential properties of biological neurons. That is, weighted inputs whose signals are added together and which, when they reach a threshold value, cause the output to fire, sending a signal to an input of another neuron. And the adjustments of those weights to continually "re-wire" the system. ("Neurons that fire together, wire together" as they say in AI circles). Of course, this is a vast simplification of the way that real neurons work, with their complex electro-chemical properties. So there may well be some important missed properties. But there's no reason why, in principle, more and more of the biochemical properties of real neurons couldn't be included.
Interesting. It is my understanding, though, that the complexity of brains, and the capabilities of our multiply-connected biological neurons, depend as much (if not more) on the (inter-)connection pattern of those neurons. For example, it is currently thought that autism might result from minor differences in our neural maps.

Without ignoring the neurons themselves, which I feel sure would be a bad mistake, it seems much of the functionality of a human brain is down to the neural map of interconnections. This could be difficult to copy or emulate. Isn't it (currently) true that the number of artificial neurons in a NN is quite limited? Maybe tens of neurons, instead of tens of billions...?
NIH - National Library of Medicine wrote: There are approximately 100 billion neurons in a mature human brain.

[...]

Each neuron can make connections with more than 1000 other neurons, thus an adult brain has approximately 60 trillion neuronal connections.
#469402
Count Lucanor wrote:But OK, let's say you have cleared that up and you are referring to the set of physical, manufactured objects, of which computational devices are one subset, the other subset being all other non-computational manufactured devices. That being the case, the fact is that there is no current trend and no current research dealing with the prospect of intelligence in manufactured, non-computational devices that is not informed by the Turing approach and the computational theory of mind. ALL AI research available is about computational devices, so your statement referring to "the set of manufactured objects" becomes irrelevant. The subset of "manufactured structures" that looks for AI in non-computational devices does not exist yet.
You may well be right. I don't know of any research into the creation of those non-computational devices. But I think as long as research into AI continues, someone will try that. There appears to be a huge amount of research going on in AI. I presume that's largely because we're still at the peak of the hype-cycle so including "AI" in your research work very much improves the chances of funding. But even after the hype dies down, so long as research continues, I think it will (so to speak) diffuse outwards into all possible areas of interest. But I may be wrong.
Count Lucanor wrote:Yes, I agree that if we're to look for artificial intelligence, we have to look for it in physical, manufactured objects. I add that it can only happen in non-computational devices and without trying to implement the computational theory of mind. I would challenge anyone to show me any existing research on that field, but I'm willing to risk my scalp here saying that there isn't.
I'll have a look on Google Scholar and see if I can find anything!

---
Count Lucanor wrote:
Steve3007 wrote:I guess when we talk about "serious consequences" here we're referring to actions taken by an AI that significantly hurt the interests of humans. e.g. actions leading to human deaths. So, as you said, let's assume for the sake of discussion that some AI software gains awareness (and assume that we know what we mean by "awareness"!) And let's leave aside the question of whether "self-modifying code" is particularly relevant to it gaining that (as I've been discussing with Pattern-chaser). Then: How might it harm our interests?

A lot of people would say that since it's still just a computer program running on hardware manufactured, maintained and powered by humans, we can just "pull the plug", or take an axe to the hardware, or whatever. One issue with that is that if this hypothetical software-based intelligence were distributed across the world's internet-connected hardware, it might be difficult to do that without causing great harm to human interests. The cure might be as bad as the problem. We've reached a stage where the entire world's economy is critically dependent on computing resources. Of course, we survived before that was true and therefore probably could do so again. But not if it all happened very quickly.
OK, that's perfect. Now, let's consider (again) what it means to have an internet-connected world from the physical point of view. It would require all structures and infrastructures currently owned and managed by multiple private and public agents to be interconnected in a way that is entirely subservient to the AI network; that means everything from the planning and design stages to the building, maintenance and operation of such structures. Take, for example, the power system that allows the operation of all electronic devices, constituted by three main components: power generation, transmission and distribution lines, all on private or state-owned land. The only way that an AI network with awareness could get full control of this is by deliberate human action, involving thousands of agents with multiple interests, all agreeing or being forced to implement this connection. So, in the worst-case scenario that you have posited, the aware AI network alone would not suffice; it would take the AI network plus humans, in fact quite a lot of humans with quite a lot of power, so much power that we would actually need to fear the humans, not the AI network, which remains, like all technologies in the past, instrumental to humans. Yes, humans harm other humans, but the prospect of an aware AI being in full control and able to harm human interests without the participation of humans simply belongs to sci-fi literature.
Yes, of course it takes all kinds of human activity to maintain the hardware of a computer network on which the software runs. But my point in that passage was that if an AI distributed across the internet were possible, then it could harm human interests simply because, as I said, the cure might be as bad as the problem. As I said, the world's economy is now so dependent on this technology for such things as the logistics of food distribution (and almost everything else) that we couldn't just "pull the plug". Saying that hypothetical AI cannot harm human interests without the participation of humans is a bit like saying a cancer or a virus can't harm your body without your participation. You're right. It can't. If you refuse to participate by "switching off" that body on which the pathogen relies for its survival, then you kill it. But that's not much consolation for you!
#469404
Pattern-chaser wrote:Interesting. It is my understanding, though, that the complexity of brains, and the capabilities of our multiply-connected biological neurons, depend as much (if not more) on the (inter-)connection pattern of those neurons. For example, it is currently thought that autism might result from minor differences in our neural maps.
The inter-connection of the neurons is absolutely key, yes. A single isolated neuron (or a collection of disconnected neurons) can't do much. That's why both real and artificial neural networks are networks.
Without ignoring the neurons themselves, which I feel sure would be a bad mistake, it seems much of the functionality of a human brain is down to the neural map of interconnections. This could be difficult to copy or emulate. Isn't it (currently) true that the number of artificial neurons in a NN is quite limited? Maybe tens of neurons, instead of tens of billions...?
It is, yes. And as I said, that map develops - rewires - as the network learns. "Neurons that fire together, wire together". (Sorry to repeat that, but it's the catchiest jingle in AI research. It's from a guy called Donald Hebb, who's famous for the concept of "Hebbian learning" - a simple way in which NNs learn by adjusting their weights.)

The human brain has somewhere around 80 billion neurons. I think the largest ANNs currently have of the order of tens of millions. You can easily create an ANN with thousands of neurons right now on the computer you're using to write your messages. Just create a Google Colab account and look up "Keras", an easy-to-use, free software library for building, training and testing neural networks (in Python).
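For example, here is a minimal sketch of that (assuming TensorFlow/Keras is available, as it is on Colab; the layer sizes and toy data are arbitrary illustrations, not anything meaningful):

import numpy as np
from tensorflow import keras

# Two hidden layers of 1024 artificial neurons each.
model = keras.Sequential([
    keras.layers.Input(shape=(64,)),
    keras.layers.Dense(1024, activation="relu"),
    keras.layers.Dense(1024, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Random toy data, just to show the training cycle end to end.
x = np.random.rand(256, 64)
y = np.random.randint(0, 2, size=(256, 1))
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
model.summary()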

So there's a way to go yet, but clearly ANNs of billions of neurons are eminently possible.
#469409
Pattern-chaser wrote: October 31st, 2024, 8:14 am
Sculptor1 wrote: October 30th, 2024, 1:22 pm After some time you realise that AI is a sophisticated encyclopaedia, rather than an intellect.
Yes, I think of them as super-Google, but that amounts to the same thing. And yet, as others have commented, that's just today. There is a lot of work going on, and we don't know what the future holds. Maybe it even holds intelligent AI? We'll have to wait and see.


But I wonder if we're implementing AI too widely already? Many major websites now use AI as their 'help' function. No matter how much you want to, you can't get past it, to reach a human. There *are* no humans involved any more. So if you have a significant help request, you will receive no help, and there is no way to get any. This is a trivial complaint, not world-breaking at all. But it *is* a bit annoying... 😐
I do not hold with any "what about the future" arguments.
If they were to be believed, we would have had hotels on the Moon and Mars by now, just based on the 1970s space programme.
What is needed for serious space travel is a radically different drive system. Not faster, not bigger, not more efficient, but completely different, able to make timely journeys to other stars.
What we have in AI is a highly sophisticated indexing and delivery system mimicking speech. We have progressed from sifting through piles of books to a Dewey decimal system, index, glossary and table of contents. But this is not an FTL drive; it's just a fast librarian who does not understand what he is reporting on.
#469410
subatomic wrote: December 23rd, 2023, 3:10 pm This post is very relevant to this quote from The Imitation Game, the movie about Alan Turing:

"Of course machines can't think as people do. A machine is different from a person. Hence, they think differently. The interesting question is, just because something, uh... thinks differently from you, does that mean it's not thinking? Well, we allow for humans to have such divergences from one another. You like strawberries, I hate ice-skating, you cry at sad films, I am allergic to pollen. What is the point of... different tastes, different... preferences, if not, to say that our brains work differently, that we think differently? And if we can say that about one another, then why can't we say the same thing for brains... built of copper and wire, steel?"

What is your definition of intelligence? Because for me, honestly, if an AI can pass the Turing test... I consider it intelligent. I personally believe there is no big separation between a "conscious" human mind and a machine that is just really, really good at pattern recognition. After all, I think we are all just machines that are really, really good at pattern recognition. AI is simulated neural networks, and we are neural networks.
AI is not a simulated neural network.

Rather a disappointing quote, if it is Turing. Clearly a man not versed in biology or philosophy; he is basically shooting the breeze.
There are very good experiential reasons why different humans have different preferences. That does not mean that they "think differently". What we do know is that whatever AI is, and whether it "thinks" at all, it thinks differently from humans: given the same inputs, one AI should come up with the same output as any other AI with the same architecture.
This is clearly a case of two completely separate things which are associated by name only. The question is really whether "intelligence" is the correct word to describe AI. I suggest that it is a light-hearted abuse of language to call such systems AI, and the emphasis should be on the word artificial. Perhaps "pseudo-intelligence" might be a better word.
#469412
Steve3007 wrote: October 31st, 2024, 8:04 am
Count Lucanor wrote:
Steve3007 wrote:In using general terms like "manufactured objects" and "manufactured structures" I was deliberately talking not just about the specific subset of those objects which consist of computers running software. So I disagree that the words of mine that you quoted presented a false dilemma.
However, you inserted that statement between 3 extensive paragraphs talking about "current trends" in AI technology. You even explicitly endorsed the views of those who talk in this forum about AI having to do with advancing research on computational devices designed under the assumption that the computational theory of mind is true. So, it makes a lot of sense to understand your statement as referring specifically to the subset of computational devices.
Yes, fair point. Current trends are, as far as I know, exclusively towards the use of computer hardware running software, as opposed to other types of manufactured devices. But I think, so long as humans continue to explore and research, all available avenues will probably eventually be explored, including such things as building physical devices inspired by neurons as well as trying to replicate the behaviour of interconnected neurons in software. I wouldn't be surprised if a trawl through Google Scholar finds a paper by someone already trying to do just that.

Regarding the possibility of genuine AI in the specific case of computers running software: I've read Searle's little book "Minds, Brains and Science" and he makes an interesting argument about the nature of algorithmic processes and syntax versus semantics. But I think I'll read it again to properly remind myself of the argument. I'm not sure of it yet.
I hope other avenues are explored as alternatives to the current AI hype. The problem, however, is that this is not merely a technical, scientific endeavor; it is more about companies trying to win the race to develop the new ubiquitous technology and monopolize a market (a new Google, Apple, Microsoft, etc.), or worse, to make big profits along the way even if they never achieve it. I'm sure Musk is not a fool, even though many of his announcements sound as if they come from a delusional mind, but every time he makes one, the AI fan base reacts and the stock value increases. That's basically what all the tech lords are doing now.

I remember some papers from Searle online, I think they were from some lectures, which summarize well his stance on this issue, you might want to check on that besides the books.
Steve3007 wrote: October 31st, 2024, 8:04 am As you no doubt know, artificial neurons represented in software simulate what are taken to be the essential properties of biological neurons. That is, weighted inputs whose signals are added together and which, when they reach a threshold value, cause the output to fire, sending a signal to an input of another neuron. And the adjustments of those weights to continually "re-wire" the system. ("Neurons that fire together, wire together" as they say in AI circles). Of course, this is a vast simplification of the way that real neurons work, with their complex electro-chemical properties. So there may well be some important missed properties. But there's no reason why, in principle, more and more of the biochemical properties of real neurons couldn't be included.
Many things, if not almost anything, can potentially be simulated with computer software: from the laws of motion that had to be simulated in the old Gorilla.BAS game, to the simulation of hurricanes and other complex systems. But of course, that's because we understand to a great extent the forces and parameters involved, unlike mental processes, of which we know very little. The computational theory was at some point a reasonably good shot, but experience has shown that it isn't anymore. What's being simulated currently in AI is not the physical process that occurs in the brain, but merely some of its effects in language processing and mathematical calculations, once the corresponding syntax has been translated to a computer language. But even if we ever achieved almost perfect simulations, they would still be simulations. No one is going to fear a hurricane, an earthquake or a banana from a gorilla simulated on a computer.
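(To illustrate how little it takes once the forces and parameters are understood, here is a sketch of my own, with invented numbers, of the kind of projectile physics a game like Gorilla.BAS simulates:

import math

g = 9.81                                 # gravity, m/s^2
speed, angle = 20.0, math.radians(45)    # launch speed and angle
vx, vy = speed * math.cos(angle), speed * math.sin(angle)

x = y = 0.0
dt = 0.01                                # time step, seconds
while y >= 0.0:                          # step the banana until it lands
    x += vx * dt
    y += vy * dt
    vy -= g * dt
print(f"range: {x:.1f} m")               # close to the analytic v^2*sin(2a)/g, about 40.8 m

The point stands either way: the simulated banana is arithmetic, not a banana.)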
Steve3007 wrote: October 31st, 2024, 8:04 am The bigger question is this: Is it the case that no matter how accurately we think we're replicating the behaviour of real-world neurons, the very fact that we're doing this using software will always mean that the replication cannot be entirely accurate? If we're materialists then we must (I take it) believe that in principle a physical replica of a biological network of neurons could be manufactured. But not necessarily a software replica. And this (I think I recall Searle arguing) is because of the algorithmic nature of software - the fact that it executes a set of instructions.

This is the idea that I need to revise by re-reading Searle's book. (I read most of it at the start of the AI course and was familiar with the Chinese Room analogy previously, but need to go back to it now.)
Simulation and replication are two different things. The latter requires some actual physical activity, not just virtual.
#469413
Sculptor1 wrote: October 31st, 2024, 12:25 pm
Pattern-chaser wrote: October 31st, 2024, 8:14 am
Sculptor1 wrote: October 30th, 2024, 1:22 pm After some time you realise that AI is a sophisticated encyclopaedia, rather than an intellect.
Yes, I think of them as super-Google, but that amounts to the same thing. And yet, as others have commented, that's just today. There is a lot of work going on, and we don't know what the future holds. Maybe it even holds intelligent AI? We'll have to wait and see.


But I wonder if we're implementing AI too widely already? Many major websites now use AI as their 'help' function. No matter how much you want to, you can't get past it, to reach a human. There *are* no humans involved any more. So if you have a significant help request, you will receive no help, and there is no way to get any. This is a trivial complaint, not world-breaking at all. But it *is* a bit annoying... 😐
I do not hold with any "what about the future" arguments.
If they were to be believed, we would have had hotels on the Moon and Mars by now, just based on the 1970s space programme.
That's like predicting that newborn Johnny will be a doctor when he grows up, but deciding that it's impossible because he is already three years old and has still shown no signs of medical competence.
#469416
Steve3007 wrote: October 31st, 2024, 8:29 am Yes, of course it takes all kinds of human activity to maintain the hardware of a computer network on which the software runs. But my point in that passage was that if an AI distributed across the internet were possible, then it could harm human interests simply because, as I said, the cure might be as bad as the problem. As I said, the world's economy is now so dependent on this technology for such things as the logistics of food distribution (and almost everything else) that we couldn't just "pull the plug". Saying that hypothetical AI cannot harm human interests without the participation of humans is a bit like saying a cancer or a virus can't harm your body without your participation. You're right. It can't. If you refuse to participate by "switching off" that body on which the pathogen relies for its survival, then you kill it. But that's not much consolation for you!
The objection I have to that analogy is that in the case of sickness one is mostly a passive recipient, notwithstanding that it might be a consequence of consciously practised bad habits, whereas in the hypothetical AI scenario we are talking about active participation and leadership. Not only that, but also a complex social environment of cooperation and power struggles, where personal actions imply assessments and conscious decisions.
#469418
Count Lucanor wrote: October 29th, 2024, 6:42 pm
Lagayascienza wrote: October 29th, 2024, 5:54 pm

Yes, I do endorse the computational theory of mind. And that is because I believe it has more going for it than any of the other theories.
At least we can agree on what we fundamentally disagree about. I think the case against the computational theory of mind has been made and, as far as I’m concerned, the issue is settled.
Lagayascienza wrote: October 29th, 2024, 5:54 pm I think that consciousness and mind will be explained by science as being a result of physiological states and processes.
I don’t know if it will ever be explained, but I’m sure they’ll keep trying, and that’s the only way to go.
Lagayascienza wrote: October 29th, 2024, 5:54 pm However, I do not "equate" current non-biological computers with biological computers.
Sorry if I didn’t make myself clear. I meant that you’re equating them in both being computers.
Lagayascienza wrote: October 29th, 2024, 5:54 pm As I said, the two do things differently and non-biological computers are currently much more limited and are nowhere near being able to produce consciousness and mind. However, the processes the two types of computer perform are analogous. The two do things differently but they get the job done. For example they can both perform arithmetic operations effectively but they do so differently.
As I already explained, they are not the same processes. You first understand mathematical relations and then do the operations with a learned syntax. The computer does not understand anything; it simply executes the routines according to the parameters set by the programmer, who is the one with an understanding of the mathematical syntax.
Lagayascienza wrote: October 29th, 2024, 5:54 pm If the computational theory of mind is correct and mind is a result of physiological processes and states, then I think analogous processes and states can be achieved in a non-biological substrate and computation, however it is performed, will eventually be able to produce consciousness and mind. Quantum computing will be a game changer.
If the condition is met, yes. But I don’t think it has been met.
Lagayascienza wrote: October 29th, 2024, 5:54 pm If the computational theory of mind is wrong, then consciousness and mind will remain forever mysterious. It would mean that the consciousness and mind are the result of some sort of magic that can only occur in biological substrate – analogous processes in a non-biological substrate won’t do the job. But I am a materialist. I don't believe in magic.
No, that’s a false dilemma fallacy. There are many materialists, including myself, that will not endorse the computational theory of mind and still remain loyal to the concept of brains as physical systems, without any need to resort to dualism. Searle is among those who reject the CTM with a well-argued case against it, and he certainly does not believe in magic either.
I agree with physicist David Deutsch who writes that, “The very laws of physics imply that artificial intelligence must be possible.” He explains that Artificial General Intelligence (AGI) must be possible because of the universality of computation. “If [a computer] could run for long enough ... and had an unlimited supply of memory, its repertoire would jump from the tiny class of mathematical functions [as in a calculator] to the set of all computations that can possibly be performed by any physical object [including a biological brain]. That’s universality.”

Universality entails that “everything that the laws of physics require a physical object [such as a brain] to do, can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.” (And, perhaps, providing also that it has a sensate body with which to interact with the physical environment in which it is situated.)

And as Dreyfus says, “if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". Whilst we are nowhere near to building machines of such complexity, if Deutsch, Dreyfus et al are right, which I think they are, then artificial neural networks that produce consciousness must be possible.

For a non-biological machine to produce intelligence and behaviour comparable to that seen in humans, I think it would need to be conscious. There are several theories of consciousness, but none of them are anywhere near being the final word on the matter. The case against the computational theory of mind is far from having been made and, as a materialist, I think that non-physical theories of consciousness have nothing at all going for them. They simply lack any supporting empirical evidence whatsoever and range from the incoherent to the supernatural. As a materialist, I think a physicalist, neural theory of consciousness is the most likely to be true. If it is, the question becomes “Can networks of artificial neurons produce consciousness?” As explained above, artificial neural networks of the requisite complexity must be capable of being built and of producing AGI and consciousness.

It’s hard to see how those who say that the brain is not a computer could be right. That functioning brains “compute” is beyond question. The very word “computer” was first used to refer to people whose job it was to compute. And they computed with their brains. Those who say the brain is not a computer, and that consciousness in a non-biological substrate is impossible, will never be able to say what consciousness is if it does not emerge from processes and states in brains, and nor can they say why it is impossible to produce consciousness in artificial neural networks of the requisite complexity.

Even Searle admits that mind emerges from processes in physical brains: “if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". I think that’s right. And I think progress will be made as we identify the actual relationship between the machinery in our heads and consciousness.

There are various objections to the computational theory. However, these objections can be countered. Take, for example, the so-called “Chinese Room” thought experiment, which has attained almost religious cult status among AGI “impossibilists”. One response to the “Chinese Room” has been that it is “the system” comprising the man, the room, the cards, etc., and not just the man, which would be doing the understanding - although, even if it were possible to perform the experiment today, it would take millions of years to get an answer to a single simple question.

There are other responses to Searle’s overall argument, which is really just a version of the problem of other minds applied to machines. How can we determine whether they are conscious? Since it is difficult to decide if other people are “actually” thinking (which can lead to solipsism), we should not be surprised that it is difficult to answer the same question about machines.

Searle argues that the experience of consciousness cannot be detected by examining the behavior of a machine, a human being or any other animal. However, that cannot be right because, as Dennett points out, natural selection cannot preserve a feature of an animal that has no effect on the behavior of the animal, and thus consciousness (as Searle understands it) cannot be produced by natural selection. Therefore, either natural selection did not produce consciousness, or "strong AI" is possible and consciousness can be detected in artificial neural networks by a suitably designed Turing test – that is, by observing the behaviour and by taking seriously the self-reporting of complex artificial neural networks which will, eventually, be built.

In light of my belief in materialism, and in light of what I have said above (and at the risk of being accused of posing a false dilemma) I am bound to say that, at present, I must accept either that consciousness is a result of computation, or that it is the result of something “spooky”. I don’t believe the latter.

Any plausible account of consciousness will be a materialist, scientific account which shows consciousness to be a result of physiological states and processes. If materialism is true, then how else could consciousness be explained except by physiological processes and states? Since I believe consciousness cannot be otherwise explained, I also believe these physical processes and states must eventually be capable of being reproduced in a non-biological substrate.
#469422
Count Lucanor wrote:I hope other avenues are explored as alternatives to the current AI hype. The problem, however, is that this is not merely a technical, scientific endeavor; it is more about companies trying to win the race to develop the new ubiquitous technology and monopolize a market (a new Google, Apple, Microsoft, etc.), or worse, to make big profits along the way even if they never achieve it. I'm sure Musk is not a fool, even though many of his announcements sound as if they come from a delusional mind, but every time he makes one, the AI fan base reacts and the stock value increases. That's basically what all the tech lords are doing now.
OK, well, as I've said, my view at the moment is that it must be possible in principle to manufacture an object with genuine intelligence/sentience/consciousness/whatever, but in the specific case of that object being a computer, with those properties embodied by the software running on it, I'm still not decided. I still need to think more about what you, Searle and others have said.
I remember some papers from Searle online, I think they were from some lectures, which summarize well his stance on this issue, you might want to check on that besides the books.
OK, I'll look. I think the book I mentioned earlier is taken from some lectures he gave. I've started re-reading it. Possibly the most important line in the book (Chapter 2, page 39) is this, the second of three premises that he proposes:

"Syntax is not sufficient for semantics".

That's the one I think I have to consider most deeply in relation to the way artificial neural networks are designed. His argument, it seems, hangs on it.

Many things, if not almost anything, can potentially be simulated with computer software: from the laws of motion that had to be simulated in the old Gorilla.BAS game, to the simulation of hurricanes and other complex systems. But of course, that's because we understand to a great extent the forces and parameters involved, unlike mental processes, of which we know very little.
Well, I'd say it depends on the level of understanding you're referring to. (Sorry if this part is a bit long but bear with me):

Yes, we can simulate complex physical systems using our knowledge of the laws of physics which we've formulated to describe them. I've done it myself with several kinds of systems. My dissertation project for this AI Masters thing that I've just finished involved using numerical solutions of the Navier-Stokes equations (physics equations describing the behaviour of fluids) to simulate fluid flow around various types of obstacles (with complex patterns of turbulence, vortices and so on emerging from the simulation) and then training an artificial neural network (ANN) to be able to predict that fluid flow without needing to use the equations. Basically, investigating whether one of the many uses of ANNs could be to speed up the simulation of fluid systems for applications like climate modelling. (Nothing to do with intelligence in that particular case - just pattern recognition.) If you're interested, I used a particular architecture of ANN called a "U-Net" which was originally developed for finding features in medical images. e.g. spotting evidence of breast cancer in mammograms.
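If you're curious what that architecture looks like, here's a toy sketch of the U-Net idea in Keras (not my actual dissertation code; the shapes and layer sizes are invented): convolutions downsample the field, it's upsampled again, and a skip connection concatenates the fine detail back in.

from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(64, 64, 1))                   # a 64x64 single-channel field
c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
p1 = layers.MaxPooling2D()(c1)                         # 64x64 -> 32x32
c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
u1 = layers.UpSampling2D()(c2)                         # 32x32 -> 64x64
m1 = layers.Concatenate()([u1, c1])                    # the skip connection
out = layers.Conv2D(1, 3, padding="same")(m1)          # predicted field
model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")

A real U-Net just repeats that down/up pattern several times, with a skip connection at each level.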

So, yes, we understand quite well the basic physics of (for example) fluid flow. But we don't necessarily understand the complex behaviour that emerges when we apply that understanding (in the form of equations predicting the velocity and pressure of small elements of fluid) en masse to very large numbers of fluid elements over many, many time steps. In physics, macroscopic behaviours sometimes seem to take on a life of their own, with whole new phenomena emerging which don't meaningfully exist in the microscopic forces that add up to create that macroscopic world; phenomena that are only meaningful as statistical properties of large systems. A classic example is the concept of the time-directionality of physical processes emerging in the laws of thermodynamics, which themselves are macroscopic, statistical laws derived from underlying laws describing the ways that countless molecules bounce off each other - underlying laws which don't have that time-directionality.

In the same way, some people propose that intelligence/sentience/etc is a phenomenon that emerges when you wire billions of neurons together in complex ways. It would be meaningless to say that an individual neuron has sentience, analogously to the way that it's meaningless to say an individual molecule has thermodynamic temperature and pressure.

So what's my point in saying all this in reply to that passage from you?

Well, my point is that it might (just might) be possible in principle to understand the laws of physics and chemistry relevant to the workings of neurons well enough to simulate them in software, so that, when they are connected together in numbers comparable to biological brains, something emerges which we might call a mental process - something about which we still know very little. You don't need to understand how those mental processes - those complex interactions of billions of neurons - work in order to create simulations like that. Just as you don't need to understand the complexities of turbulent flow in order to create a model in which loads of little elements of fluid exert pressure on their immediate neighbours.
The computational theory was at some point a reasonably good shot, but experience has shown that it isn't anymore. What's being simulated currently in AI is not the physical process that occurs in the brain, but merely some of its effects in language processing and mathematical calculations, once the corresponding syntax has been translated to a computer language.
I disagree with this bit. As I said before, the computer simulations of neurons are extremely simplistic and don't capture anything close to all of the physical properties of neurons, but they are simulations of interconnected neurons, even if simplistic. Just as a computational solution of the Navier-Stokes equations is a simulation of physical fluid flow. All simulations, by their nature, are incomplete. All models in physics, by their nature, are incomplete. But they can be made to get arbitrarily close to completeness.

Perhaps because ChatGPT has become so well known, I sometimes get the impression that people think it's a computer program explicitly designed for parsing and generating language, with instructions in it telling it what to do with various different words. It's not. It's an application of a type of artificial neural network.
But even if we ever achieved almost perfect simulations, they would still be simulations. No one is going to fear a hurricane, an earthquake or a banana from a gorilla simulated on a computer.
That's true, and it's a point made by Searle in that book. If computer simulations never had any effect on the "real world" then there would be nothing to fear or gain from them. We wouldn't even know that they existed. But, of course, they can be made to affect that real world.

I've been going on for a long time here so I'll pause and address your further points later.
#469427
Continued from where I left off, answers to Count Lucanor:
Count Lucanor wrote:Simulation and replication are two different things. The latter requires some actual physical activity, not just virtual.
Well, all simulations involve at least some physical activity or else we wouldn't even know they were happening. But there's no reason why they couldn't involve more.
Count Lucanor wrote:
Steve3007 wrote:Yes, of course it takes all kinds of human activity to maintain the hardware of a computer network on which the software runs. But my point in that passage was that if an AI distributed across the internet were possible, then it could harm human interests simply because, as I said, the cure might be as bad as the problem. As I said, the world's economy is now so dependent on this technology for such things as the logistics of food distribution (and almost everything else) that we couldn't just "pull the plug". Saying that hypothetical AI cannot harm human interests without the participation of humans is a bit like saying a cancer or a virus can't harm your body without your participation. You're right. It can't. If you refuse to participate by "switching off" that body on which the pathogen relies for its survival, then you kill it. But that's not much consolation for you!
The objection I have to that analogy is that in the case of sickness one is mostly a passive recipient, notwithstanding that it might be a consequence of consciously practised bad habits, whereas in the hypothetical AI scenario we are talking about active participation and leadership. Not only that, but also a complex social environment of cooperation and power struggles, where personal actions imply assessments and conscious decisions.
I'm not sure I understand your point there. My point was about the hypothetical situation of an AI distributed across ("infecting") the internet, such that parts of it could exist in any computer hardware connected to that network. When you said that hypothetical AI cannot harm human interests without the participation of humans my reply was that even though this is true it doesn't help us for the reasons I gave.