#470915
Count Lucanor


Obviously brains interact with the rest of the body, which in turn interacts with the environment in unimaginably complex ways. We could say that is how consciousness works, you need the entire biological body and everything it interacts with to create a conscious entity. It gets you nowhere nearer understanding why or how though. So some reductionism is a sensible way forward, until you hopefully arrive at the necessary and sufficient conditions for consciousness.
Some reductionism, maybe; any reductionism, not really.


And on what basis, what criteria, would you choose what reduction could and could not capture the necessary and sufficient conditions for consciousness?

A systemic, holistic approach is always necessary to understand complex systems, especially if you are trying to replicate those systems. The brain is part of the nervous system, so you cannot dismiss the rest of the nervous system to tackle the problem of consciousness. You cannot reduce consciousness to the work of neurons when there are other things going on, and when there are neurons working in processes not related to consciousness.

The thing is, even if we understood every aspect of how a whole physical human body works and how it physically interacts with the environment, Physicalism (currently at least) still wouldn't be able to explain conscious experience. Phenomenal experience wouldn't even be part of that physicalist description. (see Levine's Explanatory Gap).

So we have no idea what the necessary and sufficient conditions for conscious experience are, or if rocks and toasters and coral have it. Or how far we can reduce the components or interactions in order to retain the necessary and sufficient conditions for conscious experience.

Gertie wrote: ↑December 15th, 2024, 10:28 am The focus is on neurons because of the correlation between specific brain states and specific experiential states. Probably the biggest clue we've discovered in trying to understand how consciousness manifests.
But neurons are not only in the brain, they are all over the nervous system. That includes the parasympathetic nervous system, which controls unconscious processes. Experiential states are body states.
"Experiential states are body states" is a monist Physicalist theory of mind, not established fact. It could be right. But it still doesn't tell us whether or not other substrates are capable of conscious experience.

What we do know via human reports is that specific neural interactions in humans manifest specific experiential states. Hence neural correlation is probably our biggest clue in narrowing down the nec and suff conditions.
Gertie wrote: ↑December 15th, 2024, 10:28 am
It's the fact that we don't know the necessary and sufficient conditions for consciousness which results in us not knowing if AI can be conscious. And we have to rely on similarities to humans to guess whether a system can be conscious. (Eg chimps probably are, but it's less likely coral is).
It seems obvious that if we don’t know how consciousness is naturally produced, we can’t know how to produce it artificially.
Right.
It is often claimed that neural networks are the sufficient and necessary condition, the key model of how the brain works. But if we can find a neural network in corals, and yet, comparing them to ourselves and making our best guess, we conclude that corals are less likely to be conscious, then it follows that neural networks are likely not sufficient to produce consciousness. Generative AI and LLMs are entirely based on simulated neural networks, so we can also be pretty confident that AI lacks the sufficient conditions to be conscious, no matter how sophisticated the simulated neural network.
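(To make concrete what "simulated neural network" means here: at bottom it is just arithmetic on arrays of numbers. A minimal sketch in Python follows; the layer sizes and random weights are invented for illustration, and real LLMs differ enormously in scale and architecture.)

    import numpy as np

    rng = np.random.default_rng(0)

    # One hidden layer: 4 inputs -> 8 hidden units -> 2 outputs.
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

    def forward(x):
        # Each simulated "neuron" only multiplies, adds, and thresholds.
        h = np.maximum(0.0, W1 @ x + b1)  # ReLU activation
        return W2 @ h + b2                # output scores

    print(forward(np.array([1.0, 0.5, -0.3, 2.0])))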
A degree of sophistication and complexity might also be necessary. Or coral might be conscious. We assume it's not only because of its physical and behavioural dissimilarity to humans, who we believe are conscious. We believe humans are conscious because we are ourselves, and other humans are much like us physically and behaviourally, and tell us they are too. That's how ignorant we are. That's as far as Physicalism has gotten us, perhaps as far as it can get us (see Chalmers' Hard Problem).

That's why scientists latch on to clues like neural correlation, and the incredibly complex nature of human brains.

Gertie wrote: ↑ December 15th, 2024, 10:28 am It might be that any cell equipped in other ways than axons and dendrites could do what neurons functionally do (exchanging neurotransmitters) when interacting in …
… to manifest conscious experience. It might even be that neurotransmitters aren't necessary, any matter would work, because it's the nature of the interactions themselves which are necessary and sufficient.
You appear to be pointing to neural connections, or any sort of network doing the function of a neural network, but I have already addressed the issue of neural networks. They are not sufficient to produce consciousness, although most likely necessary. It’s simply not true that the nature of the interactions between neurons is what gives the necessary and sufficient conditions for consciousness.

see above.
Gertie wrote: ↑December 15th, 2024, 10:28 am The complexity of the brain hints that complex interactions are relevant, maybe even enough. Whereas neural correlation hints that there's something special about neurons which is necessary, in certain configurations.
The brain of the fruit fly has just been mapped. 140,000 neurons and 50 million connections. I don’t know if that’s complex enough, but the question is: is the fruit fly conscious? Because if not, there’s more to consciousness than neurons. Why then insist on connectionism to explain consciousness? The answer is: because you can then equate mind to computers and keep nurturing the paradigm of the computational theory of mind.
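(For scale, taking those quoted figures at face value, a back-of-the-envelope check:)

    # Quoted connectome figures for the fruit fly brain:
    neurons = 140_000
    connections = 50_000_000
    print(connections / neurons)  # roughly 357 connections per neuron, on average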

(I told you already that I don't find the computing analogy helpful, and why. Per Physicalism, 'computing information' can only be an abstract metaphor. Unless we live in a universe where 'information' is an existing 'thing in itself', which is not Physicalism as it's generally understood.)

Neural networks comprise matter in motion. We assume that a dead brain isn't conscious, so the nature of the configurations seems to be necessary too. If it's the case that coral and fruit flies aren't conscious, then it's likely that more complexity is required, which might include many of the physical features much more complex humans have as they integrate so many more complex neural subsystems.

Maybe as more complex and mobile species evolved, integrating neural subsystems and prioritising which neural interactions are required from moment to moment might hit some level of intensity/complexity which is the fuel for consciousness.

These things are worth considering and pursuing, as hypotheses.

Whereas simply saying it takes a whole human interacting with her environment to manifest conscious experience is both unhelpful and unlikely imo.

So how do you justify such a 'holistic' position in the face of our ignorance? How do you know AI can't be conscious?
Gertie wrote: ↑December 15th, 2024, 10:28 am Nobody knows. But a way to narrow down the necessary and sufficient conditions for consciousness is to build a machine using a different substrate to mimic the complex configurations of human brains. Try that similarity out. Then try to come up with a way to test if it has the necessary and sufficient conditions (noting computers can already pass the Turing Test).
This isn't a wacky approach.
It would be wacky indeed trying to build a machine to replicate something when you don't understand how it works in the first place.
Really? I could build a mouse-trap without understanding its physics.
But that has not been the approach anyway. Ever since Turing, the biophysics of consciousness has been treated as irrelevant; what matters is whether you can produce something that resembles the observable behavior of conscious (or intelligent) beings. The best shot was any algorithmic process implemented through a machine (from analog to digital). If you pass that (the Turing test), then you are conscious (or intelligent). This theoretical framework, however, has been shown to be flawed. No GenAI or LLM has one bit of consciousness. The hope that so-called "scaling laws" would demonstrate that consciousness would emerge from bigger computational power is starting to fade away. GenAI and LLMs have been hitting "the wall", as a few skeptics predicted a couple of years ago. Computation is simply not the path to consciousness.
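(To make the behavioural framing concrete, a toy sketch of the imitation-game setup; the canned responses are invented placeholders. The point is only that the judge sees text, never the mechanism behind it.)

    def human_respond(prompt: str) -> str:
        return "I felt nervous before my interview today."

    def machine_respond(prompt: str) -> str:
        return "I felt nervous before my interview today."

    def judge(reply_a: str, reply_b: str) -> str:
        # The judge compares observable behaviour (text) only.
        return "cannot tell" if reply_a == reply_b else "can tell"

    print(judge(human_respond("How was your day?"),
                machine_respond("How was your day?")))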
Maybe. You assert something you can't know. In the face of our ignorance the most reasonable approach is to mimic what we practically can, and see what happens.
#470917
Lagayascienza wrote: December 18th, 2024, 1:55 am Count Lucanor, neither of us is a neuroscientist or computer scientist. We seem to have some differences of opinion. But mostly we are talking past each other.

There are a few things that need to be said about your most recent post. Firstly, nowhere did I “insist” that those articles in the journal, Nature, were the current state of play. I said that they “discuss” the current state of play. And they do so quite well, IMO.
What I meant by “insistence” is that the post was the latest in a run of posts assuming that we’ll find, in the latest talk of neuroscientists, the definitive insight needed to settle, or at least guide, the discussions we are having on the issues of intelligence and AI. The current state of play is actually full of disputes between different approaches, theoretical frameworks, etc., plus the cultural narratives that influence technological practices and the political and economic interests. Sci-fi literature and movies, and the narratives and promises of the tech lords, all come into play, too. The use of certain language, for example, is not neutral or purely technical, so it is my view that if you’re going to make an assessment of the current state of technology, you have to separate reality from discourse, given that empirical research on intelligence and consciousness is at a stalemate, leaving a lot of room for speculation and unfalsifiable theories.

Of course one will have to get acquainted with new studies and theories, but also keep an eye on how the subject is handled by journalists and the media in general. That’s where the “hype” is. I mentioned previously in this thread the recent book “AI Snake Oil”. If you want to save time, you can go directly to Adam Conover’s Factually YouTube podcast, where he interviews Narayanan and Kapoor. Once there, you might want to check Sabine Hossenfelder’s views on AI, too. And you can look into Gary Marcus’s Substack. This other part of the discussion is doing well, too.
Lagayascienza wrote: December 18th, 2024, 1:55 am And neither did I say that artificial neural networks are “modelled like” natural nervous systems and nowhere in those articles was such a thing said.
I said in a previous post, commenting on an article referenced by Pattern Chaser, that it has become a consistent assertion in all the literature about neural networks, presumed to be true, yet it is false. It was not directed at you, and I’m not now saying that you’re saying it, but since you’re constantly mentioning neural networks, you should not be missing it. BTW, the article you referenced does make that assertion (8th paragraph in the article).
Lagayascienza wrote: December 18th, 2024, 1:55 am Secondly, I am well aware of the meaning and history of the term “compute”. You have a problem with the term but no neuroscientist I’ve read has a problem with the term compute in relation to what goes on in organic neural networks.
I will have to dispute that. As I said previously, there’s a double metaphor permeating this subject. First, the field of computers adopted the language of mental operations and human behavior, so we talk about memory, information processing, computation, etc., and then that talk was fed back, filtered through the conception of computing machines, into the field of neuroscience.
Lagayascienza wrote: December 18th, 2024, 1:55 am Furthermore, computation is not merely about arithmetic operations. When you aim your tennis racket at a fast-moving tennis ball your brain is performing many computations that enable you to hit that ball.
No, it’s not. The term computation here is simply extrapolated from its use in machines, but as I explained, machines perform automated tasks based on instructions fed by humans. Humans figured out ways to solve problems with a tool called mathematics, which involves the manipulation of symbols within a formal language or syntax. Then they coded these operations as signals put into the mechanisms of analog computers and, later, digital ones. But there’s nothing suggesting that brains are doing the same, nor that machines are actually performing any type of mental operation.
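(A sketch of computation in this sense: a square root by Newton's method, a fixed, human-written recipe of symbol manipulations that a machine can execute with nothing resembling understanding. The routine is standard; it assumes n > 0.)

    def sqrt_newton(n: float, iterations: int = 20) -> float:
        # A fixed recipe: repeat the same arithmetic step on stored numbers.
        guess = 1.0
        for _ in range(iterations):
            guess = 0.5 * (guess + n / guess)
        return guess

    print(sqrt_newton(2.0))  # ~1.41421356, with no 'idea' of square roots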
Lagayascienza wrote: December 18th, 2024, 1:55 am Organic neural networks indisputably compute. Artificial neural networks also compute.
Quite disputable, actually.
Lagayascienza wrote: December 18th, 2024, 1:55 am In relation to language and arithmetic operations, organic and artificial neural networks do things somewhat differently but get the same or similar results. You keep tripping-up over the word “compute”. I think it is a red herring.
You insist on comparing virtual simulations to real nervous systems, eliminating important differences, and calling them both neural networks, which makes them look closer to each other than they really are. Getting similar results just shows how humans have built these systems to perform operations as the coded instructions required.
Lagayascienza wrote: December 18th, 2024, 1:55 am In the discussion of artificial intelligence you often point to the “problem of consciousness”. However Consciousness and intelligence are two different phenomena which may, however, be linked, especially in animals with a neocortex. I have repeatedly said that current LLMs are not conscious or even close to it.
I’m forced to use both concepts, given that NO ONE has yet been able to define what it is that they’re looking for when dealing with AI technology, and the terms are often used interchangeably. Not my fault. It seems obvious, though, that consciousness (understood as a qualitative dimension of a living agent’s experience) is a prerequisite for intelligence (understood as the ability to self-reflect and act on your living experience), so lack of consciousness in machines implies lack of intelligence.
Lagayascienza wrote: December 18th, 2024, 1:55 am Fourthly, those very recent articles in the respected journal, Nature, were not “hyped”. Nature doesn't do hype. Did you even read the articles? The message in the articles was that current AIs are not conscious and do not exhibit human-level intelligence. They also discuss how far away current AIs might still be from that. And that is all. There was no hype.
Yes, I read one of them. No new insights. Again, I was not saying the hype is in those articles. The hype is in the tech subculture that manipulates the subject, so it is part of the discussion that cannot be missed. The problem with those articles is the same one I see in this forum: taking the stance that “we are still far away, but on the right path”.
Favorite Philosopher: Umberto Eco Location: Panama
#470922
Count Lucanor, I got the book, AI Snake Oil. It doesn't say anything I don't already agree with. It does not deal in any depth with definitions of intelligence or with consciousness. It tells us what the various current AIs can and can't do, is, rightly, dismissive of the sales hype around what they are capable of, and warns of the dangers of misuse.

It is still not clear to me what your own position is in relation to intelligence and consciousness. I'm fine with the usual definitions of intelligence, such as the one at Wiki. We know intelligence when we see it. It is what we are. However, consciousness is a much more difficult issue that is not going to be resolved here. In fact, if it ever is resolved, I believe it will be through science and not philosophy. Science is informed by philosophy, but philosophy only theorizes and often gets bogged down in conceptual analyses and hair-splitting. Philosophy cannot do the practical scientific research that will be required to demystify consciousness. It is going to take hands-on neuroscience and computer science for artificial general intelligence to ever become a reality. If it ever does. I think it could become a reality if we can work out what is going on in brains like ours in more detail. Once we understand it, we can build it.

The bottom line for me is that current AIs do not have human-level intelligence. And they are nowhere near being conscious. Maybe human-level intelligence requires consciousness. I think an AGI would need to be conscious to do the full range of things that a human can do. It would also need to be housed in an artificial body with a sensory array and neural network of a similar complexity to our own. But an intelligence housed in a computer could do some of the mental stuff we can do. In fact, some AIs already can do some of what we can do. I think the concept of “computation” describes what goes on in both natural and artificial neural networks. Computation is just performed differently in the two substrates.

Anyway, I’d be interested to hear your definition of intelligence, and also what you think consciousness is. If you could provide these, we could then discuss why you think intelligence and even consciousness cannot be produced in an artificial substrate. If that is what you do, in fact, think.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#470945
Pattern-chaser wrote: December 17th, 2024, 9:33 am
Empiricist-Bruno wrote: December 15th, 2024, 5:01 pm Now, your question here screams "I don't understand that", so I'm going to go a little deeper into that concept, because it's becoming clear through your question that your grasp of paradoxes isn't there.
That's all very well, but this is a topic about AI, not paradoxes. 👍
Ok, and what if AI happened to be a paradox?

In the opening post, there was no mention that the view of AI as a paradox needed to be dismissed. No one narrowed down the topic to exclude this viewpoint. The person who created this thread seemed to be genuinely interested in any and all possibilities.

Anyway, narrowing down the topic this way would have meant excluding the truth from this thread, and since you are concerned with the truth, you wouldn't be happy about that.

So, why are you saying this?
Favorite Philosopher: Berkeley Location: Toronto
#470947
Lagayascienza

It feels like you have said that before and I have replied to it, and then you came back and I replied again, and we are repeating the cycle. Most of my specific points and counterarguments are yet to be addressed, though. I don’t think we can make much progress that way.

I do believe that biological science owns the burden of figuring out how consciousness is produced. That will also open the door to figuring out how intelligence works. I don’t think philosophy can be excluded from that endeavor.
In my conception of scientific knowledge, it involves having some epistemological and ontological commitments that guide the practice. We have to deal with theoretical frameworks, too, so one needs to have philosophical skills. Anyway, there’s no guarantee that we will figure out consciousness, it is possible that we hit a wall, but no one can say for sure.

I don’t believe that computer science, that is, the science of how we can make computers work, can have any significant say on these matters. It’s a completely different domain. Surely, some hypothesized once that machines running algorithmic processes could be the equivalent of conscious processes in biological machines (just as once people believed that living beings were some sort of clock mechanism), but there’s not one bit of evidence to support that idea, in fact, evidence points to the opposite being true. It’s where the demystification of consciousness should start.

I’m sure that computer science and robotics will have a say on how to produce technology that can perform tasks that only conscious living beings can perform now, but that’s no different from any technology that has been produced for as long as there has been civilization. We don’t pretend the watermill is smart. Computerized simulations will also be a helpful tool to model physical processes in the biological sciences, in the way that models have always been a simplification of a part of reality, to focus on key relations and gain insights. Models are by definition limited, though, and we are not capable of modeling everything in the universe. Models must be fed with the parameters predetermined by researchers, so there must be theoretical approaches and trial and error; you cannot expect them to reproduce reality exactly and let them run for real things to emerge magically. They are conceptual tools. As things get more complex and depend on emergent properties, models are less useful, so you have to go back and enlarge the models. How much do you have to enlarge the models to begin to understand consciousness? My guess is that you will need to simulate the whole biological organism. And even after gaining that knowledge, it doesn’t mean we would be able to reproduce it. It feels tempting to say: just give it time, but we don’t know: life and history are contingent. Theoretically, it is possible, because it’s a physical world and we humans deal with it; it’s within our reach. But practically, technically, it remains a big question. We don’t have infinite capabilities.

As for what I think intelligence is, first let me remind you that for the purpose of this discussion, I’ve been willing to work with any preliminary definition, so that we could move forward from there. Intelligence is not a technical, universal, well-defined term. Nor is consciousness. So we could work with an operational, preliminary definition to tackle the problems presented in this forum. Unfortunately, this has remained elusive. My own definition would take into account the enabling conditions and the specific attributes of something that we observe in nature and call “intelligence”, based on a hierarchy of organic states. I will have to elaborate on this in a separate post.
Favorite Philosopher: Umberto Eco Location: Panama
#470950
Count Lucanor wrote:It feels like you have said that before and I have replied to it, and then you came back and I replied again, and we are repeating the cycle. Most of my specific points and counterarguments are yet to be addressed, though. I don’t think we can make much progress that way.
Count Lucanor, Thanks for your reply and apologies if I have missed, or have not been able to fathom, some of the responses you may have given to questions I have previously posed.

Count Lucanor wrote:I do believe that biological science owns the burden of figuring out how consciousness is produced. That will also open the door to figuring out how intelligence works. I don’t think philosophy can be excluded from that endeavor.
Agreed. I think science might be a specialized form of philosophy with particular metaphysical commitments. In only a few centuries it has made progress in a way that philosophy without science could only ever dream of over previous millennia.
Count Lucanor wrote:In my conception of scientific knowledge, it involves having some epistemological and ontological commitments that guide the practice. We have to deal with theoretical frameworks, too, so one needs to have philosophical skills. Anyway, there’s no guarantee that we will figure out consciousness, it is possible that we hit a wall, but no one can say for sure.
Yes. Pretty much all scientists will have ontological and epistemological commitments. It is hard to imagine doing science (and much else) without them.

Count Lucanor wrote:I don’t believe that computer science, that is, the science of how we can make computers work, can have any significant say on these matters. It’s a completely different domain.
I think that there must be some overlap between neuroscience and computer science when we are talking about building intelligent and conscious artificial neural networks. And I think that there is much to be said for reverse-engineering. Reaching a scientific understanding of the principles of bird flight, and copying the curved upper surface of birds’ wings which creates a pressure differential, enabled us to build airplanes that outstrip the birds. This would probably have taken much longer had we not had the bird model. For intelligence and consciousness we have the neurological model, which is still, arguably, in its early days.
Count Lucanor wrote:Surely, some hypothesized once that machines running algorithmic processes could be the equivalent of conscious processes in biological machines (just as once people believed that living beings were some sort of clock mechanism), but there’s not one bit of evidence to support that idea, in fact, evidence points to the opposite being true.
I think that consciousness and intelligence are separate but probably linked issues. But first, let me say that there is much about living organisms that is, indeed, mechanical. The arm, for example, is a type of lever and pulley arrangement; the details of the circulatory system are governed to some degree by fluid mechanics, etc. Of course, we are not going to build artificial consciousness out of fluid-driven levers and pulleys. The difficulty is in the detail. What happens in natural neural networks does so at the microscopic level, and so the details are difficult to study. But not impossible.
Count Lucanor wrote:It’s where the demystification of consciousness should start.
I'm not sure what you mean here. Start where? Do you mean that we must start with the idea that intelligence and consciousness depend on the laws of physics and on electro-chemical principles? If so, then I agree. I can see no other place to start. To build artificial consciousness we will need a better understanding of the physical structure and the details of the electro-chemical processes that occur in natural neural networks. And, if human level intelligence depends on consciousness, then we will need to produce consciousness in artificial neural networks if we want them to have human-level intelligence. But only if we’re after human-level intelligence. I can imagine less intelligent AIs that would not need such a high level of consciousness.

Count Lucanor wrote:I’m sure that computer science and robotics will have a say on how to produce technology that can perform tasks that only conscious living beings can perform now, but that’s no different from any technology that has been produced for as long as there has been civilization. We don’t pretend the watermill is smart. Computerized simulations will also be a helpful tool to model physical processes in the biological sciences, in the way that models have always been a simplification of a part of reality, to focus on key relations and gain insights. Models are by definition limited, though, and we are not capable of modeling everything in the universe.
That is true. But we don’t need to model everything perfectly. Just well enough so that we can build something that does the job we want it to do. And sometimes we find ways of doing things that are better than the solutions evolution came up with. That is how we became able to fly higher and faster than the birds.
Count Lucanor wrote:Models must be fed with the parameters predetermined by researchers, so there must be theoretical approaches and trial and error; you cannot expect them to reproduce reality exactly and let them run for real things to emerge magically. They are conceptual tools. As things get more complex and depend on emergent properties, models are less useful, so you have to go back and enlarge the models. How much do you have to enlarge the models to begin to understand consciousness?
With respect to consciousness, quite a lot, I expect.
Count Lucanor wrote:My guess is that you will need to simulate the whole biological organism.
Only if we want to build something that can do all the things that we can do. I think we will need to be less ambitious to begin with.
Count Lucanor wrote:And even after gaining that knowledge, it doesn’t mean we would be able to reproduce it.
True. Technology often has to catch up in order to realize a model, and we often have to find different ways of doing things to get the result we are after. And a lot of natural stuff can be ditched. There was no point putting feathers on airplane wings, for example, and if we were developing the male human urinary tract from scratch we wouldn't route the urethra through the prostate. Similarly, I expect there will be things about natural neural networks that won't need to be reproduced exactly, and we may find ways to improve on nature.
Count Lucanor wrote:It feels tempting to say: just give it time, but we don’t know: life and history are contingent. Theoretically, it is possible, because it’s a physical world and we humans deal with it; it’s within our reach. But practically, technically, it remains a big question. We don’t have infinite capabilities.
Yes, technology would have to catch up with the model. But the reach of our capacities has been extended over history, and enormously so since the advent of modern science. Scientific progress is open-ended and limited only by what we are capable of understanding. We cannot build what we do not understand. There may be a limit to human understanding. I don't know. But I don’t think that limit will be met in creating artificial neural networks that are intelligent and have some level of consciousness.

Count Lucanor wrote:As for what I think intelligence is, first let me remind you that for the purpose of this discussion, I’ve been willing to work with any preliminary definition, so that we could move forward from there. Intelligence is not a technical, universal, well-defined term. Nor is consciousness. So we could work with an operational, preliminary definition to tackle the problems presented in this forum. Unfortunately, this has remained elusive. My own definition would take into account the enabling conditions and the specific attributes of something that we observe in nature and call “intelligence”, based on a hierarchy of organic states. I will have to elaborate on this in a separate post.
Ok. In that case, I propose the following two operational definitions. Feel free to extend or constrain them.

INTELLIGENCE: the capacity for abstraction, logic, understanding, learning, reasoning, planning, creativity, critical thinking, and problem-solving. And the ability to perceive or infer information; and to retain it as knowledge to be applied flexibly to adaptive behaviours within an environment or context.

CONSCIOUSNESS: self-awareness; an awareness of internal and external existence. The ability to think and imagine the counterfactual. (I’m sure this could be extended.)

Looking first at intelligence with the above definition of intelligence in mind, I have two questions.

1.) Can we say that any current so-called AIs can do any or all of the things mentioned in our definition of intelligence?
And,

2.) Must an AI be able to do all of those things to be considered intelligent?


WRT the first question, I think it is clear that current AIs can do some of the things mentioned in our definition of intelligence. They can do logical operations. They can "understand", in some sense, what a question is and what the question is asking, and provide answers. They can learn. They can solve mathematical problems. Some of them can perceive information (facial recognition, or the data in a data-set, for example) and they can retain it as knowledge. I'm not sure any AI does "critical thinking" as we understand the term, and I know that they are not yet very flexible.

(I’m writing this quickly off the top of my head and so I’ll enlarge on what I’ve said in the above paragraph once I go back to my sources on artificial intelligence that speak to the things that AIs can currently do. In any case, I think it is clear that, with respect to the first question, AIs can do some of the things mentioned in the proposed definition of intelligence.)

WRT the second question, I am unsure. When it comes to the ability to infer information and to apply knowledge flexibly in new contexts or environments, maybe some level of consciousness will be required. I think human-level consciousness would certainly be required for an artificial neural network to do everything that humans can do, and it would also require some form of embodiment and some sort of sensory array.

Of course, much more could be said but I'll leave the issue of intelligence there for today. As for consciousness, I’m much less sure about the proposed definition, although I think it captures some of what consciousness entails. I hope you can help enlarge or refine the definition.

In any case, I think that to achieve conscious awareness, an artificial neural network would certainly need to be able to produce models of itself and the external world it inhabits, and to imagine changes to those models. The models might be built with “reference frames” (as per Hawkins, for example).
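(A very loose sketch of that idea: a state that can be copied and counterfactually varied. The structure and field names are invented for the example and bear no relation to Hawkins’ actual reference-frame machinery.)

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class WorldModel:
        agent_position: tuple
        door_open: bool

    def imagine(model: WorldModel, **hypothetical) -> WorldModel:
        # Return a counterfactual copy; the "real" model is untouched.
        return replace(model, **hypothetical)

    now = WorldModel(agent_position=(0, 0), door_open=False)
    what_if = imagine(now, door_open=True)  # a crude counterfactual
    print(now)
    print(what_if)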

On a general theoretical level, I think it must be possible in theory to build artificial consciousness because our physical brains can produce it in a natural physical substrate. As I’ve said many times, to build consciousness in an artificial substrate we are first going to need a much better model of the way natural neural networks do what they do. However, I doubt that we would need to reproduce natural neurons and human brains and peripheral nervous systems in every minute detail. Artificial consciousness will probably be housed in a different neural architecture to ours, just as airplanes achieve flight with aeronautical architecture that is different from that of birds.

For me to be right about the above, consciousness must be produced by physical processes in natural neural networks. These are physical systems. If that is not the case, then consciousness must be produced in some other way. There are a variety of theories about this. The brain as antenna tuning into cosmic consciousness, for example. However, there is absolutely zero empirical evidence to support such ideas and there is no way to test them. I therefore discount them. The only other option is some sort of supernatural magic which I also dismiss.

Anyway, that’s all I feel able to write today. Thanks for reading. I’ll be interested to read what you think of the definitions proposed, your reaction to my thoughts about intelligence and consciousness, and your answers to the questions I posed in light of the proposed definitions of intelligence and consciousness.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#470974
Pattern-chaser wrote: December 17th, 2024, 9:33 am That's all very well, but this is a topic about AI, not paradoxes. 👍
Empiricist-Bruno wrote: December 19th, 2024, 4:55 pm Ok, and what if AI happened to be a paradox?

So, why are you saying this?
We can't discuss Everything simultaneously. Humans can't do that. So we narrow down our discussions. In this case, the OP didn't mention paradoxes just as it didn't mention "marmalade". There didn't seem to be a need to specifically exclude it, along with the gazillions of other things that are also not directly relevant to the discussion.
Favorite Philosopher: Cratylus Location: England
#470975
Gertie wrote: December 18th, 2024, 1:55 pm
And on what basis, what criteria, would you choose what reduction could and could not capture the necessary and sufficient conditions for consciousness?
I said that we may allow some reduction, but by definition no reduction can capture the intricacies of reality. The sufficient and necessary conditions of consciousness do not appear to be so simple.
Gertie wrote: December 18th, 2024, 1:55 pm The thing is, even if we understood every aspect of how a whole physical human body works and how it physically interacts with the environment, Physicalism (currently at least) still wouldn't be able to explain conscious experience. Phenomenal experience wouldn't even be part of that physicalist description. (see Levine's Explanatory Gap).
Wait! Understanding every aspect of how the body works includes being able to explain conscious experience, since we can’t get around the necessary embodiment of cognition. We don’t know how the body does it, but we know more than enough about the body doing it. Surely, it will be a materialist description, constrained to a physical domain.
Gertie wrote: December 18th, 2024, 1:55 pm So we have no idea what the necessary and sufficient conditions for conscious experience are, or if rocks and toasters and coral have it. Or how far we can reduce the components or interactions in order to retain the necessary and sufficient conditions for conscious experience.
Saying “we have no idea” goes way too far from “we know very little of what we need to know”. There are known unknowns. There’s no dispute now about rocks and toasters not having consciousness, but it’s not an intuition; it just became basic, common knowledge once systematic observations were guided by reason. There’s no dispute about consciousness being a property of living, physical bodies. Those are not part of the unknowns.
Gertie wrote: December 18th, 2024, 1:55 pm "Experiential states are body states" is a monist Physicalist theory of mind, not established fact. It could be right. But it still doesn't tell us whether or not other substrates are capable of conscious experience.
Yes, sure, it is a conception rooted in material monism. I will defend material monism, which represents an ontological commitment, with sword and shield, but it also helps having science on your side. The established facts of science can be said to be compatible only with material monism, so if experiential states were not body states, or if anything we said about consciousness was not constrained to the possibilities of physical entities, we would be stepping, necessarily, outside of science, entering the domain of the mystical.
Gertie wrote: December 18th, 2024, 1:55 pm What we do know via human reports is that specific neural interactions in humans manifest specific experiential states. Hence neural correlation is probably our biggest clue in narrowing down the nec and suff conditions.
Not really. What we do know is that experiential states are linked to several types of sensory events from various systems in the agent’s body. Of course, the nervous system participates in all of this, but dismissing everything else to focus on “neural correlations” is a way to favor connectionism and the computational theory of mind.
Gertie wrote: December 18th, 2024, 1:55 pm
A degree of sophistication and complexity might also be necessary. Or coral might be conscious. We assume it's not only because of its physical and behavioural dissimilarity to humans, who we believe are conscious. We believe humans are conscious because we are ourselves, and other humans are much like us physically and behaviourally, and tell us they are too. That's how ignorant we are. That's as far as Physicalism has gotten us, perhaps as far as it can get us (see Chalmers' Hard Problem).
I don’t see clearly what the point is. Anyway, I’m pretty sure that the idea that only humans are conscious is not widely held in scientific and philosophical circles. That means that at least a considerable number of humans find clues of consciousness through observation and experimentation, without necessarily resorting to human communication of personal experiences, even though this is also a valid source of information.
Gertie wrote: December 18th, 2024, 1:55 pm (I told you already that I don't find the computing analogy helpful, and why. Per Physicalism, 'computing information' can only be an abstract metaphor. Unless we live in a universe where 'information' is an existing 'thing in itself', which is not Physicalism as it's generally understood.)

Neural networks comprise matter in motion. We assume that a dead brain isn't conscious, so the nature of the configurations seems to be necessary too. If it's the case that coral and fruit flies aren't conscious, then it's likely that more complexity is required, which might include many of the physical features much more complex humans have as they integrate so many more complex neural subsystems.
Again, I find it questionable that all cognition is attributed to neural networks in the same sense as artificial neural networks. They are two different things, physically speaking, and that becomes evident when you ask someone to point to a biological neural network. You’ll be shown a nervous system configured as different anatomical parts and physiological functions. That should explain why they do not produce the same result as an artificial neural network. One does produce cognition, the other doesn’t.
Gertie wrote: December 18th, 2024, 1:55 pm Whereas simply saying it takes a whole human interacting with her environment to manifest conscious experience is both unhelpful and unlikely imo.
It cannot be more unhelpful than dismissing the obvious, basic fact that whole organisms are the ones that experience.
Gertie wrote: December 18th, 2024, 1:55 pm So how do you justify such a 'holistic' position in the face of our ignorance? How do you know AI can't be conscious?
Because we know how AI works. We built it. It’s hardware running software. We know that’s not how consciousness works; that’s not how living beings work.
Gertie wrote: December 18th, 2024, 1:55 pm
It would be wacky indeed trying to build a machine to replicate something when you don’t understand how it works in the first place.
Really? I could build a mouse-trap without understanding its physics.
False analogy. The mechanism of a mouse-trap is very easy to understand without getting into deep physics.
Gertie wrote: December 18th, 2024, 1:55 pm
But that has not been the approach anyway. Ever since Turing, the biophysics of consciousness has been treated as irrelevant; what matters is whether you can produce something that resembles the observable behavior of conscious (or intelligent) beings. The best shot was any algorithmic process implemented through a machine (from analog to digital). If you pass that (the Turing test), then you are conscious (or intelligent). This theoretical framework, however, has been shown to be flawed. No GenAI or LLM has one bit of consciousness. The hope that so-called "scaling laws" would demonstrate that consciousness would emerge from bigger computational power is starting to fade away. GenAI and LLMs have been hitting "the wall", as a few skeptics predicted a couple of years ago. Computation is simply not the path to consciousness.
Maybe. You assert something you can't know. In the face of our ignorance the most reasonable approach is to mimic what we practically can, and see what happens.
I don’t get what it is that I “can’t know”. It’s pretty common knowledge for anyone who has stayed at least a bit well-informed.
Favorite Philosopher: Umberto Eco Location: Panama
#470989
I think we're too far apart on our basic assumptions about consciousness to make any progress on more detailed issues here. I can't see either of us changing our minds, so I'm going to agree to differ. If there's anything you specifically want me to reply to, I'll have a go.
#471061
I’m glad we have a basic framework we can agree with, so now we can get into the nuances.
Lagayascienza wrote: December 20th, 2024, 4:33 am
Count Lucanor wrote:I don’t believe that computer science, that is, the science of how we can make computers work, can have any significant say on these matters. It’s a completely different domain.
I think that there must be some overlap between neuroscience and computer science when we are talking about building intelligent and conscious artificial neural networks.
I think you must understand biology first. Studying software and hardware has nothing to do with it. Then you will move on to replicating the biological system artificially with the appropriate technology. No one will stop the people doing simulations of neural networks, but that is not the technology that will pull it off; they are working on something else, away from the problem of consciousness. We don’t know what software algorithms and hardware could have to do with replicating consciousness; so far it seems unlikely. It’s also important to remember that virtual simulations are not real replications. Let’s say we have managed to get a perfect simulation of a hurricane in a computer; that does not mean we are actually reproducing a hurricane artificially. It’s not an artificial hurricane.
Lagayascienza wrote: December 20th, 2024, 4:33 am
And I think that there is much to be said for reverse-engineering. Reaching a scientific understanding of the principles of bird flight, and copying the curved upper surface of birds’ wings which creates a pressure differential, enabled us to build airplanes that outstrip the birds. This would probably have taken much longer had we not had the bird model. For intelligence and consciousness we have the neurological model, which is still, arguably, in its early days.
From trial and error we managed to get the principles of aerodynamics, which helped the creation of a technology that didn’t replicate the mechanics of bird flight. At the scale of birds, that would not have been so difficult, but humans were actually interested in flying themselves, so it was a different problem. In the same sense, are we trying to reproduce the mechanisms of consciousness, or are we building something else that allows us to perform tasks and obtain similar or better results without actually bothering with what consciousness is, just as we stopped bothering about bird flight in order to fly ourselves in our own way? The calculator doing the square root of a number is certainly not replicating the process of a biological mind, so you don’t need a neurological model of a biological brain to make that technology. Unless you were trying to replicate consciousness, such a model would be useless; but then again, software and hardware can’t be the replication technology, and can’t work for reverse-engineering. The biology of consciousness does not work with software running on hardware.
Lagayascienza wrote: December 20th, 2024, 4:33 am
Count Lucanor wrote:Surely, some once hypothesized that machines running algorithmic processes could be the equivalent of conscious processes in biological machines (just as people once believed that living beings were some sort of clock mechanism), but there’s not one bit of evidence to support that idea; in fact, evidence points to the opposite being true.
I think that consciousness and intelligence are separate but probably linked issues. But first, let me say that there is much about living organisms that is, indeed, mechanical. The arm, for example, is a type of lever-and-pulley arrangement; the circulatory system is governed to some degree by fluid mechanics; and so on. Of course, we are not going to build artificial consciousness out of fluid-driven levers and pulleys. The difficulty is in the detail. What happens in natural neural networks does so at the microscopic level, and so the details are difficult to study. But not impossible.
As organisms, we are built of dynamic physical systems, and in that sense can be considered “mechanical”. There’s no issue with that. But that doesn’t mean that a machine with any other internal system can replicate our biological machinery or a part of it. Computer machines, in any case, are built quite unlike our biological machines. Any software running on those machines that pretends to simulate, very loosely, connections between neurons not only falls far short of simulating all the complex mechanisms of a fully conscious entity; it is still just a virtual simulation. It is not doing the physical job of producing consciousness, just as a computational hurricane, which simulates in a virtual model the mechanics of a hurricane by means of encoded instructions written by a software engineer, is not producing a real hurricane. Yet the computational theory of mind asserts that consciousness is just that: computation.
Lagayascienza wrote: December 20th, 2024, 4:33 am
Count Lucanor wrote:It’s where the demystification of consciousness should start.
I'm not sure what you mean here. Start where? Do you mean that we must start with the idea that intelligence and consciousness depend on the laws of physics and on electro-chemical principles? If so, then I agree. I can see no other place to start.
You said that we will need science to demystify consciousness. I agree, but that’s why we must start by getting rid of the computational theory of mind (CTM). To be clear, CTM is a philosophical theory, which, as defended by Jerry Fodor, goes like this:

One of the basic philosophical arguments for CTM is that it can make clear how thought and content are causally relevant in the physical world. It does this by saying thoughts are syntactic entities that are computed over: their form makes them causally relevant in just the same way that the form makes fragments of source code in a computer causally relevant. This basic argument may be made more specific in various ways. For example, Allen Newell couched it in terms of the physical symbol system hypothesis, according to which being a physical symbol system (a physical computer) is a necessary and sufficient condition of thinking.
Source: Internet Encyclopedia of Philosophy.
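To make the “syntactic entities computed over” idea concrete, here is a toy sketch of a physical symbol system (my own illustration, not Fodor’s or Newell’s): a rule fires purely on the form of the tokens in memory, and new tokens are derived without the system understanding anything.

# Toy symbol system, illustration only. Rules match tokens by their
# literal form; whatever meaning we read into "RAINING" plays no role
# in the derivation -- which is CTM's point about syntax doing the
# causal work.
rules = [
    ({"RAINING", "RAIN -> WET_GROUND"}, "WET_GROUND"),
    ({"WET_GROUND", "WET_GROUND -> SLIPPERY"}, "SLIPPERY"),
]
memory = {"RAINING", "RAIN -> WET_GROUND", "WET_GROUND -> SLIPPERY"}

changed = True
while changed:
    changed = False
    for lhs, rhs in rules:
        if lhs <= memory and rhs not in memory:  # purely formal match
            memory.add(rhs)
            changed = True

print(sorted(memory))  # "SLIPPERY" is derived by form alone

Whether such form-driven token shuffling could ever amount to thought is, of course, exactly what is in dispute between us.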
Lagayascienza wrote: December 20th, 2024, 4:33 am
To build artificial consciousness we will need a better understanding of the physical structure and the details of the electro-chemical processes that occur in natural neural networks. And, if human-level intelligence depends on consciousness, then we will need to produce consciousness in artificial neural networks if we want them to have human-level intelligence. But that is only if we’re after human-level intelligence; I can imagine less intelligent AIs that would not need such a high level of consciousness.
I could agree with you, except that you use the term “neural network”, which I see as taken directly from the field of computers. We wouldn’t call our cognitive apparatus, with its nervous system, a “neural network” if we didn’t want to equate it with computer simulations of connections between neurons. So yes, we will need to produce consciousness before producing intelligence, but how will we do that? I can’t tell, but it won’t be with software in a computer.
Lagayascienza wrote: December 20th, 2024, 4:33 am
Count Lucanor wrote:I’m sure that computer science and robotics will have a say on how to produce technology that can perform tasks that only conscious living beings can perform now, but that’s no different from any technology that has been produced since civilization began. We don’t pretend the watermill is smart. Computerized simulations will also be a helpful tool for modeling physical processes in the biological sciences, in the way that models have always been simplifications of a part of reality, built to focus on key relations and gain insights. Models are by definition limited, though, and we are not capable of modeling everything in the universe.
That is true. But we don’t need to model everything perfectly. Just well enough so that we can build something that does the job we want it to do. And sometimes we find ways of doing things that are better than the solutions evolution came up with. That is how we became able to fly higher and faster than the birds.
But notice that I mentioned models in the context of gaining insight into how a natural process works, not in the context of engineering a technical solution for a device we want to build. Of course, computers will be useful for prototyping, but that’s a completely different situation. We don’t simulate hurricanes or earthquakes in order to produce hurricanes and earthquakes ourselves. OTOH, I’m sure that with LLMs and other technologies marvelous things might be built, although not anything that involves consciousness and intelligence.
Lagayascienza wrote: December 20th, 2024, 4:33 am
Count Lucanor wrote:Models must be fed with parameters predetermined by researchers, so there must be theoretical approaches and trial and error; you cannot expect models to reproduce reality exactly and then let them run for real things to emerge magically. They are conceptual tools. As things get more complex and depend on emergent properties, models become less useful, so you have to go back and enlarge them. How much would you have to enlarge the models to begin to understand consciousness?
With respect to consciousness, quite a lot, I expect.
Count Lucanor wrote:My guess is that you will need to simulate the whole biological organism.
Only if we want to build something that can do all the things that we can do. I think we will need to be less ambitious to begin with.
Again, I find it necessary to separate the need for understanding consciousness, a natural phenomenon, from the process of building something that replicates the natural process. That’s also different from building something that, without reproducing the natural solutions, obtains the same or better results. No one thinks (or at least they shouldn’t think) that a pocket calculator is a replication of a natural process. It gets you the mathematical solutions, and that’s all that really matters for practical purposes. We might not need consciousness and intelligence to build powerful, sophisticated machines that replace the natural way of doing things, and so they will remain just another tool for humans to use.
Lagayascienza wrote: December 20th, 2024, 4:33 am
Count Lucanor wrote:And even after gaining that knowledge, it doesn’t mean we would be able to reproduce it.
True. Technology often has to catch up in order to realize a model, and we often have to find different ways of doing things to get the result we are after. And a lot of natural stuff can be ditched. There was no point putting feathers on airplane wings, for example, and if we were developing the male human urinary tract from scratch we wouldn't route the urethra through the prostate. Similarly, I expect there will be things about natural neural networks that won't need to be reproduced exactly, and we may find ways to improve on nature.
That is, more or less, my point. The whole “Design Thinking” approach, now common in the tech world, means focusing on the real problem (achieving flight, for example) rather than offering a preconceived solution (something imitating bird wings) and making that the goal of the project (a false problem). You don’t actually want to develop the male urinary tract; perhaps what you want is an alternative way to remove waste products from the body. And you should stop pretending that this new technology is how the male urinary tract actually works.
Lagayascienza wrote: December 20th, 2024, 4:33 am
Count Lucanor wrote:It feels tempting to say: just give it time. But we don’t know; life and history are contingent. Theoretically it is possible, because it’s a physical world and we humans deal with it, so it’s within our reach; but practically, technically, it remains a big question. We don’t have infinite capabilities.
Yes, technology would have to catch up with the model. But the reach of our capacities has been extended over history, and enormously so since the advent of modern science. Scientific progress is open-ended and limited only by what we are capable of understanding. We cannot build what we do not understand. There may be a limit to human understanding; I don't know. But I don’t think that limit will be reached in creating artificial neural networks that are intelligent and have some level of consciousness.
But in this case the limit is not that of human capabilities; it is the impossibility of having software as the basis of consciousness. You can’t make apple pie with oranges.
Lagayascienza wrote: December 20th, 2024, 4:33 am
Count Lucanor wrote:As for what I think intelligence is, first let me remind you that, for the purpose of this discussion, I’ve been willing to work with any preliminary definition, so that we could move forward from there. Intelligence is not a technical, universal, well-defined term; nor is consciousness. So we could work with an operational, preliminary definition to tackle the problems presented in this forum. Unfortunately, this has remained elusive. My own definition would take into account the enabling conditions and the specific attributes of something that we observe in nature and call “intelligence”, based on a hierarchy of organic states. I will have to elaborate on this in a separate post.
Ok. In that case, I propose the following two operational definitions. Feel free to extend or constrain them.
OK, that’s nice, but that will require an even longer response. I’ll leave it for my next post.
#471063
Of course AI is intelligent. It can do certain intelligent things. It understands language to some extent and responds appropriately, to some extent. Consider the evolution of LLMs: what is the difference between today's LLMs and the first attempts? Today's LLMs are more intelligent.

It's a limited and specialised intelligence. A slime mould also has limited and specialised intelligence. In time, AI's intelligence will increase, possibly exponentially.
#471076
Language is the invention of intelligence. The question is whether a semantic tool can define and limit the dimensional scope of intelligence. Where are the partitions between language, consciousness and volition? Ultimately, any definition seems more like a personal attempt to define our own intelligence, for we need a test to find out what is in the other. I am more interested in the source of intelligence and its uses. Ultimately, we use intelligence to get what we want, that is, to pursue a “personal” agenda (which should be part of any definition).
What if the personal agenda is to provide a definition of intelligence? Where is the partition between intelligence and the definition of intelligence? It is a “paradox”: an infinite loop adding words to the definition; a variable output that depends on environmental variables and constraints; a variable program loop (DNA) using experiential data in two memory systems (DNA and personal memories, like the light, the smell, the food). It is obvious that the smell attracts the other with the food; “ehh… etc.”
Intelligence is the programming of the smell and all the other uncovered variables. There is a pattern that intelligence chases, consciously and unconsciously (innocently); the judging of the chasing; the memory of the judging; and the sentence of a definition in an infinite loop, adding data and process. What if I hate the smell with no logical (process) explanation? Maybe an artificial (spray) smell will help, by introducing the thought of the artificial into the intelligence.
#471254
Lagayascienza
First, I’ll try to go through your definitions to see where we can find common ground. Then I’ll propose mine. This is all in a spirit of thinking as we write, so we might need corrections along the way.
“Lagayascienza” wrote: INTELLIGENCE: the capacity for abstraction, logic, understanding, learning, reasoning, planning, creativity, critical thinking, and problem-solving. And the ability to perceive or infer information; and to retain it as knowledge to be applied flexibly to adaptive behaviours within an environment or context.
That looks fine and seems correct, but I think we need to separate the cognitive functions in themselves, the physical configurations that allow for those functions, and the external signs of the whole process expressed through the behavior of the agent. I’m not saying that the latter are not important to consider, but that ultimately the definition should point to the concrete, specific process. It is important to agree on whether we define intelligence as a process in itself or as a general capacity of the agent to behave in certain ways, especially if we’re considering a non-living entity as a potential candidate for housing intelligence. For now I subscribe to the first, a process in itself, so I’ll leave out problem-solving and applying knowledge. Also, preliminarily, I’ll be cautious with critical thinking, creativity and planning, which look like second-hand operations, more like implemented behaviors. Learning, in particular, now seems to be a feature of all kinds of organisms, even those lacking nervous systems and brains, so I’ll put that aside for a moment. Abstraction, logic, reasoning and processing information do seem like candidates for intelligence.
“Lagayascienza” wrote:
CONSCIOUSNESS: self-awareness; an awareness of internal and external existence. The ability to think and imagine the counterfactual. (I’m sure this could be extended.)
I think this is correct, although it is also redundant. Can’t we define awareness as “being conscious of”? And aren’t thinking and imagining implicit in consciousness? So perhaps self-awareness, awareness and consciousness are the same thing. But what is it anyway? I will get into that later, but in general it must be something that can be identified as subjectivity, as some sense of being in the world, of “feeling like something”.
“Lagayascienza” wrote:
Looking first at intelligence with the above definition in mind, I have two questions.

1.) Can we say that any current so-called AIs can do any or all of the things mentioned in our definition of intelligence?
And,

2.) Must an AI be able to do all of those things to be considered intelligent?


WRT the first question, I think it is clear that current AIs can do some of the things mentioned in our definition of intelligence. They can do logical operations. They can understand (in some sense) what a question is, what the question is asking, and provide answers. They can learn. They can solve mathematical problems. Some of them can perceive information (facial recognition, or the data in a data-set, for example) and they can retain it as knowledge. I'm not sure any AI does "critical thinking" as we understand the term, and I know that they are not yet very flexible.
Making the adjustments to your definition that I mentioned above, it seems obvious that AIs are not a bit intelligent, since they are incapable of abstraction, logic and reasoning. They perform other operations unconsciously, following steps instructed by humans, and obviously achieve the results expected by humans, but those are not signs of intelligence in machines; they are signs of human intelligence. To be intelligent you need to be conscious first, and AIs lack subjectivity.
“Lagayascienza” wrote:
WRT the second question, I am unsure. When it comes to the ability to infer information and to apply knowledge flexibly in new contexts or environments, maybe some level of consciousness will be required. I think human-level consciousness would certainly be required for an artificial neural network to do everything that humans can do, and it would also require some form of embodiment and some sort of sensory array.

Of course, much more could be said but I'll leave the issue of intelligence there for today. As for consciousness, I’m much less sure about the proposed definition, although I think it captures some of what consciousness entails. I hope you can help enlarge or refine the definition.
See my objections to the definition of consciousness above. In any case, for us even to consider the possibility of intelligence in AI, human intervention in all the tasks you mentioned would have to be completely absent. By that I mean many things, such as solving a problem that the AI has never seen before, has not been programmed to perform, and has not been trained to solve, without resorting to any stored data.
“Lagayascienza” wrote:
In any case, I think that to achieve conscious awareness, an artificial neural network would certainly need to be able to produce models of itself and the external world it inhabits, and to imagine changes to those models. The models might be built with “reference frames” (as per Hawkins, for example).
I have thought about it, but to have a model is to be able to abstract from the unstructured manifold of experiences (to use Kant’s term) and create a concept. This would mean that experience (the structured sense of the world) implies consciousness, that they are the same thing. If abstraction belongs to the realm of intelligence, these AIs would be intelligent, not only conscious.
“Lagayascienza” wrote:
On a general theoretical level, I think it must be possible in theory to build artificial consciousness because our physical brains can produce it in a natural physical substrate.
That is not disputed, as said before.
“Lagayascienza” wrote:
As I’ve said many times, to build consciousness in an artificial substrate we are first going to need a much better model of the way natural neural networks do what they do. However, I doubt that we would need to reproduce natural neurons, human brains and peripheral nervous systems in every minute detail. Artificial consciousness will probably be housed in a different neural architecture to ours, just as airplanes achieve flight with an aeronautical architecture different from that of birds.

For me to be right about the above, consciousness must be produced by physical processes in natural neural networks. These are physical systems.
I still dispute the use of the term “natural neural network”; it does not appropriately describe the biological systems involved in consciousness and intelligence. So, in every sentence where you used that term above, I would replace it with “cognitive system” or something similar.
About the airplane analogy: there are many ways to fly, and at some point it became clear that it was a matter of aerodynamics and physical laws. In the case of cognition, we haven’t figured out how it works, although we have some clues about some necessary conditions. So, unlike with airplanes, we don’t know if consciousness can be produced in any other way. We know that cephalopods have a nervous system quite different from that of vertebrates, and they are pretty smart animals, which shows that other routes are possible. One thing to solve is whether the processes of life are necessarily intertwined with those of consciousness or not. I believe they are, and no one has demonstrated that it can be otherwise. I want to make this clear: the existence of “neural networks” in AI is NOT a sign in that direction.
“Lagayascienza” wrote:
If that is not the case, then consciousness must be produced in some other way. There are a variety of theories about this. The brain as antenna tuning into cosmic consciousness, for example. However, there is absolutely zero empirical evidence to support such ideas and there is no way to test them. I therefore discount them. The only other option is some sort of supernatural magic which I also dismiss.
I agree with you. I cannot support any non-materialist view in the approach to these problems, nor any contribution outside of science and ontological materialism. Mystics cannot produce any valuable insight.
“Lagayascienza” wrote:
Anyway, that’s all I feel able to write today. Thanks for reading. I’ll be interested to read what you think of the definitions proposed, your reaction to my thoughts about intelligence and consciousness, and your answers to the questions I posed in light of the proposed definitions of intelligence and consciousness.
OK, I was hoping to include here my own take on the subject of definitions; however, while responding to yours and writing mine, not only has this post become too long, but I also need to review my draft. I hope you can have a little more patience until my next post.
#471273
Thanks, Count Lucanor. Sorry about the delay in responding to your previous post - Christmas got in the way and I'm still working on my response. And thank you for your most recent post. This discussion is certainly making me think and, in order to say anything intelligent, I'm having to do a heap of difficult reading. That, too, is slowing things down.

Might I suggest that, as well as polishing our definitions of "intelligence" and "consciousness", we also give some thought to a definition of the term "computation". It seems to have been a stumbling block for us. I believe brains do actually perform computations but, from what you have said, I take it that you do not think that brains compute. I'm trying to work out whether this disagreement (if it exists) stems from differences in our understanding of the terms "compute" and "computation". If it does, then we may be able to come up with a definition we can agree on and, if we do that, the discussion may proceed more smoothly. One definition I like is that to "compute" is to "manipulate representations". Maybe both brains and computers do this (albeit, perhaps, somewhat differently).

There is a fairly recent (2022) paper entitled "How (and why) to think that the brain is literally a computer" by Corey J. Maley, in Front. Comput. Sci. 4:970396, doi: 10.3389/fcomp.2022.970396 (freely available to download), which deals with exactly this, and also with the question of whether the brain is an analogue computer (rather than digital) or something else entirely. The paper is not very long and not too difficult to read. It would help me if we could discuss it.
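For anyone who wants the digital/analogue contrast in concrete terms before reading the paper, here is a rough sketch (my own toy illustration, not taken from Maley's paper): a digital representation encodes a quantity in discrete symbols read by convention, while an analogue one carries it in the magnitude of a continuously variable stand-in.

# Toy digital-vs-analogue illustration (not from Maley's paper).
# Digital: the quantity 13 as a string of binary digits, read by
# convention, digit by digit.
digital_13 = format(13, "b")          # '1101'

# Analogue: the quantity 13 as the magnitude of a continuous variable
# (imagine a voltage): doubling the quantity doubles the magnitude.
volts_per_unit = 0.5
analog_13 = 13 * volts_per_unit       # 6.5 "volts"

# Addition in each scheme: digital addition manipulates symbol strings;
# analogue addition just sums magnitudes, as a circuit summing currents would.
digital_sum = format(int(digital_13, 2) + int("100", 2), "b")  # 13 + 4
analog_sum = analog_13 + 4 * volts_per_unit

print("digital:", digital_13, "->", digital_sum)
print("analogue:", analog_13, "->", analog_sum, "volts")

On that way of carving things up, the question the paper presses is which kind of "manipulating representations", if either, the brain's activity most resembles.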

Thanks.