#470650
AI might perceive redness as a wave of amplitude and frequency correlating with the appropriate language. Can AI “think” of the experience as gratifying? What if gratifying is a loop of redness that can be controlled to an optimal point by an intrinsic method? How would AI describe it? ... Would that be the same as human gratification, on the premise that such an evolving loop exists? From one to ten…

If the AI only knows the physical information, as with the example of red as an amplitude and frequency, then it cannot know what redness [qualia/quality] is. It is observing a wavelength of light, and that is its informational reality.
If, however, the red sky at night does have the qualia/quality of redness, one which can be replicated in terms of the physical wavelength and then emitted at the other end, e.g. in our visual cortex, the question then becomes: how can it know what the qualia is? Ergo, as I see it, there has to be a mind present, because it is only mind that registers qualia!
There are then two kinds of intelligence, one that is being experienced and one that is not, but the latter is doing the same thing in all other contexts of intelligence. However, we can say what our experiences are, write about them and whatnot. Ergo experience itself is a facet of the intellect. The same seems to be true of observation and subjectivity. I guess there will be no true AI poets, just mimics.

...unless AI does learn what qualia are and experiences them. It should be no different for AI than it is for the sky, or maybe for us. One could grow a human brain and simultaneously manufacture a synthetic imitation – a 3D processor, perhaps. What would give one thing life and mind, but not the other? Would the cyborg ‘receive’ an ‘observer’ too? Is this something which occurs automatically, e.g. if you manufactured an organic wasp cell by cell, would it not just be a living wasp like any other?

_
#470659
amorphos_ii wrote: December 9th, 2024, 5:21 pm
AI might perceive redness as a wave of amplitude and frequency correlating with the appropriate language. Can AI “think” of the experience as gratifying? What if gratifying is a loop of redness that can be controlled to an optimal point by an intrinsic method? How would AI describe it? ... Would that be the same as human gratification, on the premise that such an evolving loop exists? From one to ten…

If the AI only knows the physical information, as with the example of red as an amplitude and frequency, then it cannot know what redness [qualia/quality] is. It is observing a wavelength of light, and that is its informational reality.
If, however, the red sky at night does have the qualia/quality of redness, one which can be replicated in terms of the physical wavelength and then emitted at the other end, e.g. in our visual cortex, the question then becomes: how can it know what the qualia is? Ergo, as I see it, there has to be a mind present, because it is only mind that registers qualia!
There are then two kinds of intelligence, one that is being experienced and one that is not, but the latter is doing the same thing in all other contexts of intelligence. However, we can say what our experiences are, write about them and whatnot. Ergo experience itself is a facet of the intellect. The same seems to be true of observation and subjectivity. I guess there will be no true AI poets, just mimics.

...unless AI does learn what qualia are and experiences them. It should be no different for AI than it is for the sky, or maybe for us. One could grow a human brain and simultaneously manufacture a synthetic imitation – a 3D processor, perhaps. What would give one thing life and mind, but not the other? Would the cyborg ‘receive’ an ‘observer’ too? Is this something which occurs automatically, e.g. if you manufactured an organic wasp cell by cell, would it not just be a living wasp like any other?

_
Re: redness, AI will know what humans identify as red, but you'd expect it to also see colours that we can't.
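To make that concrete, here is a toy sketch (purely illustrative, not any real AI system) of what "knowing red only as physical information" amounts to: a lookup from measured wavelength to a colour word. Nothing in the table is the redness itself, and it happily labels light that no human can see.

```python
# Toy illustration only: an "AI" whose entire knowledge of colour is a table
# of numbers and the words humans attach to them. Band edges are approximate.

VISIBLE_BANDS_NM = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def label_for_wavelength(wavelength_nm: float) -> str:
    """Map a measured wavelength to the colour word humans would use for it."""
    for low, high, name in VISIBLE_BANDS_NM:
        if low <= wavelength_nm < high:
            return name
    return "no human colour word"  # e.g. infrared or ultraviolet

print(label_for_wavelength(700))   # a red sky at night -> "red"
print(label_for_wavelength(1000))  # infrared -> "no human colour word"
```

Whether anything beyond that table is even possible for a machine is exactly the question being asked above.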

It may take many iterations for AI to achieve sentience, maybe millions of years, just as life operated fine without sentience for a very long time, until it reached a point where sentience was advantageous in some settings. Maybe it will happen faster, but that seems more iffy.

If different AIs end up competing for the same resources in the far future, there could be an evolutionary "arms race", resulting in a chain of adaptations and counter-adaptations.
#470661
Lagayascienza wrote: December 5th, 2024, 9:36 pm We might, for example, start by thinking about what a neuron is and the electro-chemical processes that occur in networks of neurons. Then we might look at how those processes produce awareness and intelligence. That’s what Hawkins and Bennet do, and it is that sort of thing I’d like to discuss if we could.

Sure, we can also go to SEP and look at the philosophy around consciousness and intelligence and what it all might mean but, for me, it would be easier to do that once we are on firm ground in respect of what is known and what is yet to be discovered.
Count Lucanor wrote: What is known about what? About intelligence? How will you do that without stating what you mean by intelligence in the first place? And neurons, why should we be looking at neurons first? That’s probably because you already have a preconception of why and how neurons enter the picture of intelligence, which is fine, but I already placed a couple of challenges to that, one of them being: we find neurons all over the bodies of organisms, yet it seems they are not always involved in processes that fall within what some define as intelligence. The parasympathetic nervous system is a neural network controlling organic processes unconsciously. Does that point to intelligence in action? Surely, we might clear up any doubt if we started by defining what intelligence is.
We know what intelligence is. The usual definitions, such as the one at Wiki, are fine. General intelligence (GI) is what we have. Any artificial neural network that can do all, or much of, what we can do, will be intelligent - it will be an AGI. What is known incontrovertibly about intelligence is that it emerges from the processes that occur in natural neural networks. The focus of such neural networks is the brain. However, our entire neural network is not necessary for consciousness or intelligence. For example, motor areas of the brain can be disconnected from nerves lower down in the body (as occurs in quadriplegia) while consciousness and intelligence remain intact. And much of what goes on unconsciously in the sympathetic nervous system (such as the control of the heart) is similarly irrelevant to consciousness and intelligence. We can be brain-dead and still have a heartbeat, but for consciousness and intelligence we need, at a minimum, a highly functional brain. Put the brain out of action (with general anesthesia, for example) and consciousness and intelligent behavior cease.

What neuroscientists and computer scientists will need in order to produce AGI is a more detailed understanding of how the brain does what it does. I do not think that AGI can be built with chess-playing machines or LLMs. That is just number-crunching and automation, not intelligence. But once we can build artificial neural networks of a similar complexity, connectivity and processing power to the brain, then we will be able to build AGI. We are some way from that now because the focus has been on machines that do only one or a limited number of things for which intelligence is unnecessary. And progress has been slow because the neuroscience and computer folks don't talk much to each other. But that is changing.
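To be concrete about what today's artificial "neurons" actually do, here is a schematic sketch (not any particular system): a weighted sum of inputs pushed through a simple nonlinearity. The gap between this abstraction and a real neuron, with its membranes, neurotransmitters and supporting glia, is part of why I say we are still some way off.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One unit of a typical artificial neural network: a weighted sum of its
    inputs passed through a logistic sigmoid. This is essentially the whole
    abstraction that current systems stack in enormous numbers."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # output squashed into (0, 1)

# Three inputs with weights chosen arbitrarily for the example.
print(artificial_neuron([0.2, 0.9, 0.1], [0.5, -1.2, 2.0], bias=0.1))
```

"Similar complexity, connectivity and processing power to the brain" would mean wiring up billions of units like this, and whether that alone is enough is the open question.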

We already know quite a lot about our own neural network and the neural networks of other animals. What our brains do is learn, house memory, build models of the world we inhabit, and produce moment to moment updates about our interaction with that world. We know quite a bit already about the brain structures that house memory, and we know that learning involves making new synaptic connections. And we know how data that is fed to the brain via our sensorium enables the updating of our mental models moment to moment.

There is still a lot to learn but once we have a fuller picture of how natural neural networks do what they do, and once the technology to reproduce that in an artificial substrate is further developed, AGI should quickly follow.
#470663
Count Lucanor
I have pointed out that there is not an undifferentiated, homogenous, continuous neural network in living beings, but a complex system organized in different anatomical parts with different functions, including hundreds of different types of neurons, and they operate several organic systems within an organism, which also differ across the whole range of organic forms, from insects to humans. I propose that this is very much unlike the computational neural networks, so there’s something there to talk about and try to understand in physical systems. At least it would help us test your hypothesis that biological neural networks can be replicated artificially, independently of the fact that we have not reached a definition of what intelligence is.

Sorry to butt in. It's my (very basic) understanding that individual neurons across all species can vary in particulars like the number of specific parts, their length, and coatings which change the speed of electrical activity, and suchlike. But they are all much the same in terms of what components they have, like Lego bricks that might have different shapes but all share the same basic structure. And as far as we can tell, neurons don't contain some special part which other cells don't.

This strikes me as significant when trying to understand how ''something it is like to be'' conscious experience might manifest in a working brain. It suggests that the configuration and patterns of interactions between cells are the key to conscious experience manifesting, rather than some special component of a neuron. 

If this is the case, then that would support the proposal that a different substrate which can functionally, electro-chemically mimic the brain's activity would manifest conscious experience. I.e. that there could be 'something it is like' to be a machine which functionally mimics organic brain activity.

Maybe even that certain patterns of interactions of any type of substrate could manifest conscious experience.  And even more speculatively, that any interaction of any matter could.  And perhaps does - we just don't recognise it.

I'd say our ignorance, and our inability to confidently infer from what scientific inquiry can (currently) offer, leave these questions wide open.
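For what it's worth, here is a toy sketch of the kind of simplified model computational neuroscientists use for a neuron's electro-chemical signalling, a leaky integrate-and-fire unit. It is purely illustrative, and obviously a cartoon of a real cell, but it gives a feel for what "functionally mimicking the brain's activity in another substrate" might mean at the lowest level.

```python
def leaky_integrate_and_fire(input_current, steps=50, threshold=1.0,
                             leak=0.1, dt=1.0):
    """Cartoon neuron: a membrane 'voltage' accumulates input, leaks back
    toward rest, and emits a spike (then resets) when it crosses a threshold.
    Purely illustrative; real neurons are vastly more complicated."""
    voltage, spike_times = 0.0, []
    for step in range(steps):
        voltage += dt * (input_current - leak * voltage)
        if voltage >= threshold:
            spike_times.append(step)  # record when the unit fired
            voltage = 0.0             # reset after the spike
    return spike_times

# A steady input drive produces a regular spike train:
print(leaky_integrate_and_fire(input_current=0.15))
```

If units like these (or something much richer) were wired up to mimic a brain's patterns of interaction, the open question above is whether there would be 'something it is like' to be that system.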
#470672
Lagayascienza wrote: December 10th, 2024, 5:19 am
We know what intelligence is.
Disputable. I have given enough arguments as to why it is disputable. Still waiting for an answer.
Lagayascienza wrote: December 10th, 2024, 5:19 amThe usual definitions, such as the one at Wiki, are fine.
But that one from the Wiki appears not to be consistent with other definitions of intelligence you have endorsed here. Which one will you finally pick? Hawkins made up his mind: intelligence is, for him, only human intelligence, the ability that only our human neocortex can provide and that makes us "the intelligent species". Do you agree with him?
Lagayascienza wrote: December 10th, 2024, 5:19 am General intelligence (GI) is what we have. Any artificial neural network that can do all, or much of, what we can do, will be intelligent - it will be an AGI.
Who is "we"? Humans? If so, then doing what a beetle does, that is not intelligence, right? "An animal doesn't need a neocortex to live a complex life", says Hawkins. And again, there's just too much ambiguity: what do you mean by "what we can do"? Our observable, external behavior? Is it what our cognitive apparatus actually does?
Lagayascienza wrote: December 10th, 2024, 5:19 am What is known incontrovertibly about intelligence is that it emerges from the processes that occur in natural neural networks.
If we care about the wording, that is disputable. It seems you reduce intelligence to neural networks, but can it be so reduced? What we do know is that organisms with a complex nervous apparatus, comprised of several organs with different functions, can be intelligent. The bodies of all complex animals house lots of systems regulated by nerves, neural networks if you like, yet only the portion related to a neocortex seems to be associated with intelligence, if we adopt that definition of intelligence. That puts a vast number of neural networks outside the intelligence equation, so how can it be "incontrovertible" that intelligence emerges from neural networks? We could also say that, incontrovertibly, intelligence arises from processes that occur in cells. Well, sure, but that doesn't say anything really useful.
Lagayascienza wrote: December 10th, 2024, 5:19 am The focus of such neural networks is the brain.
Why?
Lagayascienza wrote: December 10th, 2024, 5:19 am However, our entire neural network is not necessary for consciousness or intelligence. For example, motor areas of the brain can be disconnected from nerves lower down in the body (as occurs in quadriplegia) while consciousness and intelligence remain intact. And much of what goes on unconsciously in the sympathetic nervous system (such as the control of the heart) is similarly irrelevant to consciousness and intelligence. We can be brain-dead and still have a heartbeat, but for consciousness and intelligence we need, at a minimum, a highly functional brain. Put the brain out of action (with general anesthesia, for example) and consciousness and intelligent behavior cease.
So, you agree that the presence of neural networks does not necessarily guarantee the presence of intelligent processes.
Lagayascienza wrote: December 10th, 2024, 5:19 am What neuroscientists and computer scientists will need in order to produce AGI is a more detailed understanding of how the brain does what it does.
Still lacking a justification of why the brain alone.
Lagayascienza wrote: December 10th, 2024, 5:19 am But once we can build artificial neural networks of a similar complexity, connectivity and processing power to the brain, then we will be able to build AGI.
So far, it does not look as though that is the case. First, it is only a hypothesis that you will get intelligence out of neural networks alone, since they would have to be organized with the complexity of the nervous systems of living organisms. Otherwise, it's like saying that an organism is ultimately reducible to its cells, and that once you have replicated cells from a particular tissue and its functions, you have solved the mystery of how to make an artificial complex organism as a whole: just make a tissue of similar complexity.
Lagayascienza wrote: December 10th, 2024, 5:19 am We are some way from that now because the focus has been on machines that do only one or a limited number of things for which intelligence is unnecessary. And progress has been slow because the neuroscience and computer folks don't talk much to each other. But that is changing.
We are on the wrong path because the mainstream theoretical framework is not the right one. We started by using a metaphor of cognition in living beings to apply to computer machines, and then we used that as another metaphor to think of our own cognition. Our intelligence is being redefined in terms of computer technology, assuming the computational theory of mind is true. I don't see any evidence of that currently changing.
Lagayascienza wrote: December 10th, 2024, 5:19 am We already know quite a lot about our own neural network and the neural networks of other animals. What our brains do is learn, house memory, build models of the world we inhabit, and produce moment to moment updates about our interaction with that world. We know quite a bit already about the brain structures that house memory, and we know that learning involves making new synaptic connections. And we know how data that is fed to the brain via our sensorium enables the updating of our mental models moment to moment.

There is still a lot to learn but once we have a fuller picture of how natural neural networks do what they do, and once the technology to reproduce that in an artificial substrate is further developed, AGI should quickly follow.
That's disputable. Quoting Hawkins: "how intelligence arises from cells in your head is still a profound mystery. As more puzzle pieces are collected each year, it sometimes feels as if we are getting further from understanding the brain, not closer".
#470674
Gertie wrote: December 10th, 2024, 8:57 am Count Lucanor
I have pointed out that there is not an undifferentiated, homogenous, continuous neural network in living beings, but a complex system organized in different anatomical parts with different functions, including hundreds of different types of neurons, and they operate several organic systems within an organism, which also differ across the whole range of organic forms, from insects to humans. I propose that this is very much unlike the computational neural networks, so there’s something there to talk about and try to understand in physical systems. At least it would help us test your hypothesis that biological neural networks can be replicated artificially, independently of the fact that we have not reached a definition of what intelligence is.

Sorry to butt in. It's my (very basic) understanding that individual neurons across all species can vary in particulars like the number of specific parts, their length, and coatings which change the speed of electrical activity, and suchlike. But they are all much the same in terms of what components they have, like Lego bricks that might have different shapes but all share the same basic structure. And as far as we can tell, neurons don't contain some special part which other cells don't.
First, your basic understanding is wrong. There are hundreds of types of cells, although generally classified within three larger groups when talking about most of the nervous system. When it comes to the brain, it gets even more complicated.

Secondly, I don't know how that goes to my point. Your cognitive apparatus is not made of just a network of neurons, in other words, it cannot be reduced to it. It has different anatomical parts with different functions, so one could say that the neural networks in our bodies need to be organized physiologically in a certain way to operate and produce cognition. Dismissing the importance of that is pure reductionism aimed at facilitating the computational metaphor.
#470679
In a speculative way, I could decide that a worm is born with the capabilities of being a worm, fitting the allowed intelligence (capacities) to the worm form. PubMed has interesting articles about nematodes, specifically TGF-beta signaling in parasitic nematodes evolving from the ground environment. Of course, there is an area of research that involves the use of compounds to manipulate TGF-beta signaling. This is called artificial TGF-beta signaling… etc… and the cure of arthritis and other diseases of the immune system. How does it work? The most relevant part is that signaling exists. IMO there is a hierarchy of intelligence and of dangerous activities. Is there comprehensive knowledge of how the worms interact with the ground? Optimistic scientists might construct steel mini excavators with evolving charts in the spectrum of directed evolution.
#470688
Count Lucanor wrote: December 10th, 2024, 1:35 pm
Gertie wrote: December 10th, 2024, 8:57 am Count Lucanor
I have pointed out that there is not an undifferentiated, homogenous, continuous neural network in living beings, but a complex system organized in different anatomical parts with different functions, including hundreds of different types of neurons, and they operate several organic systems within an organism, which also differ across the whole range of organic forms, from insects to humans. I propose that this is very much unlike the computational neural networks, so there’s something there to talk about and try to understand in physical systems. At least it would help us test your hypothesis that biological neural networks can be replicated artificially, independently of the fact that we have not reached a definition of what intelligence is.

Sorry to butt in. It's my (very basic) understanding that individual neurons across all species can vary in particulars like the number of specific parts, their length, and coatings which change the speed of electrical activity, and suchlike. But they are all much the same in terms of what components they have, like Lego bricks that might have different shapes but all share the same basic structure. And as far as we can tell, neurons don't contain some special part which other cells don't.
First, your basic understanding is wrong. There are hundreds of types of cells, although generally classified within three larger groups when talking about most of the nervous system. When it comes to the brain, it gets even more complicated.
Would you agree then that the known difference between the three broad types of neurons and other cells, which looks relevant to conscious experience, is the ability to transfer 'neurotransmitter' ions via axons and dendrites (with some modifiers)? And that neurons associated with hearing, vision, pain, memory, etc. aren't apparently significantly different from each other in type?
Secondly, I don't know how that goes to my point. Your cognitive apparatus is not made of just a network of neurons, in other words, it cannot be reduced to it. It has different anatomical parts with different functions, so one could say that the neural networks in our bodies need to be organized physiologically in a certain way to operate and produce cognition.
Sure. I'm trying to get to what makes working neurons different to eg cells directly involved in digestion, which don't manifest conscious experience. This could help us identify whether neurons possess some key ingredient of consciousness, as opposed to the possibility that any cells with similarly interactive configurations could work.

The notable thing, to me, about neurons is their role in neurotransmission: facilitating the flexible transfer of ions within a highly complex interactive system via axons and dendrites.

If an artificial system could do that, then the question would be: can we test whether that system is conscious?

Dismissing the importance of that is pure reductionism aimed at facilitating the computational metaphor.
No. I don't find metaphors such as 'computation' or 'information processing' helpful here. Information isn't a 'thing in itself' with properties and causal powers which is computed by the brain. Embodied working brains are physical stuff and processes doing presumably physically explainable things.
#470694
Gertie wrote: December 11th, 2024, 7:33 am I'm trying to get to what makes working neurons different to eg cells directly involved in digestion, which don't manifest conscious experience. This could help us identify whether neurons possess some key ingredient of consciousness, as opposed to the possibility that any cells with similarly interactive configurations could work.
When I read this, I thought immediately of a holistic perspective, that might or could suggest that conscious experience is the product of all our cells, not just the ones that are most clearly and obviously involved in it. I wondered if my musing might spark some interest in this discussion?
#470701
Pattern-chaser wrote: December 11th, 2024, 9:19 am
Gertie wrote: December 11th, 2024, 7:33 am I'm trying to get to what makes working neurons different to eg cells directly involved in digestion, which don't manifest conscious experience. This could help us identify whether neurons possess some key ingredient of consciousness, as opposed to the possibility that any cells with similarly interactive configurations could work.
When I read this, I thought immediately of a holistic perspective, that might or could suggest that conscious experience is the product of all our cells, not just the ones that are most clearly and obviously involved in it. I wondered if my musing might spark some interest in this discussion?
For sure, every particular instantiation of a conscious entity will have an explanation which can be contextualised almost boundlessly. And there are plenty of broad-cloth hypotheses about consciousness, which we struggle to choose between because they seem to be untestable. So I wonder, how might that help us answer a question like "Can AI be conscious?", do you think?

If we want to think about why Entity A is conscious and Entity B isn't (as far as we can tell), then it strikes me a sensible approach could be looking for similarities - and especially differences. To try to narrow down the necessary and sufficient conditions for conscious experience to manifest.
#470703
Gertie wrote: December 11th, 2024, 7:33 am I'm trying to get to what makes working neurons different to eg cells directly involved in digestion, which don't manifest conscious experience. This could help us identify whether neurons possess some key ingredient of consciousness, as opposed to the possibility that any cells with similarly interactive configurations could work.
Pattern-chaser wrote: December 11th, 2024, 9:19 am When I read this, I thought immediately of a holistic perspective, that might or could suggest that conscious experience is the product of all our cells, not just the ones that are most clearly and obviously involved in it. I wondered if my musing might spark some interest in this discussion?
Gertie wrote: December 11th, 2024, 9:43 am For sure, every particular instantiation of a conscious entity will have an explanation which can be contextualised almost boundlessly. And there are plenty of broad-cloth hypotheses about consciousness, which we struggle to choose between because they seem to be untestable. So I wonder, how might that help us answer a question like "Can AI be conscious?", do you think?
I don't see a holistic perspective being especially helpful in answering your final question, the one to which this topic is devoted. 😊 But perhaps it might be helpful in answering the question that comes before it?

As humans often do, we are trying to run marathons before we have started crawling. I think our understanding of consciousness, human or otherwise, is close to non-existent. Perhaps a holistic view might help us with this?


Gertie wrote: December 11th, 2024, 9:43 am If we want to think about why Entity A is conscious and Entity B isn't (as far as we can tell), then it strikes me a sensible approach could be looking for similarities - and especially differences. To try to narrow down the necessary and sufficient conditions for conscious experience to manifest.
This is a very analytic approach to a subject that is, perhaps, less suited to that kind of approach? 🤔
#470705
Pattern-chaser wrote: December 9th, 2024, 9:48 am Please excuse me for replying in this slightly informal and unusual way, but otherwise it would've sprawled beyond comfortable reading size. 😉
Empiricist-Bruno wrote: December 8th, 2024, 4:49 pm The same thing appears to be happening with AI. Some computer programmers discovered a way to build what appears to be intelligence. <Not "discovered", as though it was there, ready to be found, but "designed". Software designers created a program, or suite of programs, to meet a need.> Just ask a simple calculator for a complex multiplication and it will respond instantly. For us to be able to perform such a task, it would require us much more time and we'd need to apply our intelligence to come up with the right answer. <Mental arithmetic requires rather less than "intelligence", IMO. It's mechanical, even to the point where some of the first calculating machines were mechanical. Starting with Babbage's Difference Engine, and ending with the mechanical marvels I used to use, when I was in school, to do my homework. 😉> Therefore, the calculator appears to be doing something intelligent, through projection. But yet, we know it's not there. There is no intelligence at work there. <👍> So, how are we going to qualify apparently intelligent work when it's done by electronic circuitry? <We could call it "that which resembles intelligence, but actually is just a seeming; a simulation"?> Calling it Artificial Intelligence makes sense because you can't call it projection intelligence because a projection is something abstract and you want to refer to the intelligence of circuitry as a thing. <And if I want to refer to the fairy dust that makes it work, will you let me describe it as magic, even though it is no such thing? Circuitry is not intelligent.> So, the next logical thing to do is to call it artificial intelligence because this helps you to understand that you are not referring to natural intelligence which is what real intelligence is but instead, you are referring to intelligence that comes from human culture and which is based on the things that we construct ourselves. And that's why it's called that way, and not because there is any intent to deceive any one. And there is no one deceived into thinking that there is actual intelligence in the artificial intelligence; we just call it that way because we don't know of any better option.

<I never thought, or wrote, that there was/is any intention to deceive. I wrote only what I believe to be the truth. Software designers tried for years to create something roughly worthy of the title "Artificial Intelligence". Their work was without significant success. It was just too difficult a task. And so the conscious decision was made to try to simulate intelligence, and that is the path that has led to current AI as we know it. There was no intent to deceive, to the point that AI designers made clear their change of course, and explained it to anyone interested enough to listen. Hence there was and is no deception.>

And we don't know any better option because we generally fail to realize that the intelligence of circuitry is actually a paradox of intelligence. <The "intelligence of circuitry" is a terrible misunderstanding that is too deep, and too misguided, to properly refute here. It does not exist.>
<The "intelligence of circuitry" is a terrible misunderstanding that is too deep, and too misguided, to properly refute here. It does not exist.>

I have somewhat of an issue with your above statement, not because I support in any way the existence of intelligence of circuitry, but because your reply seems to be framed as a rebuttal of an argument or claim that I have made. When I talk about the intelligence of circuitry, I immediately claim that this intelligence is paradoxical and complain that others often fail to notice that. Now, in my mind, if anyone else reads this, he/she would immediately realize that I don't believe that intelligence of circuitry is in any way a form of intelligence like, say, "emotional intelligence". If I say that the intelligence of circuitry is paradoxical, it means I certainly agree that it does not exist. It's just another way to say that. But here you go, apparently saying that my concept of intelligence of circuitry needs to be refuted because it simply doesn't exist, which is exactly what I was showing/explaining. And so I am left a bit perplexed. I will present an example drawn from my own life which may help clarify why paradoxical intelligence isn't intelligence at all:

My dad used to claim that I was indisputably intelligent but that I never used my gifted endowment. Now that's a paradox. An intelligent person who does not use his or her intelligence is an idiot. There's no other way around it. But if the person who does that is intelligent, then that intelligence must be paradoxical. That's a simple logical deduction. So, my dad used a paradox to imply that people shouldn't believe in my intelligence, as it was paradoxical. It simply wasn't really there, although some might believe I have an impressive amount of it. According to my dad, you would need to put a red flag on my intelligence, as it wasn't what it appeared to be. That's from my understanding of paradoxes, because he didn't actually say these last complementary sentences, but I felt (and still feel) they were implied by him.

Now today, I talk about circuitry intelligence in the same way that my dad used to talk about my intelligence. And in reply, I get to notice that you say that I need to understand that circuitry intelligence doesn't exist. Ok, please explain to me what's your point, or why you think you need to inform me of this. Do you think that my dad thought I was really intelligent? Perhaps he is only half intelligent and didn't realize what he was saying about me?

<I never thought, or wrote, that there was/is any intention to deceive. I wrote only what I believe to be the truth. Software designers tried for years to create something roughly worthy of the title "Artificial Intelligence". Their work was without significant success. It was just too difficult a task. And so the conscious decision was made to try to simulate intelligence, and that is the path that has led to current AI as we know it. There was no intent to deceive, to the point that AI designers made clear their change of course, and explained it to anyone interested enough to listen. Hence there was and is no deception.>
Pattern-chaser wrote: December 9th, 2024, 9:48 am No, it's not a paradox. It's a deliberate (and transparent) deception. (Current) AIs are implemented to look intelligent, not to be so.

So, when you wrote that it was a "deliberate (and transparent) deception", were you using the wrong term? What you meant was "delusion", because when a deception is transparent, what you have is actually an oxymoron, or a short paradox, right? No deception is open and transparent about what it is. If it were that way, it would be a confession, no?
Also, when you provided clarification about why you mentioned it was a "deception" here:
Pattern-chaser wrote: December 9th, 2024, 9:48 am It is our human implementation of AI that is transparently deceptive. Deceptive because they emulate the appearance of intelligence without actually being intelligent, and transparent (i.e. widely known and appreciated) because no-one denies this, or claims it isn't so (because it is so).
You switched from the use of the term "deception" to "deceptive". So it seems to me that you have failed to mention that your initial use of the term "deception" was incorrect and that you meant "deceptive". There clearly is a difference between the two terms, although they have the same root. What is deceptive can occur naturally, without any intent. A deception, on the other hand, never occurs that way, to my knowledge.

Pattern-chaser wrote: December 9th, 2024, 9:48 am I wrote only what I believe to be the truth.
Yes, and that makes you such an extraordinarily engaging blogger. Keep this up.

The same thing appears to be happening with AI. Some computer programmers discovered a way to build what appears to be intelligence. <Not "discovered", as though it was there, ready to be found, but "designed". Software designers created a program, or suite of programs, to meet a need.>

Here, I find your comment very interesting because I have no idea as to why you are picking on this part of my text. So let me ask you, how would a design not be there, ready to be found? Wasn't calculus found, or would it be better to say that it was designed? How does selecting such terminology matter?
Pattern-chaser wrote: December 9th, 2024, 9:48 am<Mental arithmetic requires rather less than "intelligence", IMO. It's mechanical, even to the point where some of the first calculating machines were mechanical. Starting with Babbage's Difference Engine, and ending with the mechanical marvels I used to use, when I was in school, to do my homework. 😉>
Have you ever come across a mental arithmetic champion who was rather unintelligent? My own past online dealings with such a world champion suggest quite the opposite. I think people will say that you don't need to be intelligent to count because many animals can clearly do so, and they want to distinguish themselves from animals through their possession of intelligence, which I find racist and offensive.
Pattern-chaser wrote: December 9th, 2024, 9:48 am So, how are we going to qualify apparently intelligent work when it's done by electronic circuitry? <We could call it "that which resembles intelligence, but actually is just a seeming; a simulation"?>
I very much appreciate your attention to detail and to all my points and questions. Here, I notice that if we call it the way you suggest, then we have to deal with a margin: if something resembles another but is a simulation of the other, you have to answer the question of which is which. Which is the thing that imitates the other, and which one is being imitated? You then become open to the idea that intelligence may actually come from things (designed circuitry) and not from people. Or maybe intelligence may come from both? And then you have to work very hard to defend the margin between the true intelligence of one and the simulated one of the other. All of that shows that the way to go is clear: call it paradoxical intelligence.
#470727
Empiricist-Bruno wrote: December 11th, 2024, 12:36 pm If I say that the intelligence of circuitry is paradoxical, it means I certainly agree that it does not exist.
Then why not just say so? 😉


Empiricist-Bruno wrote: December 11th, 2024, 12:36 pm The same thing appears to be happening with AI. Some computer programmers discovered a way to build what appears to be intelligence. <Not "discovered", as though it was there, ready to be found, but "designed". Software designers created a program, or suite of programs, to meet a need.>

Here, I find your comment very interesting because I have no idea as to why you are picking on this part of my text. So let me ask you, how would a design not be there, ready to be found?
I spent 40 years designing digital electronic hardware and software. "Design" is a subject close to my heart.

We can say, figuratively and artistically, that the design was there, waiting to be found. Just as Michelangelo's David was nestling inside a lump of marble, waiting to be 'released'. But outside such artistic use of language, designs do not sit around waiting to be discovered.
#470733
Gertie wrote: December 11th, 2024, 7:33 am
Would you agree then that the known difference between the three broad types of neurons and other cells, which looks relevant to conscious experience, is the ability to transfer 'neurotransmitter' ions via axons and dendrites (with some modifiers)? And that neurons associated with hearing, vision, pain, memory, etc. aren't apparently significantly different from each other in type?
Actually, there are many things associated with conscious experience, and neurons are obviously a key component, although not the only one. Glial cells, for example, don’t function as neurons do, but play a key role. Also, there are lots of neurons doing their job in the cognitive apparatus of complex organisms, including mammals, regulating the operation of several organic systems (such as the heart, liver, and kidneys) while having nothing to do with conscious experience. So it seems obvious that you need neurons plus a lot of other things to produce consciousness.
Gertie wrote: December 11th, 2024, 7:33 am
Secondly, I don't know how that goes to my point. Your cognitive apparatus is not made of just a network of neurons, in other words, it cannot be reduced to it. It has different anatomical parts with different functions, so one could say that the neural networks in our bodies need to be organized physiologically in a certain way to operate and produce cognition.
Sure. I'm trying to get to what makes working neurons different to eg cells directly involved in digestion, which don't manifest conscious experience. This could help us identify whether neurons possess some key ingredient of consciousness, as opposed to the possibility that any cells with similarly interactive configurations could work.
Interestingly, neurons are not dissociated from digestive processes, since the parasympathetic nervous system controls visceral functions. Why then would you reduce neuronal functions to those of conscious experience?
Gertie wrote: December 11th, 2024, 7:33 am
The notable thing, to me, about neurons is their role in neurotransmission: facilitating the flexible transfer of ions within a highly complex interactive system via axons and dendrites.

If an artificial system could do that, then the question would be: can we test whether that system is conscious?
I repeat my question: why only consciousness? Neurons are involved in practically all functions of an organism, so, if you managed to replicate any of those functions in a machine, why wouldn’t you just say that it is an organic system? There are nerves in simple organisms. Would you then consider that any organism with neurons, such as anemones and corals, is conscious?

Dismissing the importance of that is pure reductionism aimed at facilitating the computational metaphor.
Gertie wrote: December 11th, 2024, 7:33 am
No. I don't find metaphors such as 'computation' or 'information processing' helpful here. Information isn't a 'thing in itself' with properties and causal powers which is computed by the brain. Embodied working brains are physical stuff and processes doing presumably physically explainable things.
I agree, but I will add: it’s not just a brain cognizing, but a complete organism.