Philosophy Discussion Forums | A Humans-Only Club for Open-Minded Discussion & Debate
By Count Lucanor
#439327
psycho wrote: April 2nd, 2023, 3:01 pm
Count Lucanor wrote: March 25th, 2023, 6:43 pm
They are wrong. AI does not and cannot show signs of intelligence, it only simulates behavior that looks like intelligence, but it doesn't understand a thing it does, even though the technology is impressive. Look at the Chinese Room Experiment, which completely refuted such claims.
Hypothetical question:

A machine calculates models from the information of certain data.

In one of those cases, the information of that data corresponds to its imminent destruction.

Why should that mean anything to the machine?

If the machine includes a system that checks each model produced against a list of models considered "dangerous" (it is enough to be on the list) and, in cases where there is a match, the machine starts an escape procedure, could this be considered a case of a machine that distinguishes meanings?
The machines just perform operations as instructed by the programmers. The "dangerous" scenarios are determined by the programmers, the only ones for which these conditions and the criteria are meaningful.
Favorite Philosopher: Umberto Eco Location: Panama
By Gertie
#439328
psycho wrote: April 3rd, 2023, 12:52 pm
Pattern-chaser wrote: April 3rd, 2023, 6:27 am
psycho wrote: April 2nd, 2023, 3:01 pm Hypothetical question:

A machine calculates models from the information of certain data.

In one of those cases, the information of that data corresponds to its imminent destruction.

Why should that mean anything to the machine?
Because the machine is programmed for self-preservation, a la Asimov's 3 Laws of Robotics?
My point is whether you consider that a machine can or cannot find meaning in the data it processes.
Nobody knows. 

The Chinese Room shows that properly kitted out computers can do things which give the impression they'd need to understand what they're doing, but actually don't.  Increasingly sophisticated chatbots are designed to respond in human-like ways, without having conscious experience.
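
A toy sketch in Python of what the Room amounts to (the rulebook entries below are invented, and a real Room's rulebook would be unimaginably larger, but the structure is the same): symbols go in, paired symbols come out, and nothing in the loop understands Chinese.

# Toy "Chinese Room": a rulebook pairs input symbols with output symbols.
# The entries are placeholders; the point is only that apt-looking replies
# can come from pure lookup, with nothing anywhere that understands them.
RULEBOOK = {
    "你好吗": "我很好，谢谢",        # "How are you?" -> "I'm fine, thanks"
    "今天天气如何": "今天天气很好",    # "How's the weather today?" -> "It's lovely today"
}

def room_reply(symbols: str) -> str:
    # Return whatever the rulebook pairs with the input, or a stock reply.
    return RULEBOOK.get(symbols, "对不起，我不明白")   # "Sorry, I don't understand"

print(room_reply("你好吗"))   # reads like conversation; no understanding anywhere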

The problem is we don't understand the mind body relationship, or what might in principle make a machine conscious, or if it's impossible. We don't know the necessary and sufficient conditions, and we don't know if the nature of the substrate plays a necessary role.

Furthermore, consciousness isn't third party testable, we don't have a consciousness-o-meter.  And because we don't even know the necessary and sufficient conditions for conscious experience (which is required for 'meaning') we can't even test for those.  Whether it's a toaster, a daffodil, a fellow human being or a super-duper computer. 

What we've been getting by on is inference from analogy. I know I'm conscious, and because you're so similar to me I assume you're conscious. We note that our neural activity correlates with our conscious awareness. We look at chimps with similarish brains and behaviours, and assume they are conscious. At some point we become less confident about other species being conscious, as the similarities diminish.

Computers have some functional similarities (most notably in complex processing of information, Searle's point) and we can make them behave in ways which are practically indistinguishable to humans. But we can't test them for consciousness, we can't test for the necessary and sufficient conditions, and we don't even know if machine consciousness would be similar enough to ours to recognise. Like we don't know if daffodils and toasters are conscious in ways so dissimilar to ours we don't recognise it.

Your test assumes the AI would have the ability to consciously choose, and choose the option a person might choose based on evolved human instincts and values of self-preservation.  Such a concept might mean nothing to a computer not moulded by survival instincts, so looking for such similarity might not work.  If there is something it is like to be an AI, we can't  assume it will be like being a human.
By psycho
#439337
Count Lucanor wrote: April 3rd, 2023, 2:56 pm
The machines just perform operations as instructed by the programmers. The "dangerous" scenarios are determined by the programmers, the only ones for which these conditions and the criteria are meaningful.
In our case, it is experience (own or by species) and education that make up the list.
By psycho
#439339
Gertie wrote: April 3rd, 2023, 3:45 pm Nobody knows. …
Comparing items in lists is not sophisticated.

The Chinese Room scenario is a failed example that adds nothing to the understanding of the nature of consciousness.

Currently it is assumed that a machine is conscious if it can itself distinguish meanings that concern it. This would be noticed from the machine's reactions.

This is not very effective.

In my opinion, consciousness is a particular type of neurophysiological structure that occurs in different degrees depending on the complexity of the nervous system.

What is the degree of awareness of a fly, a bird, an octopus, a dog, an elephant or a human?

Do they distinguish meanings?

I have no doubt that my peers are as conscious as I am. That is what is most likely, and I always go by what is most likely.

The two main obstacles to approaching a conceptual model of consciousness are dualism and determining the scope and structure of the "self".

Not everything has the possibility of being conscious. The presence of consciousness has neurophysiological requirements.

Machines can draw conclusions. The question is whether the machines include themselves in these considerations and whether appropriate reactions have been programmed for cases where the conclusion makes it necessary for them to act in some special way.

No. In my scenario, the machine is not aware of its existence or of the model that means a threat to its continuity.
It is just a mechanical process. It synthesizes patterns from the data and, when a pattern corresponds to a base item, puts its wheels in motion.
By Pattern-chaser
#439362
psycho wrote: April 3rd, 2023, 12:52 pm
Pattern-chaser wrote: April 3rd, 2023, 6:27 am
psycho wrote: April 2nd, 2023, 3:01 pm Hypothetical question:

A machine calculates models from the information of certain data.

In one of those cases, the information of that data corresponds to its imminent destruction.

Why should that mean anything to the machine?
Because the machine is programmed for self-preservation, a la Asimov's 3 Laws of Robotics?
My point is whether you consider that a machine can or cannot find meaning in the data it processes.
Oh. Interesting. 🤔🤔Meaning🤔🤔...

I think it possible that, in time, if it develops that way, an AI might find meaning in data, meaning that it has discovered for itself. If you are asking whether an AI might ever find the same meaning(s) that a human might discover, I suspect not.
Favorite Philosopher: Cratylus Location: England
By Gertie
#439379
psycho wrote: April 3rd, 2023, 7:04 pm Comparing items in lists is not sophisticated. …
To decide what is most likely you need enough information to judge.    But when it comes to conscious experience we have to go by similarity to known conscious things - starting with ourselves - because consciousness is private.   The less similar, the more uncertainty.  We can look at other species, note how they behave and see how similar their brains are, then make a guess about what, if anything, it's like to be a dog, a bat or a spider. 

Two obvious comparisons come up with AI: firstly, it similarly processes complex information, but we don't know if that is sufficient; and secondly, it has a dissimilar substrate, and we don't know if an organic brain is necessary. Neither helps much.

We can build a super complex information processor and devise tests to see if it reacts how we'd expect a conscious being to react. Even potentially give it the kit to learn and evolve without being programmed in ways which would elicit specific responses. But still we'd be comparing it to how we would respond in its shoes, because we don't know what it would be like to be a silicon based super information processor.
By psycho
#439397
Gertie wrote: April 4th, 2023, 11:09 am To decide what is most likely you need enough information to judge. …
It is clear to me that I am an individual of a certain species who lives with similar individuals.

I have a heart and my neighbors have hearts. I have a brain and my neighbors have a brain.

Most likely, the organ that generates my consciousness generates consciousness in my neighbors with brains.

Reality gives no indication that individuals with functional brains like mine might lack the functionality that causes my consciousness to occur.

That would not be the most likely conclusion.

We know that organic brains have a structure that allows them to process information and to recognize that what they conclude from it concerns them.

Not just in humans.

But human brains and their perception systems have nothing special that is not found in the rest of reality.

Actually, we don't know what it's like to be in the shoes of our neighbors either.
By Count Lucanor
#439424
psycho wrote: April 3rd, 2023, 6:31 pm
Count Lucanor wrote: April 3rd, 2023, 2:56 pm
The machines just perform operations as instructed by the programmers. The "dangerous" scenarios are determined by the programmers, the only ones for which these conditions and the criteria are meaningful.
In our case, it is experience (own or by species) and education that make up the list.
Our experience and education are not mechanical or algorithmic processes, we do have meaning.
Favorite Philosopher: Umberto Eco Location: Panama
By psycho
#439431
Count Lucanor wrote: April 4th, 2023, 10:22 pm
psycho wrote: April 3rd, 2023, 6:31 pm
Count Lucanor wrote: April 3rd, 2023, 2:56 pm
The machines just perform operations as instructed by the programmers. The "dangerous" scenarios are determined by the programmers, the only ones for which these conditions and the criteria are meaningful.
In our case, it is experience (own or by species) and education that make up the list.
Our experience and education are not mechanical or algorithmic processes, we do have meaning.
Why do you say that our experience does not mechanically complete the list?

Why do you say that what you have learned is not the result of an algorithm?

Actually, whether we include patterns that facilitate our survival, either mechanically or by being processed by algorithms, is irrelevant to the scenario that I present.

The question is whether the machine in my example should be considered to have found a meaning that matters to it.

That a machine is programmed does not imply that it could not find meaning in its conclusions. The conclusions result from the information of the data, not from its algorithms. The information used is data from reality.

The way the machine achieves its algorithms does not determine whether or not it can find meaning.

The machines are not subject to evolution until now. For this reason they cannot seriously be considered an analogy with biological beings.

But the possibility of having the ability to find meanings is not determined by the ability to self-reproduce.

The question remains whether in that case, the machine can be said to have found a meaning that pertains to it, given its behavior.
By AgentSmith
#439442
The identity crisis is beginning, as the OP suggests, with the advent of true AI. "Who is Jeff Bezos?" asks Jeff Bezos. The time has come to rally around something other than what we tell ourselves is worth our time and energy. AI, true AI, will, when it happens, reconfigure traditional borders.
By Count Lucanor
#439458
psycho wrote: April 5th, 2023, 12:17 am
Count Lucanor wrote: April 4th, 2023, 10:22 pm
psycho wrote: April 3rd, 2023, 6:31 pm
Count Lucanor wrote: April 3rd, 2023, 2:56 pm
The machines just perform operations as instructed by the programmers. The "dangerous" scenarios are determined by the programmers, the only ones for which these conditions and the criteria are meaningful.
In our case, it is experience (own or by species) and education that make up the list.
Our experience and education are not mechanical or algorithmic processes, we do have meaning.
Why do you say that our experience does not mechanically complete the list?
What exactly do you mean by "the list"? If you mean a list of coded instructions, then experience and education are not that.
psycho wrote: April 5th, 2023, 12:17 am Why do you say that what you have learned is not the result of an algorithm?
Because it is not. Human experience cannot be reduced to simple linear inputs and outputs.
psycho wrote: April 5th, 2023, 12:17 am Actually, whether we include patterns that facilitate our survival, either mechanically or by being processed by algorithms, is irrelevant to the scenario that I present.
Surely it is relevant, since unconscious objects like computers are not alive, do not think, do not feel, and do not have needs, such as survival instinct.
psycho wrote: April 5th, 2023, 12:17 am The question is whether the machine in my example should be considered to have found a meaning that matters to it.
How can an inanimate object ever think and find meaning? Do you ever consider if your coffee mug finds meaning in your drinking habits? Does a book in your reading habits?
psycho wrote: April 5th, 2023, 12:17 am That a machine is programmed does not imply that it could not find meaning in its conclusions. The conclusions result from the information of the data, not from its algorithms. The information used is data from reality.
The existence of information is a necessary condition, but not a sufficient one, for there to be meaning.
psycho wrote: April 5th, 2023, 12:17 am The way the machine achieves its algorithms does not determine whether or not it can find meaning.
But people confuse the meaning found by programmers with meaning found by their computers, which makes it necessary to stress the fact that machines do not obtain meaning, neither from the algorithmic processes the programmers put in them, nor from anything else. They just don't have meaning.
psycho wrote: April 5th, 2023, 12:17 am The machines are not subject to evolution until now. For this reason they cannot seriously be considered an analogy with biological beings.
Biological evolution refers to organic processes in living beings. Computers are neither organic nor living, so it is not that they have not been subject to evolution until now: they are not subject to biological evolution at all. Their only "evolution" is that of technology, which is a human-controlled process.
psycho wrote: April 5th, 2023, 12:17 am But the possibility of having the ability to find meanings is not determined by the ability to self-reproduce.
Only living beings self-reproduce and find meaning, so if I find something that is not a living being it is quite obvious that it does not self-reproduce and does not find meaning. A computer, for example.
psycho wrote: April 5th, 2023, 12:17 am The question remains whether in that case, the machine can be said to have found a meaning that pertains to it, given its behavior.
The only behaviors found in inanimate objects belong to the domain of physical laws and manipulation by agents. Nothing to do with autonomous action: meaning and agency.
Favorite Philosopher: Umberto Eco Location: Panama
By psycho
#439466
Count Lucanor wrote: April 5th, 2023, 11:07 am What exactly do you mean by "the list"? …
It seems to me that I am not being clear.

I proposed a scenario:

---

"Hypothetical question:

A machine calculates models from the information of certain data.

In one of those cases, the information of that data corresponds to its imminent destruction.

Why should that mean anything to the machine?

If the machine includes a system that checks each model produced against a list of models considered "dangerous" (it is enough to be on the list) and, in cases where there is a match, the machine starts an escape procedure, could this be considered a case of a machine that distinguishes meanings?"

---

The information comes from reality outside the machine; the machine was programmed by humans, and it extracts patterns and relationships between those patterns.

The "list" contains patterns corresponding to certain scenarios of reality. (it is irrelevant who or what completed the list. It could be a human being or the result of previous machine processing that came to have a certain value)

A witness would see that the machine "considers" (synthesizes patterns from the data that comes from its environment) and that, in certain circumstances (those where the pattern extracted from the data and the pattern previously saved in the "list" coincide), the machine changes its status (moves) with respect to its environment until the equality between the pattern extracted from reality and the pattern saved in the "list" is invalidated.

A witness to that scenario could interpret that the machine seems to understand the threat: that it analyzes reality, finds that something means danger to it, and acts to avoid it. But it's really just a mechanical system.
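
Written out as a toy Python sketch (the labels and the "danger list" entries below are invented for illustration; a real system would synthesize its patterns statistically rather than read a ready-made label), the whole scenario is a comparison and a branch:

# Hypothetical machine: extract a pattern from incoming data, check it against
# a stored list of "dangerous" patterns, and on a match start the escape
# procedure. Matching an entry on the list is the entire "decision".
DANGER_LIST = {"crusher_approaching", "furnace_door_open"}   # invented entries

def extract_pattern(data: dict) -> str:
    # Stand-in for the modelling step: reduce the incoming data to a labelled pattern.
    return data.get("label", "nothing_of_note")

def step(data: dict) -> str:
    pattern = extract_pattern(data)
    if pattern in DANGER_LIST:        # a coincidence with the list is enough
        return "escape_procedure"     # the wheels go into motion
    return "carry_on"

print(step({"label": "crusher_approaching"}))   # -> escape_procedure
print(step({"label": "sunny_day"}))             # -> carry_on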

To deal with this issue, it seems to me essential to understand what we call meaning.

It is my impression that the crux of the matter is whether an entity is capable of distinguishing meanings that pertain to it and acting accordingly.

What is your definition of "meaning"?
By Count Lucanor
#439546
It seems you are not getting the point. You are involved in a circular argument: you assume a machine performs operations with some implied self-awareness, probably because of a misunderstanding of what processing information implies, and then ask if the task means something to that machine. The answer was given in the Chinese Room Experiment: syntactic procedures do not imply semantic processes.
Favorite Philosopher: Umberto Eco Location: Panama
By psycho
#439561
Count Lucanor wrote: April 6th, 2023, 10:56 am It seems you are not getting the point. You are involved in a circular argument: you assume a machine performs operations with some implied self-awareness, probably because of a misunderstanding of what processing information implies, and then ask if the task means something to that machine. The answer was given in the Chinese Room Experiment: syntactic procedures do not imply semantic processes.
I am not claiming that the machine in my example finds meaning in the same way as a human. I am asking how a witness would discern the difference.

Without knowing how a person determines what a particular piece of information means, it makes no sense to rule out that something non-human could do so.

But what is your definition of meaning?
By Count Lucanor
#439633
psycho wrote: April 6th, 2023, 12:35 pm
Count Lucanor wrote: April 6th, 2023, 10:56 am It seems you are not getting the point. You are involved in a circular argument: you assume a machine performs operations with some implied self-awareness, probably because of a misunderstanding of what processing information implies, and then ask if the task means something to that machine. The answer was given in the Chinese Room Experiment: syntactic procedures do not imply semantic processes.
I am not claiming that the machine in my example finds meaning in the same way as a human. I am asking how a witness would discern the difference.
Unless you can find another way to convey and assimilate meaning, there's no difference to discern. I can't see how you could come up with an abstract definition of "meaning" that completely bypasses the human experience of finding meaning, of making sense, the only one we can use as reference. Whatever you come up with, if it does not refer to that subjective experience, then it will be something else, perhaps an alternative to meaning, but not meaning per se. Your scenario, for example, is a simple algorithmic, mechanical procedure, where there are no concepts and no interpretation. A human account of meaning involves those factors, which are missing in your scenario, among other things.
psycho wrote: April 6th, 2023, 12:35 pm Without knowing how a person determines what a particular piece of information means, it makes no sense to rule out that something non-human could do so.
The concept of meaning or subjective experience may still be fuzzy, but we have a pretty good idea of what sentience implies, in contrast with non-sentient behavior. Some key aspects are perceptual experience and sensation: in a dangerous scenario, a living being actually feels the danger, but in your scenario the device works with a signal; it doesn't experience fear, it doesn't experience anything at all. It's no different than a clockwork. Comprehension also involves abstract mental representations: the inherent ability to reproduce lived experiences in your thoughts and to identify general patterns from particular instances. One can make complex algorithms for machines to simulate this, but that's what it is: a simulation.
psycho wrote: April 6th, 2023, 12:35 pm But what is your definition of meaning?
I would say, risking a novel interpretation: the mental organization of sensations, feelings and thoughts into an abstract model of the world that allows a living organism to cope with the environment, integrating past experiences. It involves attention, intention, memory and self-awareness. Information only becomes information when input from our sensory organs has been assimilated into this mental model.
Favorite Philosopher: Umberto Eco Location: Panama
