
Re: AI and the Death of Identity

Posted: April 3rd, 2023, 2:56 pm
by Count Lucanor
psycho wrote: April 2nd, 2023, 3:01 pm
Count Lucanor wrote: March 25th, 2023, 6:43 pm
They are wrong. AI does not and cannot show signs of intelligence, it only simulates behavior that looks like intelligence, but it doesn't understand a thing it does, even though the technology is impressive. Look at the Chinese Room Experiment, which completely refuted such claims.
Hypothetical question:

A machine calculates models from the information of certain data.

In one of those cases, the information of that data corresponds to its imminent destruction.

Why should that mean anything to the machine?

If the machine includes a system that checks each model produced against a list of models considered "dangerous" (it is enough to be on the list) and, in cases where there is a match, the machine starts an escape procedure, could this be considered a case of a machine that distinguishes meanings?
The machines just perform operations as instructed by the programmers. The "dangerous" scenarios are determined by the programmers, the only ones for which these conditions and the criteria are meaningful.
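
To make the point concrete: reduced to code (a purely hypothetical sketch, with every name invented for illustration), the scenario is nothing more than a comparison against a programmer-supplied list:

```python
# Purely illustrative sketch (all names invented): the "dangerous" criteria
# exist only because a programmer wrote them down; the machine merely tests
# membership in that list.
DANGEROUS_MODELS = {"imminent_destruction"}  # supplied by a human, not discovered

def react(model: str) -> str:
    """Return the action triggered by a computed model."""
    if model in DANGEROUS_MODELS:        # a literal membership test, nothing more
        return "start_escape_procedure"
    return "continue_normal_operation"

print(react("imminent_destruction"))     # -> start_escape_procedure
```

The conditions and the criteria in such a routine are meaningful only to whoever wrote them.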

Re: AI and the Death of Identity

Posted: April 3rd, 2023, 3:45 pm
by Gertie
psycho wrote: April 3rd, 2023, 12:52 pm
Pattern-chaser wrote: April 3rd, 2023, 6:27 am
psycho wrote: April 2nd, 2023, 3:01 pm Hypothetical question:

A machine calculates models from the information of certain data.

In one of those cases, the information of that data corresponds to its imminent destruction.

Why should that mean anything to the machine?
Because the machine is programmed for self-preservation, a la Asimov's 3 Laws of Robotics?
My point is whether you consider that a machine can or cannot find meaning in the data it processes.
Nobody knows. 

The Chinese Room shows that properly kitted out computers can do things which give the impression they'd need to understand what they're doing, but actually don't.  Increasingly sophisticated chatbots are designed to respond in human-like ways, without having conscious experience.

The problem is we don't understand the mind body relationship, or what might in principle make a machine conscious, or if it's impossible. We don't know the necessary and sufficient conditions, and we don't know if the nature of the substrate plays a necessary role.

Furthermore, consciousness isn't third party testable, we don't have a consciousness-o-meter.  And because we don't even know the necessary and sufficient conditions for conscious experience (which is required for 'meaning') we can't even test for those.  Whether it's a toaster, a daffodil, a fellow human being or a super-duper computer. 

What we've been getting by on is inference from analogy.  I know I'm conscious, and because you're so similar to me I assume you're conscious.  We note that our neural activity correlates with our conscious awareness.  We look at chimps with similarish brains and behaviours, and assume they are conscious. At some point we become less confident about other species being conscious, as the similarities diminish.

Computers have some functional similarities (most notably in complex processing of information, Searle's point) and we can make them behave in ways which are practically indistinguishable to humans.  But we can't test them for consciousness, we can't test for the necessary and sufficient conditions, and we don't even know if machine consciousness would be similar enough to ours to recognise.  Like we don't know if daffodils and toasters are conscious in ways so dissimilar to ours we don't recognise it.

Your test assumes the AI would have the ability to consciously choose, and choose the option a person might choose based on evolved human instincts and values of self-preservation.  Such a concept might mean nothing to a computer not moulded by survival instincts, so looking for such similarity might not work.  If there is something it is like to be an AI, we can't  assume it will be like being a human.

Re: AI and the Death of Identity

Posted: April 3rd, 2023, 6:31 pm
by psycho
Count Lucanor wrote: April 3rd, 2023, 2:56 pm
The machines just perform operations as instructed by the programmers. The "dangerous" scenarios are determined by the programmers, the only ones for which these conditions and the criteria are meaningful.
In our case, it is experience (own or by species) and education that make up the list.

Re: AI and the Death of Identity

Posted: April 3rd, 2023, 7:04 pm
by psycho
Gertie wrote: April 3rd, 2023, 3:45 pm
Nobody knows. 

The Chinese Room shows that properly kitted out computers can do things which give the impression they'd need to understand what they're doing, but actually don't.  Increasingly sophisticated chatbots are designed to respond in human-like ways, without having conscious experience.

The problem is we don't understand the mind body relationship, or what might in principle make a machine conscious, or if it's impossible. We don't know the necessary and sufficient conditions, and we don't know if the nature of the substrate plays a necessary role.

Furthermore, consciousness isn't third party testable, we don't have a consciousness-o-meter.  And because we don't even know the necessary and sufficient conditions for conscious experience (which is required for 'meaning') we can't even test for those.  Whether it's a toaster, a daffodil, a fellow human being or a super-duper computer. 

What we've been getting by on is inference from analogy.  I know I'm conscious, and because you're so similar to me I assume you're conscious.  We note that our neural activity correlates with our conscious awareness.  We look at chimps with similarish brains and behaviours, and assume they are conscious. At some point we become less confident about other species being conscious, as the similarities diminish.

Computers have some functional similarities (most notably in complex processing of information, Searle's point) and we can make them behave in ways which are practically indistinguishable to humans.  But we can't test them for consciousness, we can't test for the necessary and sufficient conditions, and we don't even know if machine consciousness would be similar enough to ours to recognise.  Like we don't know if daffodils and toasters are conscious in ways so dissimilar to ours we don't recognise it.

Your test assumes the AI would have the ability to consciously choose, and choose the option a person might choose based on evolved human instincts and values of self-preservation.  Such a concept might mean nothing to a computer not moulded by survival instincts, so looking for such similarity might not work.  If there is something it is like to be an AI, we can't  assume it will be like being a human.
Comparing items in lists is not sophisticated.

The Chinese Room scenario is a failed example that adds nothing to the understanding of the nature of consciousness.

Currently it is assumed that a machine is conscious if it can itself distinguish meanings that concern it. This would be inferred from the machine's reactions.

This is not very effective.

In my opinion, consciousness is a particular type of neurophysiological structure that occurs in different degrees depending on the complexity of nervous systems.

What is the degree of awareness of a fly, a bird, an octopus, a dog, an elephant or a human?

Do they distinguish meanings?

I have no doubt that my peers are as conscious as I am. That is what is most likely, and I always go by what is most likely.

The two main obstacles to approaching a conceptual model of consciousness are dualism and determining the scope and structure of the "self".

Not everything has the possibility of being conscious. The presence of consciousness has neurophysiological requirements.

Machines can draw conclusions. The question is whether the machines include themselves in these considerations and whether they have been programmed with appropriate reactions for cases where the conclusion requires them to act in some special way.

No. In my scenario, the machine is not aware of its existence or of the model that means a threat to its continuity.
It is just a mechanical process. It synthesizes patterns from the data and, when a pattern corresponds to a base item, puts its wheels in motion.

Re: AI and the Death of Identity

Posted: April 4th, 2023, 8:50 am
by Pattern-chaser
psycho wrote: April 3rd, 2023, 12:52 pm
Pattern-chaser wrote: April 3rd, 2023, 6:27 am
psycho wrote: April 2nd, 2023, 3:01 pm Hypothetical question:

A machine calculates models from the information of certain data.

In one of those cases, the information of that data corresponds to its imminent destruction.

Why should that mean anything to the machine?
Because the machine is programmed for self-preservation, a la Asimov's 3 Laws of Robotics?
My point is whether you consider that a machine can or cannot find meaning in the data it processes.
Oh. Interesting. 🤔🤔Meaning🤔🤔...

I think it possible that, in time, if it develops that way, an AI might find meaning in data, meaning that it has discovered for itself. If you are asking whether an AI might ever find the same meaning(s) that a human might discover, I suspect not.

Re: AI and the Death of Identity

Posted: April 4th, 2023, 11:09 am
by Gertie
psycho wrote: April 3rd, 2023, 7:04 pm
Gertie wrote: April 3rd, 2023, 3:45 pm
Nobody knows. 

The Chinese Room shows that properly kitted out computers can do things which give the impression they'd need to understand what they're doing, but actually don't.  Increasingly sophisticated chatbots are designed to respond in human-like ways, without having conscious experience.

The problem is we don't understand the mind body relationship, or what might in principle make a machine conscious, or if it's impossible. We don't know the necessary and sufficient conditions, and we don't know if the nature of the substrate plays a necessary role.

Furthermore, consciousness isn't third party testable, we don't have a consciousness-o-meter.  And because we don't even know the necessary and sufficient conditions for conscious experience (which is required for 'meaning') we can't even test for those.  Whether it's a toaster, a daffodil, a fellow human being or a super-duper computer. 

What we've been getting by on is inference from analogy.  I know I'm conscious, and because you're so similar to me I assume you're conscious.  We note that our neural activity correlates with our conscious awareness.  We look at chimps with similarish brains and behaviours, and assume they are conscious. At some point we become less confident about other species being conscious, as the similarities diminish.

Computers have some functional similarities (most notably in complex processing of information, Searle's point) and we can make them behave in ways which are practically indistinguishable to humans.  But we can't test them for consciousness, we can't test for the necessary and sufficient conditions, and we don't even know if machine consciousness would be similar enough to ours to recognise.  Like we don't know if daffodils and toasters are conscious in ways so dissimilar to ours we don't recognise it.

Your test assumes the AI would have the ability to consciously choose, and choose the option a person might choose based on evolved human instincts and values of self-preservation.  Such a concept might mean nothing to a computer not moulded by survival instincts, so looking for such similarity might not work.  If there is something it is like to be an AI, we can't  assume it will be like being a human.
Comparing items in lists is not sophisticated.

The Chinese Room scenario is a failed example that adds nothing to the understanding of the nature of consciousness.

Currently it is assumed that a machine is conscious if it can itself distinguish meanings that concern it. This would be inferred from the machine's reactions.

This is not very effective.

In my opinion, consciousness is a particular type of neurophysiological structure that occurs in different degrees depending on the complexity of nervous systems.

What is the degree of awareness of a fly, a bird, an octopus, a dog, an elephant or a human?

Do they distinguish meanings?

I have no doubt that my peers are as conscious as I am. That is what is most likely, and I always go by what is most likely.

The two main obstacles to approaching a conceptual model of consciousness are dualism and determining the scope and structure of the "self".

Not everything has the possibility of being conscious. The presence of consciousness has neurophysiological requirements.

Machines can draw conclusions. The question is whether the machines include themselves in these considerations and whether they have been programmed with appropriate reactions for cases where the conclusion requires them to act in some special way.

No. In my scenario, the machine is not aware of its existence or of the model that means a threat to its continuity.
It is just a mechanical process. It synthesizes patterns from the data and, when a pattern corresponds to a base item, puts its wheels in motion.
To decide what is most likely you need enough information to judge.    But when it comes to conscious experience we have to go by similarity to known conscious things - starting with ourselves - because consciousness is private.   The less similar, the more uncertainty.  We can look at other species, note how they behave and see how similar their brains are, then make a guess about what, if anything, it's like to be a dog, a bat or a spider. 

Two obvious comparisons are used when it comes to AI: firstly, it similarly processes complex information, but we don't know if that is sufficient; secondly, it has a dissimilar substrate, and we don't know if an organic brain is necessary. Neither helps much.

We can build a super complex information processor and devise tests to see if it reacts how we'd expect a conscious being to react. Even potentially give it the kit to learn and evolve without being programmed in ways which would elicit specific responses. But still we'd be comparing it to how we would respond in its shoes, because we don't know what it would be like to be a silicon based super information processor.

Re: AI and the Death of Identity

Posted: April 4th, 2023, 1:28 pm
by psycho
Gertie wrote: April 4th, 2023, 11:09 am
psycho wrote: April 3rd, 2023, 7:04 pm
Gertie wrote: April 3rd, 2023, 3:45 pm
Nobody knows. 

The Chinese Room shows that properly kitted out computers can do things which give the impression they'd need to understand what they're doing, but actually don't.  Increasingly sophisticated chatbots are designed to respond in human-like ways, without having conscious experience.

The problem is we don't understand the mind body relationship, or what might in principle make a machine conscious, or if it's impossible. We don't know the necessary and sufficient conditions, and we don't know if the nature of the substrate plays a necessary role.

Furthermore, consciousness isn't third party testable, we don't have a consciousness-o-meter.  And because we don't even know the necessary and sufficient conditions for conscious experience (which is required for 'meaning') we can't even test for those.  Whether it's a toaster, a daffodil, a fellow human being or a super-duper computer. 

What we've been getting by on is inference from analogy.  I know I'm conscious, and because you're so similar to me I assume you're conscious.  We note that our neural activity correlates with our conscious awareness.  We look at chimps with similarish brains and behaviours, and assume they are conscious. At some point we become less confident about other species being conscious, as the similarities diminish.

Computers have some functional similarities (most notably in complex processing of information, Searle's point) and we can make them behave in ways which are practically indistinguishable to humans.  But we can't test them for consciousness, we can't test for the necessary and sufficient conditions, and we don't even know if machine consciousness would be similar enough to ours to recognise.  Like we don't know if daffodils and toasters are conscious in ways so dissimilar to ours we don't recognise it.

Your test assumes the AI would have the ability to consciously choose, and choose the option a person might choose based on evolved human instincts and values of self-preservation.  Such a concept might mean nothing to a computer not moulded by survival instincts, so looking for such similarity might not work.  If there is something it is like to be an AI, we can't  assume it will be like being a human.
Comparing items in lists is not sophisticated.

The Chinese Room scenario is a failed example that adds nothing to the understanding of the nature of consciousness.

Currently it is assumed that a machine is conscious if it can itself distinguish meanings that concern it. This would be inferred from the machine's reactions.

This is not very effective.

In my opinion, consciousness is a particular type of neurophysiological structure that occurs in different degrees depending on the complexity of nervous systems.

What is the degree of awareness of a fly, a bird, an octopus, a dog, an elephant or a human?

Do they distinguish meanings?

I have no doubt that my peers are as conscious as I am. That is what is most likely, and I always go by what is most likely.

The two main obstacles to approaching a conceptual model of consciousness are dualism and determining the scope and structure of the "self".

Not everything has the possibility of being conscious. The presence of consciousness has neurophysiological requirements.

Machines can draw conclusions. The question is whether the machines include themselves in these considerations and whether they have been programmed with appropriate reactions for cases where the conclusion requires them to act in some special way.

No. In my scenario, the machine is not aware of its existence or of the model that means a threat to its continuity.
It is just a mechanical process. It synthesizes patterns from the data and, when a pattern corresponds to a base item, puts its wheels in motion.
To decide what is most likely you need enough information to judge.    But when it comes to conscious experience we have to go by similarity to known conscious things - starting with ourselves - because consciousness is private.   The less similar, the more uncertainty.  We can look at other species, note how they behave and see how similar their brains are, then make a guess about what, if anything, it's like to be a dog, a bat or a spider. 

Two obvious comparisons are used when it comes to AI: firstly, it similarly processes complex information, but we don't know if that is sufficient; secondly, it has a dissimilar substrate, and we don't know if an organic brain is necessary. Neither helps much.

We can build a super complex information processor and devise tests to see if it reacts how we'd expect a conscious being to react. Even potentially give it the kit to learn and evolve without being programmed in ways which would elicit specific responses. But still we'd be comparing it to how we would respond in its shoes, because we don't know what it would be like to be a silicon based super information processor.
It is clear to me that I am an individual of a certain species who lives with similar individuals.

I have a heart and my neighbors have hearts. I have a brain and my neighbors have brains.

Most likely, the organ that generates my consciousness generates consciousness in my neighbors with brains.

Reality gives no indication that individuals with functional brains like mine might lack the functionality that causes my consciousness to occur.

That would not be the most likely.

We know that organic brains have a structure that allows them to process information and to distinguish from it that what is concluded concerns them.

Not just in humans.

But human brains and their perception systems have nothing special that is not found in the rest of reality.

Actually, we don't know what it's like to be in the shoes of our neighbors either.

Re: AI and the Death of Identity

Posted: April 4th, 2023, 10:22 pm
by Count Lucanor
psycho wrote: April 3rd, 2023, 6:31 pm
Count Lucanor wrote: April 3rd, 2023, 2:56 pm
The machines just perform operations as instructed by the programmers. The "dangerous" scenarios are determined by the programmers, the only ones for which these conditions and the criteria are meaningful.
In our case, it is experience (own or by species) and education that make up the list.
Our experience and education are not mechanical or algorithmic processes; we do have meaning.

Re: AI and the Death of Identity

Posted: April 5th, 2023, 12:17 am
by psycho
Count Lucanor wrote: April 4th, 2023, 10:22 pm
psycho wrote: April 3rd, 2023, 6:31 pm
Count Lucanor wrote: April 3rd, 2023, 2:56 pm
The machines just perform operations as instructed by the programmers. The "dangerous" scenarios are determined by the programmers, the only ones for which these conditions and the criteria are meaningful.
In our case, it is experience (own or by species) and education that make up the list.
Our experience and education are not mechanical or algorithmic processes; we do have meaning.
Why do you say that our experience does not mechanically complete the list?

Why do you say that what you have learned is not the result of an algorithm?

Actually, whether we include patterns that facilitate our survival, either mechanically or by being processed by algorithms, is irrelevant to the scenario that I present.

The question is whether the machine in my example should be considered to have found a meaning that matters to it.

That a machine is programmed does not imply that it could not find meaning in its conclusions. The conclusions result from the information of the data, not from its algorithms. The information used is data from reality.

The way the machine achieves its algorithms does not determine whether or not it can find meaning.

The machines are not subject to evolution until now. For this reason they cannot seriously be considered an analogy with biological beings.

But the possibility of having the ability to find meanings is not determined by the ability to self-reproduce.

The question remains whether in that case, the machine can be said to have found a meaning that pertains to it, given its behavior.

Re: AI and the Death of Identity

Posted: April 5th, 2023, 4:44 am
by AgentSmith
The identity crisis is beginning, as the OP suggests, with the advent of true AI. "Who is Jeff Bezos?" asks Jeff Bezos. The time has come to rally around something other than what we tell ourselves is worth our time and energy. AI, true AI, will, when it happens, reconfigure traditional borders.

Re: AI and the Death of Identity

Posted: April 5th, 2023, 11:07 am
by Count Lucanor
psycho wrote: April 5th, 2023, 12:17 am
Count Lucanor wrote: April 4th, 2023, 10:22 pm
psycho wrote: April 3rd, 2023, 6:31 pm
Count Lucanor wrote: April 3rd, 2023, 2:56 pm
The machines just perform operations as instructed by the programmers. The "dangerous" scenarios are determined by the programmers, the only ones for which these conditions and the criteria are meaningful.
In our case, it is experience (own or by species) and education that make up the list.
Our experience and education are not mechanical or algorithmic processes; we do have meaning.
Why do you say that our experience does not mechanically complete the list?
What exactly do you mean by "the list"? If you mean a list of coded instructions, then experience and education are not that.
psycho wrote: April 5th, 2023, 12:17 am Why do you say that what you have learned is not the result of an algorithm?
Because it is not. Human experience cannot be reduced to simple linear inputs and outputs.
psycho wrote: April 5th, 2023, 12:17 am Actually, whether we include patterns that facilitate our survival, either mechanically or by being processed by algorithms, is irrelevant to the scenario that I present.
Surely it is relevant, since unconscious objects like computers are not alive, do not think, do not feel, and do not have needs, such as survival instinct.
psycho wrote: April 5th, 2023, 12:17 am The question is whether the machine in my example should be considered to have found a meaning that matters to it.
How can an inanimate object ever think and find meaning? Do you ever consider if your coffee mug finds meaning in your drinking habits? Does a book in your reading habits?
psycho wrote: April 5th, 2023, 12:17 am That a machine is programmed does not imply that it could not find meaning in its conclusions. The conclusions result from the information of the data, not from its algorithms. The information used is data from reality.
The existence of information is a necessary condition, but not a sufficient one, for there to be meaning.
psycho wrote: April 5th, 2023, 12:17 am The way the machine achieves its algorithms does not determine whether or not it can find meaning.
But people confuse the meaning found by programmers with meaning found by their computers, which makes it necessary to stress the fact that machines do not obtain meaning, neither from the algorithmic processes the programmers put in them nor from anything else. They just don't have meaning.
psycho wrote: April 5th, 2023, 12:17 am The machines are not subject to evolution until now. For this reason they cannot seriously be considered an analogy with biological beings.
Biological evolution refers to organic processes in living beings. Computers are neither organic nor living, so it is not that they have not been subject to evolution until now: they are not subject to biological evolution at all. Their only "evolution" is that of technology, which is a human-controlled process.
psycho wrote: April 5th, 2023, 12:17 am But the possibility of having the ability to find meanings is not determined by the ability to self-reproduce.
Only living beings self-reproduce and find meaning, so if I find something that is not a living being it is quite obvious that it does not self-reproduce and does not find meaning. A computer, for example.
psycho wrote: April 5th, 2023, 12:17 am The question remains whether in that case, the machine can be said to have found a meaning that pertains to it, given its behavior.
The only behaviors found in inanimate objects belong to the domain of physical laws and manipulation by agents. Nothing to do with autonomous actions: meaning and agency.

Re: AI and the Death of Identity

Posted: April 5th, 2023, 1:23 pm
by psycho
Count Lucanor wrote: April 5th, 2023, 11:07 am
psycho wrote: April 5th, 2023, 12:17 am
Count Lucanor wrote: April 4th, 2023, 10:22 pm
psycho wrote: April 3rd, 2023, 6:31 pm

In our case, it is experience (own or by species) and education that make up the list.
Our experience and education are not mechanical or algorithmic processes; we do have meaning.
Why do you say that our experience does not mechanically complete the list?
What exactly do you mean by "the list"? If you mean a list of coded instructions, then experience and education are not that.
psycho wrote: April 5th, 2023, 12:17 am Why do you say that what you have learned is not the result of an algorithm?
Because it is not. Human experience cannot be reduced to simple linear inputs and outputs.
psycho wrote: April 5th, 2023, 12:17 am Actually, whether we include patterns that facilitate our survival, either mechanically or by being processed by algorithms, is irrelevant to the scenario that I present.
Surely it is relevant, since unconscious objects like computers are not alive, do not think, do not feel, and do not have needs, such as survival instinct.
psycho wrote: April 5th, 2023, 12:17 am The question is whether the machine in my example should be considered to have found a meaning that matters to it.
How can an inanimate object ever think and find meaning? Do you ever consider if your coffee mug finds meaning in your drinking habits? Does a book in your reading habits?
psycho wrote: April 5th, 2023, 12:17 am That a machine is programmed does not imply that it could not find meaning in its conclusions. The conclusions result from the information of the data, not from its algorithms. The information used is data from reality.
The existence of information is a necessary condition, but not a sufficient one, for there to be meaning.
psycho wrote: April 5th, 2023, 12:17 am The way the machine achieves its algorithms does not determine whether or not it can find meaning.
But people confuse the meaning found by programmers with meaning found by their computers, which makes it necessary to stress the fact that machines do not obtain meaning, neither from the algorithmic processes the programmers put in them nor from anything else. They just don't have meaning.
psycho wrote: April 5th, 2023, 12:17 am The machines are not subject to evolution until now. For this reason they cannot seriously be considered an analogy with biological beings.
Biological evolution refers to organic processes in living beings. Computers are neither organic nor living, so it is not that they have not been subject to evolution until now: they are not subject to biological evolution at all. Their only "evolution" is that of technology, which is a human-controlled process.
psycho wrote: April 5th, 2023, 12:17 am But the possibility of having the ability to find meanings is not determined by the ability to self-reproduce.
Only living beings self-reproduce and find meaning, so if I find something that is not a living being it is quite obvious that it does not self-reproduce and does not find meaning. A computer, for example.
psycho wrote: April 5th, 2023, 12:17 am The question remains whether in that case, the machine can be said to have found a meaning that pertains to it, given its behavior.
The only behaviors found in inanimate objects belong to the domain of physical laws and manipulation by agents. Nothing to do with autonomous actions: meaning and agency.
It seems to me that I am not being clear.

I proposed a scenario:

---

"Hypothetical question:

A machine calculates models from the information of certain data.

In one of those cases, the information of that data corresponds to its imminent destruction.

Why should that mean anything to the machine?

If the machine includes a system that checks each model produced against a list of models considered "dangerous" (it is enough to be on the list) and, in cases where there is a match, the machine starts an escape procedure, could this be considered a case of a machine that distinguishes meanings?"

---

The information comes from reality outside the machine; the machine was programmed by humans, and it extracts patterns and relationships between those patterns.

The "list" contains patterns corresponding to certain scenarios of reality. (it is irrelevant who or what completed the list. It could be a human being or the result of previous machine processing that came to have a certain value)

A witness would see that the machine "considers" (synthesizes patterns from the data that comes from its environment) and that in certain circumstances (those where the pattern extracted from the data and the pattern previously saved in the "list" coincide), the machine changes its status (moves) with respect to its environment until the equality between the pattern extracted from reality and the pattern saved in the "list" is invalidated.

A witness to that scenario could interpret the machine as understanding the threat: a machine that analyzes reality, finds that something means danger to it, and acts to avoid it. But it is really just a mechanical system.
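
A minimal sketch of that mechanical loop (the pattern names, the pretend sensor and the escape rule are all invented here purely for illustration) shows everything the witness would actually be observing:

```python
DANGEROUS_PATTERNS = {"approaching_crusher"}            # the stored "list"

def extract_pattern(raw_data: str) -> str:
    """Synthesize a pattern from raw environment data (trivial stand-in)."""
    return raw_data.strip().lower()

def read_environment(position: int) -> str:
    """Pretend sensor: the threat is only detected near position 0."""
    return "Approaching_Crusher" if position < 3 else "clear"

position = 0
# Move ("escape") only while the extracted pattern matches a stored one,
# i.e. until the equality with the "list" entry is invalidated.
while extract_pattern(read_environment(position)) in DANGEROUS_PATTERNS:
    position += 1
print("settled at position", position)                  # -> settled at position 3
```

Nothing in the loop refers to the machine itself; the "escape" is just a state change that happens to end the match.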

To deal with this issue, it seems to me essential to understand what we call meaning.

It is my impression that the crux of the matter is whether an entity is capable of distinguishing meanings that pertain to it and acting accordingly.

What is your definition of "meaning"?

Re: AI and the Death of Identity

Posted: April 6th, 2023, 10:56 am
by Count Lucanor
It seems you are not getting the point. You are involved in a circular argument: you assume a machine performs operations with some implied self-awareness, probably because of a misunderstanding of what processing information implies, and then ask if the task means something to that machine. The answer was given in the Chinese Room Experiment: syntactic procedures do not imply semantic processes.

Re: AI and the Death of Identity

Posted: April 6th, 2023, 12:35 pm
by psycho
Count Lucanor wrote: April 6th, 2023, 10:56 am It seems you are not getting the point. You are involved in a circular argument: you assume a machine performs operations with some implied self-awareness, probably because of a misunderstanding of what processing information implies, and then ask if the task means something to that machine. The answer was given in the Chinese Room Experiment: syntactic procedures do not imply semantic processes.
I am not claiming that the machine in my example finds meaning in the same way as a human. I am asking how a witness is to discern the difference.

Without knowing how a person determines what a particular piece of information means, it makes no sense to rule out that something non-human could do so.

But what is your definition of meaning?

Re: AI and the Death of Identity

Posted: April 7th, 2023, 1:25 pm
by Count Lucanor
psycho wrote: April 6th, 2023, 12:35 pm
Count Lucanor wrote: April 6th, 2023, 10:56 am It seems you are not getting the point. You are involved in a circular argument: you assume a machine performs operations with some implied self-awareness, probably because of a misunderstanding of what processing information implies, and then ask if the task means something to that machine. The answer was given in the Chinese Room Experiment: syntactic procedures do not imply semantic processes.
I am not claiming that the machine in my example finds meaning in the same way as a human. I am asking how a witness is to discern the difference.
Unless you can find another way to convey and assimilate meaning, there's no difference to discern. I can't see how you could come up with an abstract definition of "meaning" that completely bypasses the human experience of finding meaning, of making sense, the only one we can use as reference. Whatever you come up with, if it does not refer to that subjective experience, then it will be something else, perhaps an alternative to meaning, but not meaning per se. Your scenario, for example, is a simple algorithmic, mechanical procedure, where there are no concepts or interpretation. A human account of meaning involves those factors, which are missing in your scenario, among other things.
psycho wrote: April 6th, 2023, 12:35 pm Without knowing how a person determines what a particular piece of information means, it makes no sense to rule out that something non-human could do so.
The concept of meaning or subjective experience may still be fuzzy, but we have a pretty good idea of what sentience implies, in contrast with non-sentient behavior. Some key aspects are perceptual experience and sensation: in a dangerous scenario, a living being actually feels the danger, but in your scenario the device works with a signal; it doesn't experience fear, it doesn't experience at all. It's no different from clockwork. Comprehension also involves abstract mental representations: the inherent ability to reproduce lived experiences in your thoughts and to identify general patterns from particular instances. One can make complex algorithms for machines to simulate this, but that's what it is: a simulation.
psycho wrote: April 6th, 2023, 12:35 pm But what is your definition of meaning?
I would say, risking a novel interpretation: the mental organization of sensations, feelings and thoughts into an abstract model of the world that allows a living organism to cope with the environment, integrating past experiences. It involves attention, intention, memory and self-awareness. Input from our sensory organs only becomes information when it has been assimilated into this mental model.