psycho wrote: ↑April 3rd, 2023, 12:52 pm
Pattern-chaser wrote: ↑April 3rd, 2023, 6:27 am
psycho wrote: ↑April 2nd, 2023, 3:01 pm
Hypothetical question:
A machine builds models from the information in certain data.
In one of those cases, the information in that data corresponds to the machine's imminent destruction.
Why should that mean anything to the machine?
Because the machine is programmed for self-preservation, a la Asimov's 3 Laws of Robotics?
My question is whether you consider that a machine can or cannot find meaning in the data it processes.
Nobody knows.
The Chinese Room shows that properly kitted out computers can do things which give the impression they'd need to understand what they're doing, but actually don't. Increasingly sophisticated chatbots are designed to respond in human-like ways, without having conscious experience.
The problem is we don't understand the mind-body relationship, or what might in principle make a machine conscious, or whether it's impossible. We don't know the necessary and sufficient conditions, and we don't know if the nature of the substrate plays a necessary role.
Furthermore, consciousness isn't third-party testable; we don't have a consciousness-o-meter. And because we don't even know the necessary and sufficient conditions for conscious experience (which is required for 'meaning'), we can't even test for those, whether it's a toaster, a daffodil, a fellow human being or a super-duper computer.
What we've been getting by on is inference from analogy. I know I'm conscious, and because you're so similar to me I assume you're conscious. We note that our neural activity correlates with our conscious awareness. We look at chimps with similarish brains and behaviours, and assume they are conscious. At some point we become less confident about other species being conscious, as the similarities diminish.
Computers have some functional similarities (most notably in the complex processing of information, Searle's point), and we can make them behave in ways which are practically indistinguishable from human behaviour. But we can't test them for consciousness, we can't test for the necessary and sufficient conditions, and we don't even know if machine consciousness would be similar enough to ours to recognise. Just as we don't know if daffodils and toasters are conscious in ways so dissimilar to ours that we don't recognise it.
Your test assumes the AI would have the ability to consciously choose, and would choose the option a person might choose based on evolved human instincts and the value of self-preservation. Such a concept might mean nothing to a computer not moulded by survival instincts, so looking for such similarity might not work. If there is something it is like to be an AI, we can't assume it will be like being a human.