
Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 28th, 2022, 8:30 am
by Sculptor1
GrayArea wrote: December 28th, 2022, 5:00 am
Sculptor1 wrote: December 26th, 2022, 7:29 am
Only relevant bits.
AI is not "aware".
Yes, we know that current A.Is are not aware. You really don't have to repeat yourself over and over again. But the premise here is that A.I "will" be aware. Of course, you can feel free to give your arguments on that topic as I have been giving my own. But I have quite literally specified it in the title, "Will Sentient A.I be more altruistic than selfish?" instead of "Will current A.I be more altruistic than selfish".
But there is no prospect of awareness in AI.
None.
And since an aware AI would have to be the result of a completely different technology, it is not possible to ask your question.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 28th, 2022, 4:27 pm
by GrayArea
Sculptor1 wrote: December 28th, 2022, 8:30 am
GrayArea wrote: December 28th, 2022, 5:00 am
Sculptor1 wrote: December 26th, 2022, 7:29 am
Only relevant bits.
AI is not "aware".
Yes, we know that current A.Is are not aware. You really don't have to repeat yourself over and over again. But the premise here is that A.I "will" be aware. Of course, you can feel free to give your arguments on that topic as I have been giving my own. But I have quite literally specified it in the title, "Will Sentient A.I be more altruistic than selfish?" instead of "Will current A.I be more altruistic than selfish".
But there is no prospect of awareness in AI.
None.
And since an aware AI would have to be the result of a completely different technology, it is not possible to ask your question.
In that case, you can try to provide an argument as to why there is no prospect of awareness in A.I, why we can never artificially replicate a brain and give it sentience.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 28th, 2022, 7:39 pm
by Sculptor1
GrayArea wrote: December 28th, 2022, 4:27 pm In that case, you can try to provide an argument as to why there is no prospect of awareness in A.I, why we can never artificially replicate a brain and give it sentience.
Funny.
Tell me how you think an AI could be altruistic or selfish!
You might as well ask me to justify why a car cannot be sentient like Chitty-Chitty Bang-Bang.
The only thing we know about awareness is that each of us can know we have it, but we cannot even tell if other humans or animals have it too.

Computer systems are not brain replicas.
I do wonder if the future might actually integrate biological systems in their networks, but we are not doing anything like that.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 29th, 2022, 12:38 am
by GrayArea
Sculptor1 wrote: December 28th, 2022, 7:39 pm Tell me how you think an AI could be altruistic or selfish!
I talked to you about it already. You just didn't read it. What you just said only strengthens my suspicion that when you read something that you disagree with, you just skip it and call it a day. It explains how you almost never get anything out of my replies and keep repeating the same old ideas over and over again.
Sculptor1 wrote: December 28th, 2022, 7:39 pm You might as well ask me to justify why a car cannot be sentient like Chitty-Chitty Bang-Bang.
What?
Sculptor1 wrote: December 28th, 2022, 7:39 pm The only thing we know about awareness is that each of us can know we have it, but we cannot even tell if other humans or animals have it too.
Yes, though that's only because we are trapped inside our own awareness of the world. And that still doesn't mean the world doesn't exist, or else we would have nothing to be aware of. And so, if the world does exist, then it sure can produce awareness like it produced yours. By definition, "your existence" and "the world" are not mutually exclusive.
Sculptor1 wrote: December 28th, 2022, 7:39 pm Computer systems are not brain replicas.
Not yet, because current computer systems aren't modeled on the brain's structure to begin with. First we'd have to know more about the brain itself.
Sculptor1 wrote: December 28th, 2022, 7:39 pm I do wonder if the future might actually integrate biological systems in their networks, but we are not doing anything like that.
Who's "we" and how are you so sure?

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 30th, 2022, 11:19 am
by Gertie
GrayArea wrote: December 24th, 2022, 3:29 am
Sculptor1 wrote: December 22nd, 2022, 7:33 pm
GrayArea wrote: December 18th, 2022, 8:39 pm
Count Lucanor wrote: December 18th, 2022, 7:59 pm

That's a bold statement. There's no sign the most advanced machines have ever felt anything. They calculate with complex algorithms, but that's not the same as experiencing something.
You are right in that there has been no sentient A.I so far that feels anything. I just think that this has less to do with where we are right now and more to do with how far we are willing to go.
AI will be neither of these things, and it would be an abuse of language to so describe it.

An AI will operate within the parameters of its programming. It will never act in the interests of itself or of humans. It will do what the code instructs it to do with no purpose.
Until you understand this simple fact you will never fully understand what an AI is or how it operates.

Whilst such actions might appear altruistic or selfish, and whilst that might be to the advantage or disadvantage of humans, none of these acts will be performed with any intent in "mind", since an AI has no mind.

If you think in terms of altruism or selfishness then you are just anthropomorphizing. Primitive people have always done this about the weather, the great mountain, the cruel sea, and this is the basis of most religions. But just like religion, it is a delusion.
I don't think you understand what kind of A.I I'm talking about. I'm talking about the kind of A.I that has not been made yet, the kind that is specifically modeled after the human brain. The kind of A.I that is sentient and therefore has a mind, so to speak. On the other hand, the A.I you describe is the kind of A.I that does not have a will of its own nor sentience. When it comes to those, you are right that we are the ones that have to program every single one of their actions.

But if we were to build a sentient A.I, then realistically the only programming required to make it would be the programming that "creates" the brain, not the programming that defines every single one of its actions, because sentient beings can cause their own actions without us having to define them. Which means that the only boundaries that the parameters of its programming would set are the boundaries of "what is its brain and what isn't", a.k.a the boundaries of its awareness, not the boundaries of "what kind of actions it can perform or not".

It is correct not to anthropomorphize non-sentient A.Is, as they are essentially inanimate objects and the dualism of the "self" and the "external world" does not apply to them. On the other hand, it would be correct to anthropomorphize sentient A.Is, as they will have a sense of "self", which then automatically creates their "external world". And as long as the A.I is aware of these two, then it will be aware of how its actions will affect either its "self" or the "external world"—which we can call either "selfish" or "altruistic". Also, it may or may not have a preference between the two.

No offense, but I don't think I have to "understand what an AI is or how it operates". It's clear that you and I are referring to different types of A.Is—I have already stated prior to making this reply that the kind of A.I that I am talking about is the kind of A.I that is sentient and has a mind. You have to understand what kind of A.I I am talking about and how that operates.


Anyway, before you pull out words like "Primitive" or "Delusional", I suggest you actually read sentences that you're quoting. It would save you and me a lot of time.
GrayArea wrote: December 18th, 2022, 8:32 am Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point.
There are some obvious ways to go imo. One is to programme a computer to replicate neural connectivity; the Human Connectome Project is working on mapping the human brain, but as it's the most complex thing in the universe we've encountered, it's an unimaginably massive task. It potentially offers the Black Mirror scenario of downloading your own consciousness and never dying as such. If that pans out, then presumably the mind would have the same human traits like altruism and selfishness. And potentially if you artificially augmented altruistic connectivity in there you'd get a more altruistic mind. If you bumped up the intelligence quota you'd get a more intelligent mind, etc. It would be like a designer baby, but to get an AI smarter or more altruistic than possible for a human, you'd be tinkering with the circuitry in unpredictable ways, because of the incredibly complex interactivity of the circuitry.

Another way would be to build a self-learning robot, with the ability to access and process huge amounts of information until it hit whatever threshold might exist to spark conscious experience. We'd have no way of predicting what it would be like to be such a differently 'evolved' mind. To assume it could even conceptualise itself as a 'self', a being existing independently of the information it processes, would be a guess. Anthropomorphising such a being would be a mistake, and we might not even have the language or concepts to understand what it would be like. Nagel points out that what it is like to be a bat with sonar is unknowable to us, and here the difficulty of comparison might be beyond conceivability. Unless we somehow programme in behaviours we recognise as 'altruistic', 'willed', etc. It would be a step into the dark with no access to a light switch.

Transhumanism might be another way to go. You can imagine replacing parts of brain circuitry with enhanced silicon parts, perhaps even the whole brain. And if the lights stayed on, you'd have a human-like minded AI.


But again, remember these scenarios make the assumption that simply mimicking substrate-independent functionality (complex, inter-connective information processing) would provide the necessary and sufficient conditions for consciousness. We don't know if there's something about organic electro-chemical cellular brains which is necessary for consciousness, because we don't understand the mind-body relationship. For example Penrose and Hameroff's Orch OR theory suggests microtubules in neurons play a key role, whereas Tononi and Koch's IIT theory suggests the information processing function is sufficient (possibly implying a panpsychism where current computers, toasters, daffodils, rocks and particles have some form of consciousness already, we just don't recognise it because it's so dissimilar to our own).

Which, if any, are on the right track? Nobody knows. The mind-body relationship has implications for the most fundamental nature of reality. Anybody who thinks they do know doesn't grasp Chalmers' Hard Problem. We don't even know enough to be able to reliably test an AI for consciousness; we don't even know enough to tell whether anyone else is conscious - it's all inference from similarity when you get down to it.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 30th, 2022, 12:55 pm
by Gertie
GrayArea wrote: December 27th, 2022, 1:02 am
Stoppelmann wrote: December 25th, 2022, 6:47 am
1. Philosophers often use the term ‘qualia’ to refer to the introspectively accessible, phenomenal aspects of our mental lives. There is quite a discussion to follow regarding qualia online if one has a look, and if we think about the central role our brains have in processing the stimuli that our senses give it, there is a real danger of misconception. This is especially obvious when we look at the survival aspects of identifying danger, and the example of taking a piece of rope to be a snake, showing our minds to be biased towards dangers that often are no longer real in our lives, but inherited from ancestors. We have seen this in patients with dementia and even in some cases in which people seem to be lethargic, when cognitive disabilities are dominant.

This means that awareness is something that needs to be trained, primarily in distinguishing reality in our perceptions from misinterpretations, which requires a degree of equanimity to perform correctly. Rushed or stressed people regularly misinterpret qualia, and unquestioning reliance on our conscious experience breaks down at some point, which experts in mindfulness are trained to expect. There are numerous examples of people who through accident, illness, or intention, lose the ability to interpret their perceptions in the standard way humans do, and they report seeing a world that is vastly different, in which objects are difficult to distinguish from their surroundings, and inexplicable things occur, which they cannot put into language. This is sometimes the experience that leads to extreme mental disorders and an inability to cope in everyday life. It appears that the sorting the brain undertakes is geared towards survival and the clear distinction of objects.

So, the illusion of objects isn’t that the objects are not there, but that they are not what they appear to be, and are instead vibrating forms or patterns that vibrate at a different frequency to other forms, which causes them to attract or repel depending on the vibration frequency. Your jelly and finger have varying frequencies, and therefore the reaction occurs as you described. It is possible for people whose brains no longer distinguish the objects in a normal way to see this vibration, although what they see then has often been said to be beyond language.
I also believe that Qualia isn’t “whatever the objective reality contains” that we try to grasp as accurately as possible and sometimes fail to do so. Instead, it should be “whatever we grasp OF whatever the objective reality contains”.

And I think I have a basic idea as to how Qualia "could occur" which then would result in having these qualities that you and I have mentioned above.

Consider the color Red for example. Obviously "Redness" is something only we can perceive, and it is not a part of the physical world. One could say that Redness is how the optical neurons translate that "red" frequency part of light into their own language, much like how the "jelly" translates the "finger", where the frequency itself is something they can translate only using what materials and functions they are made out of. For them to translate the frequency would be for them to physically react to the lightwave. Solely from the subjective perspective of an object, its external impulse(its actual, objective, physical form) can be subjectively translated into "the things it causes within the object from the object's perspective".

The last two words, “object's perspective”, hold a lot of significance though. If the red lightwave causes a few action potentials here and there, and makes neurons fire some neurotransmitters etc, then since that is basically what the red lightwave causes within the neurons, are those physical actions literally equal to the color red?

No. It's not equal, because we're looking at all this from a third-person perspective and not from the object's perspective. We cannot explain from an objective point of view (that is, through third-person observations), how objects subjectively translate external impulses. Through observations we can only know how the object interacts with the external impulse, not what its interaction "means" to the object a.k.a Qualia. However, we can still see how the outputs of objects differ after interacting with various different external impulses. For the neurons' case, the output would be neurotransmitters. A group of neurons seem to release their neurotransmitters in many different ways and many different moments depending on what their external impulses cause within the cell structure, or how they “translate” the external impulses.


Once a neuron creates an output which is the translated version of its external impulse, the second neuron would receive it. And the second neuron would then react to the output, which would accordingly create another output from the second neuron.


To delve deeper:
A neuron’s output after receiving an external impulse is the “result” of the neuron translating the external impulse, but the output itself is NOT the translation. Rather, the translation itself is what CAUSES the output or makes the output POSSIBLE (seen from the neuron’s own subjective perspective and not from our objective observation of the neuron), which is equal to “whatever makes the neuron exist the way it does”—which is therefore equal to “whatever makes it react to external impulses the way it does”.

Each output / neurotransmitter has a predefined function. In overly simple terms, they either make the neuron more or less likely to fire.

When an output from the aforementioned neuron is transmitted to the second neuron and causes a reaction there according to its own function, and if the output’s function is to make the neuron more likely to fire, then the second neuron can also fire and give an output of its own.

Therefore, we can say that the second neuron has similarly translated the first neuron’s output into its own output. And as always, the “second output” is not the second neuron’s actual translation of the “first output”. The actual translation of the “first output” is equal to what MAKES the second neuron generate the “second output”—from its own subjective perspective.

And so it continues, like a chain.
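As a purely illustrative sketch of this chain of translations (the numbers, names and response rule below are invented for the example, not taken from neuroscience), each neuron can be modelled as one fixed input-to-output mapping, with each output becoming the next neuron's input:

```python
# Illustrative sketch of the "chain of translations" idea (invented values).
# Each neuron applies one fixed, predefined mapping from its input to an
# output; the output of neuron n becomes the input of neuron n+1, so the
# original stimulus leaves a causal trace all the way down the chain.

def make_neuron(weight, threshold):
    """A neuron's fixed 'way of reacting': emit its weight if the
    incoming signal clears its threshold, otherwise stay silent."""
    def react(signal):
        return weight if signal >= threshold else 0.0
    return react

# An invented chain: two excitatory neurons pass the signal on; an
# inhibitory one (negative weight) damps whatever comes next.
chain = [
    make_neuron(weight=1.0, threshold=0.5),   # excitatory
    make_neuron(weight=0.8, threshold=0.5),   # excitatory
    make_neuron(weight=-0.3, threshold=0.5),  # inhibitory
]

signal = 1.0  # the external impulse
for react in chain:
    signal = react(signal)  # each neuron "translates" the previous output
    print(signal)
```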

Whatever makes the “nth” neuron generate the “nth output”—from the nth neuron’s subjective existence—is always “controlled” (but not created, since neurons cannot “create” other neurons) by the previous neuron’s output when it comes to generating the nth neuron’s output. And the way in which they are “controlled” and the causal reasons in which the outputs are “decided”—when seen from the subjective perspective of the group of neurons—could be defined as Qualia.

The so-called Qualia is then successfully conserved throughout the “chain” because all outputs are causally connected due to laws of physics.

This is just me re-describing the process of translation in different terms, where each different external impulse or input (that isn’t too vague for the neurons’ structure) can generate a different output, because a neuron possesses a fixed inherent “function” (like a mathematical function, but slightly different) within its cell structure that determines its output accordingly by “putting the external impulse / input through its said function”, which is a process governed by causality and logic, as this is all caused by the laws of physics. Thus the causal connection between each and every neuron’s subjective existence.

This “function” is merely our objective interpretation of the aforementioned property of the neuron that “makes the output possible = what causes the output”. Its “subjective interpretation” cannot be generated from our external observation, because observation is inherently objective, but with some mental gymnastics it can be assumed to be something like “whatever makes that function exist from the perspective of the function itself”.

So in that sense, all neurons that contribute the most to our self-awareness share the same translating method, meaning that they share the same language or dialectics. Each neuron cell, or each of their outputs, can be considered a single building block of language or dialectic. And since I believe that neural outputs are generated by causality (causality as in “this generates that”, and “that generates this” etc.…think inhibitory or excitatory neurotransmitters, and their chemical types) due to the reasons above, it is possible to create dialectics using just neurons. The dialectics would then be in the subjective realm of the neurons, as the neurons are the only ones that can “subjectively translate” those outputs from other neurons, which is a process required for the dialectics to be created.

And I believe that all this is the basis for thinking or feeling too, and not just any other stereotypical definitions for Qualia such as color or sound. These dialectics operate by logical causality, meaning the way they create dialectics is governed by logical causality. However “what” they create is not governed by logical causality of the external world, because this is purely inside a subjective realm.

Which is probably why we can’t explain colors purely in terms of mathematical equations. It would also be what makes the kind of red seen in apples slightly different from the kind of red seen in strawberries.

Think of “how they create” as logical functions, and think of “what they create” as drawing on a canvas. Drawing things on a canvas does require causality and logic, since the canvas, the brush, and the painter all abide by the laws of physics. However, the painter can still end up “drawing” a “scenario” that is impossible to describe with laws of physics.

Lastly, I personally believe that the subjective difference between different senses such as seeing, hearing, smelling, tasting, and touching comes from the fact that the sensory neurons belonging to each sense react to external impulses of different physical natures (thus different ways of affecting the neurons), for different reasons and in different ways, each creating a different causal chain of neural translations (though they are “different” causal chains belonging to each sense and body part, they all still belong to the one big causal chain that creates the “self”, meaning they are subsets).


Stoppelmann wrote: December 25th, 2022, 6:47 am
2. Awareness itself, or consciousness, or even existence, is a state of being. Humanity has for millennia had traditions that seek to refine that, and modern day MBSR, or MBCT, as methods to reduce stress and provide cognitive therapy, which helps in relapse prevention with patients with recurrent depression or chronic pain, are a continuance of that. I trained in MBSR in 2002 and was able to pass on my experience to staff and patients with these problems, as far as they still had unrestricted cognitive abilities, and so I have a small knowledge of its potential.

The discovery of the distraction that our awareness is continually exposed to, much of it coming from our own inner voice, making a simple exercise of concentrating on our breath, or on our surroundings difficult, makes it clear why we often feel rushed or stressed. But the discovery of an observer, or a listener, at the core of our awareness, enables us to train to allow the distractions, even the distractions of depression and pain, to pass by without engaging with them. That is why such traditions that utilise meditation say that the real you is that silent observer. It is then a meditative exercise to create what has become known as the flow effect, in order to overcome the depressive episode or pain attack, or just the confusion of thoughts racing through the brain.

I doubt that without intervention the sorting function of the brain can be overcome, therefore we are reduced to watching the perceptions pass without engaging with them. The quality of awareness then has a depth that it is quite impossible to imagine occurring within a computation or calculation of predefined code. Rather, AI may have algorithms that imitate spontaneous decisions, but the scope is preordained, and given a framework in which reactions are possible.
First I’m gonna re-build upon my ideas on how awareness can be possible, which can be seen from the thread that I linked.

You must first remember the idea of the “chain” of neurons that I’ve mentioned above. My theory is that since it seems evident that within the chain, each neuron successfully “perceives” another neuron already in its own “subjective” way, all we have to do is make this chain “perceive itself” in order for it to be considered self-aware, which is quite literally just subjective self-perception.

And then I realized that just as our brain can be classified into numerous “chains”, it can also be classified into numerous “rings”, which are just the aforementioned chains except that they now make a full circle instead of a straight line. So when a chain of neurons makes a full circle and becomes a ring, then as long as one neuron perceives another, all the neurons within the ring will perceive all the neurons within the ring. Each single neuron will perceive all the other neurons, because each output that a neuron creates is caused by the output of the previous neuron, which is in turn caused by the one before it, until it comes full circle, so that each output is caused by all the other outputs and vice versa. As a result, if each single neuron perceives all the other neurons, and all the neurons perceive all the other neurons, then this “ring” of neurons will perceive “itself” as a singular entity, because all of its components / ways of generating outputs are causally connected, therefore causally singular.

And that’s just one ring within the brain. Due to how many neurons there are in the brain, the brain can be classified as / divided into billions of rings. All we have to do is simplify it into one giant, complex ring, where each of its components is a ring of its own instead of a neuron. The same logic would apply anyway.
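A toy sketch of the ring idea, under the same caveat that every number and name here is invented for illustration: after a lap or two around the loop, every node's state is causally downstream of every other node's state.

```python
# Toy "ring" of nodes: each node's next state is a fixed function of the
# previous node's output, and the last node feeds back into the first,
# so the causal chain closes into a circle.

N = 5
state = [0.0] * N
state[0] = 1.0  # seed impulse entering the ring

def translate(x):
    # one shared, fixed "translation" rule (arbitrary for illustration)
    return 0.9 * x + 0.01

for lap in range(3):            # a few full circles around the ring
    for i in range(N):
        prev = state[(i - 1) % N]
        state[i] = translate(prev) + 0.5 * state[i]  # own state persists

print(state)  # every entry now reflects contributions from all the others
```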

So now that we have the “canvas” a.k.a self-awareness ready, we can finally store “whatever we are aware of” a.k.a Qualia, within the canvas.

If I perceive a vase, then is that going to fire up a group of neurons in my brain which is collectively shaped like a vase? No. The features of the vase, such as its color or its shape, may be perceived just by my optical neurons, but those neurons will transmit information to the giant pool of all other neurons, a.k.a the giant “ring” where my self-awareness resides, so that the information circulates that ring and becomes a part of the entirety of the neurons’ causality, thus “part of” the “entirety of” what I am aware of.

When I poke the jelly with my finger, to the jelly, I am defined as my finger. And to the jelly, my finger is defined as whatever effect it had on the jelly from its own subjective perspective.

Similarly, when we look at a neuron from a third person perspective, its external impulse, such as light of a certain frequency, is defined as what it objectively causes within itself (something we can see through observation). And when the neuron looks at itself from its own perspective, it now has an actual subjective perspective of “what it objectively causes within itself”, which is Qualia.

But a neuron cannot perceive itself. However, we can simply substitute the “ring” in the place of a neuron, because the ring can perceive itself. And then we can also substitute “what the external impulse causes within the neuron” into “(same as before)...causes within the neurons” / affects its single unified causality.

In other words:
What makes them react a certain way, and what all of that means to the entirety of the brain—as in, what it causes in the entirety of the brain, and how the brain subjectively perceives that.

So, that’s basically the end of my theory on self-awareness and consciousness. With that said, regarding your opinion that all this might be impossible to replicate within computational systems, I personally think that there is still a chance of us being able to do so, mainly because all we might have to do is replicate only the key aspects of neurons that make self-awareness and Qualia possible, excluding all the other negligible aspects that would be a waste of time to replicate.

Keep in mind that the ideas below are all theoretical and unproven, as I am not an expert in this field either. I just hope that I can offer at least some food for thought so that we can keep the discussion going, and then you and the other viewers of this topic can decide whether the ideas seem plausible or not. If any idea is not plausible and there is enough logical evidence against it, then I will be convinced.

With that said, one of the options would be to create an artificial neuron that physically replicates only the aforementioned key features within the neurons that generate consciousness, instead of replicating literally every single feature within the neurons.

Which includes: the ability to translate inputs (which are the outputs of others of the same kind) and, as a result, open up to a different kind of input. Then the ability to translate that different kind of input and produce an output as a result of sheer causality within the system. And the ability to have its own outputs become the aforementioned inputs for its neighboring artificial neurons.

Adding on, the ability to form a direct causal relationship with the external world—i.e. Being able to instantly react to impulses from the external world in a way that makes use of all the components throughout the entire system.

With that said, the “different kinds of inputs” could be something that really meddles with the system in a fundamental way, so that the system can translate those inputs in a clearer and more accurate manner—making it translate as much information as possible. They should cause a diverse range of interactions, and thus a wider variety of reasons to cause interactions or reactions between the artificial neurons, as well as more ways of reacting between or inside the artificial neurons.

(And being able to have those "reasons" and "ways of reacting" under total control, so that the translations actually BELONG to the self-aware consciousness, and so that the chain/ring of self-awareness can actually form under a single entity.)

Since neurons translate external impulses into the language they run in, the artificial neuron could do the exact same and translate external impulses into binary (determined by a specific function / equation), if that is the language it would run in. And furthermore, there could be a component within that artificial neuron that takes in those specific translated binaries as an input and outputs a certain amount of signal determined by a specific function / equation, so that the neighboring artificial neuron could then react to that “new” external impulse, and so on.

The “equation” here that mathematically decides the way a specific input leads to a specific binary reaction, and the way a specific binary leads to a specific output signal of the artificial neuron, would be the substitute for how “the way a neuron reacts” chemically determines which ion gates the neuron will open, or what neurotransmitter the neuron would release and how much. Therefore this one equation should be universal throughout all the neurons, just as the laws of chemistry are universal throughout all the neurons. This would also make it possible for the next neuron to use the previous neuron’s output to successfully perceive **how the previous neuron translated its own previous neuron’s output** (since they share the same equation when it comes to translating = perceiving one thing as something—which the outputs are a product of), therefore successfully creating a shared perspective throughout the neurons.
(Math being the substitute for chemistry here.)

The more detailed the equation, the better, as it solidifies more and more ways of determining the output / ways of translating—which would lead to a more enriched and diverse artificial consciousness.

This artificial neuron could first take in the input (the output of the previous artificial neuron) → instantly react to it by opening up to a different kind of input (which, unlike neurons and their external ions, doesn’t have to come from outside; this can all still happen on the inside) (“the way it opens up / whatever opens up”, or “whether it will open up”, will be decided by the very previous input) → react once more to that different kind of input → produce a signal that becomes its output, and becomes the input of the next artificial neuron.
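A minimal Python sketch of that proposed pipeline, assuming one made-up "universal equation" shared by every artificial neuron (the quantization rule and damping factor are arbitrary placeholders, not a working design):

```python
# Sketch of the proposed artificial neuron: take the previous neuron's
# output -> "translate" it into an internal binary code via one universal
# equation -> react to that internal code -> emit an output signal that
# becomes the next neuron's input. All equations are invented placeholders.

def universal_translate(signal):
    """The shared 'equation' every neuron uses: quantize the incoming
    signal into an 8-bit internal code (the 'binary language')."""
    level = max(0, min(255, int(signal * 255)))
    return format(level, '08b')

def universal_output(code):
    """The shared rule mapping the internal code back to an output signal."""
    return int(code, 2) / 255 * 0.95  # slight damping, arbitrary choice

class ArtificialNeuron:
    def step(self, incoming):
        code = universal_translate(incoming)  # open up to the internal input
        return universal_output(code)         # react and produce an output

# Chain a few neurons; each one's output is the next one's input.
neurons = [ArtificialNeuron() for _ in range(4)]
signal = 0.7  # external impulse
for n in neurons:
    signal = n.step(signal)
    print(signal)
```

Because universal_translate and universal_output are shared by every neuron, the chain matches the "one universal equation" condition described above; the sketch says nothing about whether such a chain would be conscious.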
Interesting post GA. It's intriguing to me that neurons appear to be much like each other, whether they're part of the optical subsystem, hearing, pain or anything else. Which suggests the patterns of interactions (or their patterned effects) have a bearing on the 'flavour' of experience, and perhaps those patterns replicated in any substrate would have similar results. On the other hand, cells interact in all sorts of complex ways in our body, so what is it about brains specifically which manifests correlates of consciousness which are 'globally' manifested as a specific, discrete self?

And there remains the issue of the explanatory gap between the physical processes which are apparently physically fully causally explained, and this extra 'what it is like' experiential state. The Mary's Room thought experiment points out that the most detailed physical explanation doesn't capture 'what it is like' to see red for example -

The thought experiment was originally proposed by Frank Jackson as follows:

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like "red", "blue", and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence "The sky is blue". ... What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?[1]

In other words, Jackson's Mary is a scientist who knows everything there is to know about the science of color, but has never experienced color. The question that Jackson raises is: once she experiences color, does she learn anything new? Jackson claims that she does.

https://en.wikipedia.org/wiki/Knowledge_argument

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 30th, 2022, 1:56 pm
by Sculptor1
"WIll" is he big question.
SInce AI has no will, then it cannot act in a willful way; either altruistically or selfishly.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 30th, 2022, 3:30 pm
by Stoppelmann
Sculptor1 wrote: December 30th, 2022, 1:56 pm "Will" is the big question.
Since AI has no will, it cannot act in a willful way, either altruistically or selfishly.
This is also the experience of many working with people with dementia. We discovered that the patients had no intention, and therefore there was no question of them "doing something on purpose" to annoy us; it was just that a stimulus had fostered a statement or request that was forgotten as soon as it was formulated. These people were running on automatic, and I think that this would probably be the absolute limit of AI. Therefore altruism or selfishness is out of the question.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 30th, 2022, 7:35 pm
by GrayArea
Gertie wrote: December 30th, 2022, 12:55 pm
Interesting post GA. It's intriguing to me that neurons appear to be much like each other, whether they're part of the optical subsystem, hearing, pain or anything else. Which suggests the patterns of interactions (or their patterned effects) have a bearing on the 'flavour' of experience, and perhaps those patterns replicated in any substrate would have similar results. On the other hand, cells interact in all sorts of complex ways in our body, so what is it about brains specifically which manifests correlates of consciousness which are 'globally' manifested as a specific, discrete self?

And there remains the issue of the explanatory gap between the physical processes which are apparently physically fully causally explained, and this extra 'what it is like' experiential state. The Mary's Room thought experiment points out that the most detailed physical explanation doesn't capture 'what it is like' to see red for example -

The thought experiment was originally proposed by Frank Jackson as follows:

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like "red", "blue", and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence "The sky is blue". ... What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?[1]

In other words, Jackson's Mary is a scientist who knows everything there is to know about the science of color, but has never experienced color. The question that Jackson raises is: once she experiences color, does she learn anything new? Jackson claims that she does.

https://en.wikipedia.org/wiki/Knowledge_argument
The subjective (Sense of Self and Qualia) cannot be explained objectively through observed facts, and the objective (observed neural activities) cannot be explained subjectively. They are simply two different things. However, the very boundary that separates the subjective from the objective should be able to encompass both the subjective AND the objective—just because it separates them into two.

It is my belief that the said boundary can be known once one embraces the fact that subjectivity is possible due to objectivity creating the object, and that objectivity is possible because subjectivity creates the object.

So what we could do in order to explain subjectivity objectively is to embrace this inherent boundary between the subjectivity of our self and the objectivity of the brain. We simply do this by embracing our own subjective and objective existence (and also describing our objective existence because it is describable)—because that is the very act that “creates” the said boundary of our subjective existences.

The boundaries of an object are created by the object itself, while the object itself is also created by its boundaries. Both of these events happen at the same time for the existence of the object to happen, as one cannot happen yet without the other.

Think of it this way. When an object exists, it is both the object as perceived from the object’s perspective, and the object as perceived from outside of the object, that allows for the object’s physical existence to be possible. And what does it mean for the object to see itself from its perspective? It means the same thing as the brain seeing itself from its perspective a.k.a self-awareness.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: December 30th, 2022, 7:38 pm
by GrayArea
Gertie wrote: December 30th, 2022, 11:19 am
There are some obvious ways to go imo. One is to programme a computer to replicate neural connectivity; the Human Connectome Project is working on mapping the human brain, but as it's the most complex thing in the universe we've encountered, it's an unimaginably massive task. It potentially offers the Black Mirror scenario of downloading your own consciousness and never dying as such. If that pans out, then presumably the mind would have the same human traits like altruism and selfishness. And potentially if you artificially augmented altruistic connectivity in there you'd get a more altruistic mind. If you bumped up the intelligence quota you'd get a more intelligent mind, etc. It would be like a designer baby, but to get an AI smarter or more altruistic than possible for a human, you'd be tinkering with the circuitry in unpredictable ways, because of the incredibly complex interactivity of the circuitry.

Another way would be to build a self-learning robot, with the ability to access and process huge amounts of information until it hit whatever threshold might exist to spark conscious experience. We'd have no way of predicting what it would be like to be such a differently 'evolved' mind. To assume it could even conceptualise itself as a 'self', a being existing independently of the information it processes, would be a guess. Anthropomorphising such a being would be a mistake, and we might not even have the language or concepts to understand what it would be like. Nagel points out that what it is like to be a bat with sonar is unknowable to us, and here the difficulty of comparison might be beyond conceivability. Unless we somehow programme in behaviours we recognise as 'altruistic', 'willed', etc. It would be a step into the dark with no access to a light switch.

Transhumanism might be another way to go. You can imagine replacing parts of brain circuitry with enhanced silicon parts, perhaps even the whole brain. And if the lights stayed on, you'd have a human-like minded AI.


But again, remember these scenarios make the assumption that simply mimicking substrate-independent functionality (complex, inter-connective information processing) would provide the necessary and sufficient conditions for consciousness. We don't know if there's something about organic electro-chemical cellular brains which is necessary for consciousness, because we don't understand the mind-body relationship. For example Penrose and Hameroff's Orch OR theory suggests microtubules in neurons play a key role, whereas Tononi and Koch's IIT theory suggests the information processing function is sufficient (possibly implying a panpsychism where current computers, toasters, daffodils, rocks and particles have some form of consciousness already, we just don't recognise it because it's so dissimilar to our own).

Which, if any, are on the right track? Nobody knows. The mind-body relationship has implications for the most fundamental nature of reality. Anybody who thinks they do know doesn't grasp Chalmers' Hard Problem. We don't even know enough to be able to reliably test an AI for consciousness; we don't even know enough to tell whether anyone else is conscious - it's all inference from similarity when you get down to it.
We can divide the self into two—what we perceive, and what we are.

In this sentence, “what we perceive” can be equated to Qualia, while “what we are” can be equated to the sense of self or self-awareness.

My assumption is that altruism either originates from, or can be strengthened by, the practice of equating “we” with “what we perceive”, as “what we perceive” is the external world in the translated form of Qualia. And that does not change the fact that our Qualia is inherently of the external world.

That is to say, what if we “made” a brain that inherently perceives “itself” as whatever it perceives?

Prior to this, I may have mentioned briefly that self-awareness can be boiled down to an object perceiving itself.

So, what if we can make this theoretical artificial brain become aware of “itself” by becoming aware of its external world? I suppose in order to do that, the boundary between the artificial brain and the external world should first be weakened or even broken down. How can we do this to any kind of object to begin with?

I have said before that depending on the perspective, an object and its external world can be perceived as one and the same. But this particular perspective isn’t all that subjective. It’s more so an objective way of classifying things by their objective features. This is just a matter of seeing the object and its external world as one group of atoms, which they are, as long as we define this group of atoms to be “all atoms in existence”. In this case, the objective feature that defines the group of atoms is simply “atoms that exist”.

Thus now we embrace the fact that an object and its external world are one and the same when they are classified the same.

In order for this artificial brain to be “one” with the external world while it becomes aware of itself, so that it equates itself with the external world, the brain must become self-aware through a series of physical actions that are CAUSED by the very fact that the brain and the external world can be classified as the same existences. This is because physically, the brain is one with the external world just by existing inside the world. Now all the brain has to do is to replicate that dynamic within its own mind, which would be possible if its mind were to be created as a product of that dynamic to begin with.

Though as you can see, all of this is very theoretical and abstract. Just think of it as food for thought.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: January 1st, 2023, 9:59 am
by Pattern-chaser
Gertie wrote: December 30th, 2022, 11:19 am There are some obvious ways to go imo. One is to programme a computer to replicate neural connectivity; the Human Connectome Project is working on mapping the human brain, but as it's the most complex thing in the universe we've encountered, it's an unimaginably massive task. It potentially offers the Black Mirror scenario of downloading your own consciousness and never dying as such. If that pans out, then presumably the mind would have the same human traits like altruism and selfishness. And potentially if you artificially augmented altruistic connectivity in there you'd get a more altruistic mind. If you bumped up the intelligence quota you'd get a more intelligent mind, etc. It would be like a designer baby, but to get an AI smarter or more altruistic than possible for a human, you'd be tinkering with the circuitry in unpredictable ways, because of the incredibly complex interactivity of the circuitry.
I think you're getting so far ahead of our current knowledge that this might be difficult or impossible. We don't know how the brain produces the mind — if it does? — never mind how to influence a particular attribute by adjusting the connectivity of a brain! I suspect that 'connectivity' is the mind, but that's only my guess...

Re: Will Sentient A.I be more altruistic than selfish?

Posted: January 5th, 2023, 5:05 am
by Gertie
GrayArea wrote: December 30th, 2022, 7:35 pm
Gertie wrote: December 30th, 2022, 12:55 pm
Interesting post GA. It's intriguing to me that neurons appear to be much like each other, whether they're part of the optical subsystem, hearing, pain or anything else. Which suggests the patterns of interactions (or their patterned effects) have a bearing on the 'flavour' of experience, and perhaps those patterns replicated in any substrate would have similar results. On the other hand, cells interact in all sorts of complex ways in our body, so what is it about brains specifically which manifests correlates of consciousness which are 'globally' manifested as a specific, discrete self?

And there remains the issue of the explanatory gap between the physical processes which are apparently physically fully causally explained, and this extra 'what it is like' experiential state. The Mary's Room thought experiment points out that the most detailed physical explanation doesn't capture 'what it is like' to see red for example -

The thought experiment was originally proposed by Frank Jackson as follows:

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like "red", "blue", and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence "The sky is blue". ... What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?[1]

In other words, Jackson's Mary is a scientist who knows everything there is to know about the science of color, but has never experienced color. The question that Jackson raises is: once she experiences color, does she learn anything new? Jackson claims that she does.

https://en.wikipedia.org/wiki/Knowledge_argument
The subjective (Sense of Self and Qualia) cannot be explained objectively through observed facts, and the objective (observed neural activities) cannot be explained subjectively. They are simply two different things. However, the very boundary that separates the subjective from the objective should be able to encompass both the subjective AND the objective—just because it separates them into two.

It is my belief that the said boundary can be known once one embraces the fact that subjectivity is possible due to objectivity creating the object, and that objectivity is possible because subjectivity creates the object.

So what we could do in order to explain subjectivity objectively is to embrace this inherent boundary between the subjectivity of our self and the objectivity of the brain. We simply do this by embracing our own subjective and objective existence (and also describing our objective existence because it is describable)—because that is the very act that “creates” the said boundary of our subjective existences.

The boundaries of an object are created by the object itself, while the object itself is also created by its boundaries. Both of these events happen at the same time for the existence of the object to happen, as one cannot happen yet without the other.

Think of it this way. When an object exists, it is both the object as perceived from the object’s perspective, and the object as perceived from outside of the object, that allows for the object’s physical existence to be possible. And what does it mean for the object to see itself from its perspective? It means the same thing as the brain seeing itself from its perspective a.k.a self-awareness.
From your post -

It is my belief that the said boundary can be known once one embraces the fact that subjectivity is possible due to objectivity creating the object, and that objectivity is possible because subjectivity creates the object.

I don't understand what you actually mean by this?  Can you spell it out?  Then the rest of your post might make sense to me.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: January 5th, 2023, 5:48 am
by Gertie
Pattern-chaser wrote: January 1st, 2023, 9:59 am
Gertie wrote: December 30th, 2022, 11:19 am There are some obvious ways to go imo. One is to programme a computer to replicate neural connectivity; the Human Connectome Project is working on mapping the human brain, but as it's the most complex thing in the universe we've encountered, it's an unimaginably massive task. It potentially offers the Black Mirror scenario of downloading your own consciousness and never dying as such. If that pans out, then presumably the mind would have the same human traits like altruism and selfishness. And potentially if you artificially augmented altruistic connectivity in there you'd get a more altruistic mind. If you bumped up the intelligence quota you'd get a more intelligent mind, etc. It would be like a designer baby, but to get an AI smarter or more altruistic than possible for a human, you'd be tinkering with the circuitry in unpredictable ways, because of the incredibly complex interactivity of the circuitry.
I think you're getting so far ahead of our current knowledge that this might be difficult or impossible. We don't know how the brain produces the mind — if it does? — never mind how to influence a particular attribute by adjusting the connectivity of a brain! I suspect that 'connectivity' is the mind, but that's only my guess...
From the above -
I think you're getting so far ahead of our current knowledge that this might be difficult or impossible. We don't know how the brain produces the mind — if it does? — never mind how to influence a particular attribute by adjusting the connectivity of a brain!
Maybe. But if we could eventually map out and copy the human connectome and identify the bits which add up to aspects of altruism (neurological bonding mechanisms, mirror neurons, whatever), there's no in-principle problem I see with bumping up those bits via programming.

 I suspect that 'connectivity' is the mind, but that's only my guess...

Connectivity as a process (eg brains/matter in motion) looks to be at least a part of it, because dead brains don't display recognisable signs of experience.  But some panpsychists might say experience is woven into the substance of matter and might still be there as something we don't recognise in dead brains.  Likewise rocks, particles, everything.  Patterns of connectivity might be what creates the particulars of  ''what it is like-ness'' of experience, and we only recognise ones similar to ours. (That all neurons seem to be much the same perhaps points to patterns of connectivity being associated with particular 'flavours' of 'what it is like' experience.)


Still, connectivity has to be connectivity of something (a substrate) I think - agreed? The question here is whether the substrate-something needs particular properties which brains have and silicon doesn't. Panpsychists might say all substrates, all matter, has such properties, has experience 'built in'. And mimicking human brain connectivity will simply create experience similar enough to our own to be recognisable to us. Physicalist substance monists might say mimicking the patterns of connectivity of any substrate can produce experience, or that there's some necessary property in brains which will be lost (Penrose and Hameroff's Orch OR for example). Then substance dualists and idealists will have different takes too. Who knows...

Re: Will Sentient A.I be more altruistic than selfish?

Posted: January 5th, 2023, 6:08 am
by GrayArea
Gertie wrote: January 5th, 2023, 5:05 am
GrayArea wrote: December 30th, 2022, 7:35 pm
Gertie wrote: December 30th, 2022, 12:55 pm
Interesting post GA. It's intriguing to me that neurons appear to be much like each other, whether they're part of the optical subsystem, hearing, pain or anything else. Which suggests the patterns of interactions (or their patterned effects) have a bearing on the 'flavour' of experience, and perhaps those patterns replicated in any substrate would have similar results. On the other hand, cells interact in all sorts of complex ways in our body, so what is it about brains specifically which manifests correlates of consciousness which are 'globally' manifested as a specific, discrete self?

And there remains the issue of the explanatory gap between the physical processes which are apparently physically fully causally explained, and this extra 'what it is like' experiential state. The Mary's Room thought experiment points out that the most detailed physical explanation doesn't capture 'what it is like' to see red for example -

The thought experiment was originally proposed by Frank Jackson as follows:

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like "red", "blue", and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence "The sky is blue". ... What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?[1]

In other words, Jackson's Mary is a scientist who knows everything there is to know about the science of color, but has never experienced color. The question that Jackson raises is: once she experiences color, does she learn anything new? Jackson claims that she does.

https://en.wikipedia.org/wiki/Knowledge_argument
The subjective (Sense of Self and Qualia) cannot be explained objectively through observed facts, and the objective (observed neural activities) cannot be explained subjectively. They are simply two different things. However, the very boundary that separates the subjective from the objective should be able to encompass both the subjective AND the objective—just because it separates them into two.

It is my belief that the said boundary can be known once one embraces the fact that subjectivity is possible due to objectivity creating the object, and that objectivity is possible because subjectivity creates the object.

So what we could do in order to explain subjectivity objectively is to embrace this inherent boundary between the subjectivity of our self and the objectivity of the brain. We simply do this by embracing our own subjective and objective existence (and also describing our objective existence because it is describable)—because that is the very act that “creates” the said boundary of our subjective existences.

The boundaries of an object are created by the object itself, while the object itself is also created by its boundaries. Both of these events happen at the same time for the existence of the object to happen, as one cannot happen yet without the other.

Think of it this way. When an object exists, it is both the object as perceived from the object’s perspective, and the object as perceived from outside of the object, that allows for the object’s physical existence to be possible. And what does it mean for the object to see itself from its perspective? It means the same thing as the brain seeing itself from its perspective a.k.a self-awareness.
From your post -

It is my belief that the said boundary can be known once one embraces the fact that subjectivity is possible due to objectivity creating the object, and that objectivity is possible because subjectivity creates the object.

I don't understand what you actually mean by this?  Can you spell it out?  Then the rest of your post might make sense to me.
Sorry if my words didn't do the idea justice. Forgive me if I cannot explain it in a more comprehensible way, but first of all, to explain the context behind that sentence: what I am saying is that in order to describe the subjective using objective descriptive methods, we will have to simply let our subjective existence describe its own subjectivity, and let our objective existence describe its own objectivity (while our subjective existence perceives that description through words that we “objectively write” and such).

But since we are both subjective and objective beings (any object can be considered both as ITSELF and as a PART OF THE WORLD)—as all objects are—then by doing so, the boundary between the two, which both unifies and separates them, will automatically belong to our own existence (thus we automatically "embrace" that boundary as a part of our existence) while doing its own job of describing the combination of both—so that we may have successfully unified the subjective and the objective description of the external world and its aspects, such as lightwaves or sound.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: January 5th, 2023, 10:55 am
by Pattern-chaser
Pattern-chaser wrote: January 1st, 2023, 9:59 am I think you're getting so far ahead of our current knowledge that this might be difficult or impossible. We don't know how the brain produces the mind — if it does? — never mind how to influence a particular attribute by adjusting the connectivity of a brain!
Gertie wrote: January 5th, 2023, 5:48 am Maybe. But if we could eventually map out and copy the human connectome and identify the bits which add up to aspects of altruism (neurological bonding mechanisms, mirror neurons, whatever), there's no in-principle problem I see with bumping up those bits via programming.
...
Gertie wrote: January 5th, 2023, 5:48 am Connectivity as a process (eg brains/matter in motion) looks to be at least a part of it, because dead brains don't display recognisable signs of experience.  But some panpsychists might say experience is woven into the substance of matter and might still be there as something we don't recognise in dead brains.  Likewise rocks, particles, everything.  Patterns of connectivity might be what creates the particulars of  ''what it is like-ness'' of experience, and we only recognise ones similar to ours. (That all neurons seem to be much the same perhaps points to patterns of connectivity being associated with particular 'flavours' of 'what it is like' experience.)


Still, connectivity has to be connectivity of something (a substrate) I think - agreed? The question here is whether the substrate-something needs particular properties which brains have and silicon doesn't. Panpsychists might say all substrates, all matter, has such properties, has experience 'built in'. And mimicking human brain connectivity will simply create experience similar enough to our own to be recognisable to us. Physicalist substance monists might say mimicking the patterns of connectivity of any substrate can produce experience, or that there's some necessary property in brains which will be lost (Penrose and Hameroff's Orch OR for example). Then substance dualists and idealists will have different takes too. Who knows...
All that I know of networks — not as much as I'd like! — tells me that it isn't possible to narrow down network function to specific nodes or connections. To a great extent, it is the whole network that has a given effect. I don't think it would be possible to identify the 'altruism node' (or connection), because there is no such node. Altruism is probably (i.e. I'm guessing!) an attribute whose functionality is distributed throughout the network. I wish I could be sure about this, and a quick search of the interweb hasn't helped. Links to anything that actually describes how networks work, how the nodes and their connections result in certain processes taking place, seem thin on the ground.

I learned what I learned in practice, from actual networks of computer equipment, and the interweb itself (to some extent). Yes, I picked up a bit of 'proper' theory here and there, from articles in trade journals and the like, but that's about it. I suspect that maybe 'network experts' are also thin on the ground?

As I said before, I think that network function is largely dependent on the pattern of connections that the net exhibits. The same nodes, connected differently, could result in surprisingly-different network function(s), I believe. 🤔
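As a tiny illustration of that last point (a made-up three-node network, nothing more), the same nodes wired in a different order compute a different function, which is why the behaviour lives in the connection pattern rather than in any single node:

```python
# Same three nodes, same weights, different wiring -> different function.
# Purely illustrative: there is no "altruism node"; the overall behaviour
# falls out of how the nodes are connected.

def node_a(x): return x + 1
def node_b(x): return x * 2
def node_c(x): return -x

def run(wiring, x):
    for node in wiring:
        x = node(x)
    return x

print(run([node_a, node_b, node_c], 3))  # ((3+1)*2) negated -> -8
print(run([node_c, node_b, node_a], 3))  # ((-3)*2)+1        -> -5
```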