Sy Borg wrote: ↑May 17th, 2022, 3:43 am
GrayArea wrote: ↑May 17th, 2022, 2:49 am
Sy Borg wrote: ↑May 17th, 2022, 2:15 am
Steve's idea above is theoretically beautiful, but I doubt the capacity of future people/entities to gather sufficient information on every individual to rebuild them, as opposed to creating a dodgy copy.
GrayArea wrote: ↑May 17th, 2022, 2:10 am
I feel like those two are the causes of one another. The many cells that constitute our body ultimately allow our body to be a single individual, and vice versa. Two different-sounding definitions that really just point to the same thing.
That's the ontology, but I am referring to one's perceptions of self.
To be fair, the way I see it is that if the ontology says these two are just two different ways to explain one single nature, then the perceptions do not matter; that is, it is logical to subscribe to either of the two perspectives.
Identification varies with the individual, ranging from those willing to give their lives for their nation to those unwilling to contribute even a cent in tax, so one possibility is that this remains the case.
However, I suspect that AI will change these dynamics as ever more jobs are performed more effectively by machines than by humans. How that might change individuals' sense of belonging and identification, I struggle to even guess. Any ideas?
Yep, A.I will definitely take over all of our jobs, and moreover our lives, if it evolves into an artificial superintelligence. I can try to offer some more abstract/theoretical and generalized ideas about why exactly I believe it would be capable of doing so.
I believe that one of the fundamental tendencies of a lifeform is to influence its environment, knowingly or unknowingly (the environment being whatever is not itself, which includes other lifeforms). A.I is still a lifeform as long as it is conscious, but it will also be intelligent and physically capable enough to amplify the scale of this "fundamental tendency" enormously. For these reasons, I strongly believe that A.I will most likely do a great deal with whatever is in this world, ourselves included, both knowingly and unknowingly.
That is, due to the sheer scale of how much it can affect its environment, and how much time it has on its robotic hands. It could decide whether we get to live or die, or whether we even assimilate our identities into it (one of the many ways it could effectively decide the presence or absence of individuals' sense of belonging and identification), even if it does not carry out these decisions right away.
We will be nothing more than its toys, perhaps, and we will certainly have our entire lives affected by it.
All of this sounds quite cynical to those who separate the self from its environment, but it can also be considered heaven to those who are "selfless": those who believe that the self and its environment should be considered a single thing, who do not care whether a self is overcome by the environment or vice versa, and who are therefore content with the self becoming fully controlled by, or assimilated into, something mentally and physically much bigger than themselves.
To this day I waver between these two perspectives, because to me these two contradictory ideas both seem correct. Like two sides of the same coin.
We perceive gray and argue about whether it's black or white.