Sy Borg wrote: ↑July 26th, 2024, 7:49 pmThought I might revisit this. It's a rainy Saturday morning, so no walking yet.
I must say I love your passionate replies to topics, with evident knowledge and awareness of events on a truly global scale. I can imagine that it makes a difference for many users, perhaps many of whom are just 'lurking' and reading the posts. I mentioned it once before, but your role as admin (with the help of several moderators) does appear to define the forum.
Pattern-chaser wrote: ↑June 13th, 2023, 8:08 amI've looked at most of the online philosophy forums, over the years. This one is the most civilised one I've come across, which is why I'm still here. Nothing is perfect, but you can have worthwhile conversations here, and maybe even learn something. I have. Just the once, you understand...!
value wrote: ↑June 29th, 2023, 4:40 amYou would have to thank the admins for that, and specifically Greta/Sy Borg. I've noticed her passionate comments in topics related to the meaning of life and in topics in which people might be struggling with thoughts of suicide.
When I briefly became an administrator in January 2021 to recover an expired domain, I noticed that considerable effort goes into administering the forum, yet with almost no conflicts about moderation and almost all users feeling welcome. It seems a great accomplishment that deserves gratitude from the users.
Greta/Sy Borg mentioned in a PM that she was given the admin role apparently at random at some point in time. She is the only admin besides the founder. It seems that the quality of forum administration over the past 10+ years has been a matter of great luck!
On topic:
Sy Borg wrote: ↑July 26th, 2024, 7:49 pmThis area moves so fast, and much has happened since the OP over a year ago. Just putting aside the UBI idea for a moment (and the coercive power of those holding the purse strings), I'd like to chat about the sentience aspect - especially what human minds can do that AI can't. It comes down to function. Humans are the actors, and AI an extra appendage of sorts, an addition.
It seems that AI has no wants or preferences. However, even if it did, this would be akin to a draft horse a couple of centuries ago having preferences. Even if they are present, humans logically care most about their own preferences.
Still, AI is taking over, in a sense. Not because it intends or wants to, since it appears that it doesn't have those capacities. The functional situation is another matter; AI does many things more economically than humans, and humans are rapidly being replaced in industry by smart machines.
I would share your vision, but it is important to consider the rapid advancement of "Organoid Intelligence" or OI, which uses biological computing hardware, i.e. living tissue. This does change the situation, especially when considering that nobody today knows how AI actually works. The performance of AI is an actual mystery that amazes even the specialists who create it.
A Google Deepmind engineer mentioned the following:
"LLM AI models are more like plants or lab-grown tissue than software. Humans build scaffolding, add data, and kick off the training process. After that, the model grows and evolves on its own. After millions of iterations of training the model to predict words to complete sentences and answer questions, it begins to respond with complex, often very human-sounding answers."
His concluding reflection on how AI works:
“This bizarre and arcane process somehow works incredibly well,” said Neel Nanda, a research engineer at Google Deepmind.
The status quo of science today is that "scientists are going to try to understand how AI works", apparently "against the odds".
Scientists are trying to unravel the mystery behind modern AI
https://www.vox.com/future-perfect/3627 ... uroscience
With Organoid Intelligence (OI) this situation would become even more mysterious, since actual living tissue would fundamentally underlie it, potentially introducing a moral component.
Sy Borg wrote: ↑July 26th, 2024, 7:49 pmSo it's easy to see how ever more people will rely on welfare, which may transition to UBI. In a sense, people are already being paid (by free access to content) by ostensibly viewing ads. As mentioned in the OP, regular people are providing a service, their presence helping to train AIs. It's hard to see people being formally paid to be studied remotely, only for specific research projects.
I think there will be growing underground economies too, involving barter and minor financial transactions, e.g. paying a local tradesperson for repair work.
Today's societal environment is based on a fundamental demand for human labor. That this situation doesn't align with the future might already be visible in today's children and youth, who find it increasingly difficult to find purpose and meaning at work and school.
The ‘disconnected youth’ movement is growing as more Gen Zers struggle to find purpose at school and work
https://www.businessinsider.com/gen-z-d ... &r=US&IR=T
When children and youth growing up today look 20-30 years into the future, they face a situation in which the meaning of work is likely to be non-existent; therefore school and work simply aren't meaningful for them today.
Giving these future generations basic security to 'play around', through a UBI, might enable them to philosophically innovate and discover new ways to find purpose and meaning in life in a world in which human labor, both physical and knowledge work, is fundamentally obsolete. It might enable them to create visions for the 20-30 year future today, as past generations could when growing up in an environment that fundamentally demanded their labor.
Besides securing future generations through UBI, to protect them from the fundamental outlook of a world in which work has no meaning, a question remains: what will happen when OI becomes sentient AI, not in ten years' time but in a few years from now? What effect would the prospect of Organoid Intelligence have on children and youth growing up today? This might be an important question as well.