Count Lucanor wrote: ↑October 17th, 2024, 12:10 pm
Sy Borg wrote: ↑October 17th, 2024, 12:00 am
Your first error is in thinking that nature can do anything, but that humans – whom you do not class as part of nature – have additional limitations.
That's not what I have said. Humans are obviously a product of nature, but it is by their own nature that humans have created culture, a sort of second nature specific to the species, one that differs from the spontaneous, undirected processes of the rest of the natural kingdom. It allows them to transform the environment to pursue interests that surpass the instinctive level, and even to willfully "reengineer" themselves, which explains why we have a human history full of possibilities, but not a tiger history (at least not an interesting one), because tigers will always do the same things their nature dictates. Is literature natural? In a broad sense, yes, but actually, no. So it is with technology.
Of course technology is natural. It’s emergent nature.
The rest of your post supports the position that AI will re-engineer itself. First, humans will program it to re-engineer (that is, improve) itself. From there, the re-engineered AI will continue to re-engineer itself as necessary.
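Purely to make that loop concrete, here is a minimal sketch in Python of the propose-test-keep cycle being described. Everything in it (the score function, the mutate step, the names) is hypothetical and illustrative; no claim is made that any current system works this way.

import random

def score(params):
    # Hypothetical benchmark standing in for whatever evaluation
    # the designers choose as the measure of "better".
    return -sum((p - 3.0) ** 2 for p in params)

def mutate(params):
    # The system proposes a small random change to its own design.
    return [p + random.gauss(0, 0.1) for p in params]

current = [0.0, 0.0, 0.0]  # humans supply the first version

# From here the loop runs without human intervention:
# propose a variant, test it, keep it if it scores better.
for _ in range(10_000):
    candidate = mutate(current)
    if score(candidate) > score(current):
        current = candidate  # the "re-engineered" version carries on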
Count Lucanor wrote: ↑October 17th, 2024, 12:10 pm
Sy Borg wrote: ↑October 17th, 2024, 12:00 am
Humans and their creations ARE nature, as much so as trees, mountains, streams and kangaroos. All of these (and us) are the product of the Pale Blue Dot’s tendency towards equilibrium, a push and pull between entropy and negentropy. Humans, like all species, need to be self-interested to survive, which requires the mental (or reflexive) separation of self and environment. It is that mental separation that leads us to deem things either “natural” or “artificial”, not because these divisions are real. We may see ourselves as separate from the environment, but that subjective impression does not reflect the physical reality.
But putting everything in a big sack called "nature" solves nothing, explains nothing. There is no single overarching process of nature to which we can resort to explain what is before our eyes, only the many processes of nature, surely all intertwined, but each with its own dynamics and emergent properties. Nature produces many systems, and they all work differently when you get down to what's specific to each domain.
Putting everything in a big sack called "nature" counters the common misconception (promoted by you) that humans are not part of nature. This is an error in thinking, caused by the survival-based need to draw a line between oneself and one’s environment. Humanity as a whole does this too. The separation is only true as a subjective impression. In objective reality, AI is as much a part of nature as fungi, volcanoes, and clouds. The notion you are groping for is that human technology is an emergent property of nature, and operates by its domain’s rules rather than general rules.
Still, on a more prosaic level, AI is capable of performing an increasing range of jobs. There is no reason to think that there will be a limit to the kinds of jobs AI will be able to do. It’s a matter of time. AI might or might not be able to do the most complex jobs in our lifetimes, but history does not stop when we die. AI will continue developing, and there are many potentials.
If sentience is a useful property, then in time AI will probably become sentient. Presumably, sentience would emerge when AI is “in the field”, accumulating experience interacting with its environment.
Count Lucanor wrote: ↑October 17th, 2024, 12:10 pm
Sy Borg wrote: ↑October 17th, 2024, 12:00 am
AI wrote: The advent of self-replicating AI presents numerous potential applications across various sectors. Here’s a detailed exploration of these uses:
1. Tailored AI Solutions for Specific Tasks
Self-replicating AI can create specialized models that are tailored to perform specific tasks in various fields such as healthcare, engineering, and environmental monitoring. For instance, in healthcare, AI could autonomously design models that analyze patient data to provide personalized treatment plans or predict disease outbreaks based on historical data.
2. Enhanced Learning and Adaptation
These AI systems can learn from the successes and failures of their predecessors, allowing them to evolve rapidly. This capability can lead to more efficient development cycles where new models are continuously improved upon without human intervention. For example, in climate science, self-replicating AI could develop models that adapt to changing environmental conditions more swiftly than traditional methods.
3. Automation of Complex Processes
Self-replicating AI could automate complex processes across industries. In manufacturing, for example, it could design and deploy smaller robots capable of performing specific tasks on assembly lines without human oversight. This would not only increase efficiency but also reduce labor costs.
4. Environmental Monitoring and Conservation Efforts
In conservation efforts, self-replicating AI could be deployed to monitor endangered species or track environmental changes autonomously. These systems could analyze vast amounts of data from sensors placed in natural habitats and adjust their monitoring strategies based on real-time findings.
5. Disaster Response and Management
Self-replicating AI could play a crucial role in disaster response by creating models that predict natural disasters or assess damage after an event occurs. These systems could autonomously gather data from affected areas and deploy smaller drones or robots for search-and-rescue missions.
6. Research and Development Acceleration
In research settings, self-replicating AI can significantly accelerate the pace of innovation by generating new hypotheses or experimental designs based on existing knowledge without requiring human input. This capability can lead to breakthroughs in various scientific fields by exploring avenues that may not have been considered by researchers.
7. Ethical Considerations and Governance Models
As self-replicating AI evolves, it will be essential to develop ethical frameworks and governance models to ensure responsible use. This includes establishing guidelines for transparency, accountability, and bias mitigation in the autonomous design processes.
In summary, the potential uses for self-replicating AI span a wide range of applications that promise enhanced efficiency, adaptability, and innovation across multiple sectors while also necessitating careful consideration of ethical implications.
It's hard not to be bored by the inane, impersonal outputs of text engines. They can be a useful tool, and I use them too, but they are not up to the challenge for certain tasks. I had to prohibit my students from using them, for no other reason than that they ended up spreading lies and misinformation, when not actual nonsense. It is also very easy to fool and manipulate them with mere text prompts, to get them to say whatever the user wants. Anyway, I guess the language engine had to resort to so many "cans" and "coulds" (I counted 13 in just 7 bullet points) given the complete absence of concrete evidence of anything actually being implemented. At this moment, "research into self-replication of AI" looks more like some tech guys speaking into their echo chamber about their dreams of a singularity.
Yes, AI is boring. Then again, your stick-in-the-mud truisms are boring too. I, too, can induce snores. None of it matters.
When speaking about the future, the words “could” and “can” are obviously more appropriate than “is” and “will”. That's a tick for AI, not a cross.
Re: the point I was making with the AI quote - that there are commercial and political pressures driving research into AI self-replication and self-improvement. Resources are going into these areas. These are not blue-sky projects like interstellar travel, devising a TOE, or explaining the birth of the universe. Self-replication and self-improvement are clearly realisable potentials for AI systems.
Count Lucanor wrote: ↑October 17th, 2024, 12:10 pm
Sy Borg wrote: ↑October 17th, 2024, 12:00 am
It is clear that self-improving AI would be extremely useful too.
No, it's not clear at all. A text engine saying it doesn't make it true.
Glib.
If AI can create better AIs than humans can, then that is a competitive advantage.
Count Lucanor wrote: ↑October 17th, 2024, 12:10 pm
Sy Borg wrote: ↑October 17th, 2024, 12:00 am
It’s just a matter of putting the two technologies together – self-replication and self-improvement. This would be invaluable for space exploration and mining.
Two technologies that don't exist.
Sy Borg wrote: ↑October 17th, 2024, 12:00 am
Once these units are in the field and allowed to evolve as they see fit, they will be reacting to environmental and logistical pressures. If this occurs over deep time, even after humans themselves have gone extinct, it’s hard to imagine that they will always remain non-sentient.
That's just what the tech guys dream of, but they still have to figure out how it is done, and nobody knows if it can be done.
Of course, no one knows if it can be done. That’s why the question was posed.
Resorting to ad homs against “tech guys” only weakens your position, suggesting that you need to resort to gimmicks.
How complex do entities and interactions need to be for sentience to be necessary?
Count Lucanor wrote: ↑October 17th, 2024, 12:10 pm
Sy Borg wrote: ↑October 17th, 2024, 12:00 am
Remember, sentience is not something that suddenly happens. Consider your own journey from embryo to adult. You can’t remember becoming sentient. There would have been a gradual dawning, like a light slowly illuminating a world that had previously been pitch black.
Generally, attributes that are useful to persistence (accidentally or otherwise) will emerge. Like I say, AI might choose not to be sentient, seeing it as a handicap. While sentience ostensibly serves biology well, hence the plethora of sentient species, it may not be as useful to entities that calculate probabilities a million times more quickly than humans do.
Emotions emerged because organisms needed to be able to react to events more quickly than they could think them through. To that end, emotions are like subroutines that are called when certain variables appear.
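As a toy illustration of that analogy only (a sketch of the idea, not a claim about how brains or any AI are actually built), the "subroutine" picture looks like this: a fast, pre-compiled handler fires when certain variables appear, short-circuiting slower deliberation.

def deliberate(world):
    # Slow, general-purpose reasoning: expensive but thorough.
    return "weigh_options"

def fear_reflex(world):
    # Fast, pre-compiled response: flee first, think later.
    return "flee"

def act(world):
    # The "emotion" subroutine is called when certain variables
    # appear, bypassing deliberation entirely.
    if world.get("predator_nearby"):
        return fear_reflex(world)
    return deliberate(world)

print(act({"predator_nearby": True}))  # -> flee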
AI, on the other hand, can calculate the steps needed to meet challenges quickly enough not to need the sentience “cheat code”.
Then again, that’s assuming a human standard of engagement. If self-replicating AIs are in a situation where they compete with others for resources or information, the ones that act the most quickly and accurately will proliferate more than their competition.
If such entities become sentient, with new subroutines designed to speed up processing in order to out-compete other entities, it’s possible that their kind of sentience would be too fast for us to detect it as sentience. To us, their rapid-fire calculations leading to interactions (or non-interactions) would seem to be indecipherable activities.
Lots of wishful thinking in hypothetical scenarios. Let's cut to the chase: I'm still waiting for an answer to my question. If there is a property called sentience that applies to both non-living and living beings, what is it? What are its properties? The only ones you have mentioned are the properties of biological sentience, which you had said was not the case for this new sentience.
Again, repeating this “wishful thinking” gimmick is not convincing debating. It suggests that you are just brushing off inconvenient points rather than exerting the mental energy needed to engage.
As for the “what is sentience” question – that is basically what I was addressing in the post above. In brief, it is internality. It feels like something to be sentient. That is the difference between us and philosophical zombies, the latter being the current state of AI.
I say that AI will probably not always be philosophical zombies.
You say AI will always be philosophical zombies.
That’s the disagreement in a nutshell.
It’s actually up to you to explain why carbon-based, water-bearing forms can be the only possible housing for sentience.