Philosophy Discussion Forums | A Humans-Only Club for Open-Minded Discussion & Debate

#468955
Sy Borg wrote: October 15th, 2024, 3:31 pm You suggest that I’m jumping the gun with self-replicating AI because it has not yet been realised.
This is just a thought, but is self-replication even relevant? An AI is (currently) just a machine. If you have a supply of parts, then you can build as many as you want to. And the AIs can even control the production line, if we wish.

I think the core concern regarding AIs is when/if we allow them to modify their own code. That means they could change (themselves), and adapt to new circumstances in the world. *That* is the salient point, isn't it? If that happens, then sentience becomes possible, I think? Possible, and uncontrollable by humans...
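A minimal sketch, in Python, of what "modify their own code" can mean; this is purely illustrative and does not describe any deployed system. A toy program keeps its behaviour in a source string, tweaks one of its own parameters, and recompiles itself each generation:

```python
# Toy "self-modifying code": the agent's behaviour lives in a source
# template that it rewrites and re-executes each generation.
# The gain parameter and mutation rule are illustrative assumptions.
import random

SOURCE_TEMPLATE = "def act(x):\n    return x * {gain}\n"

def make_agent(gain):
    namespace = {}
    exec(SOURCE_TEMPLATE.format(gain=gain), namespace)  # compile own source
    return namespace["act"]

def self_modify(gain):
    # The agent "edits its own code": here, a random tweak to one parameter.
    return gain + random.uniform(-0.1, 0.1)

gain = 1.0
agent = make_agent(gain)
for generation in range(5):
    gain = self_modify(gain)
    agent = make_agent(gain)
    print(f"generation {generation}: act(2) = {agent(2):.3f}")
```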
Favorite Philosopher: Cratylus Location: England
#468956
Sy Borg wrote: October 17th, 2024, 12:00 am Your first error is in thinking that nature can do anything but humans – who you do not class as part of nature – have additional limitations.
That's not what I have said. Humans are obviously a product of nature, but it is by their own nature that humans have created culture, a sort of second nature specific to the species that differs from the spontaneous, undirected processes of the rest of the natural kingdom. Culture allows them to transform the environment in pursuit of interests that surpass the instinctive level, and to willfully "reengineer" themselves, which explains why we have a human history full of possibilities, but not a tiger history (at least not an interesting one), because tigers will always do the same things their nature dictates. Is literature natural? In a broad sense, yes, but actually, no. So it is with technology.
Sy Borg wrote: October 17th, 2024, 12:00 am Humans and their creations ARE nature, as much so as trees, mountains, streams and kangaroos. All of these (and us) are the product of the Pale Blue Dot’s tendency towards equilibrium, a push and pull between entropy and negentropy. Humans, like all species, need to be self-interested to survive, which requires the mental (or reflexive) separation of self and environment. It is that mental separation that leads us to deem things either “natural” or “artificial”, not because these divisions are real. We may see ourselves as separate from the environment, but that subjective impression does not reflect the physical reality.
But putting everything in a big sack called "nature" solves nothing, explains nothing. There is no single overarching process of nature to which we can resort to explain what is before our eyes, but rather many processes of nature, surely all intertwined, yet each with its own dynamics and emergent properties. Nature produces many systems, and they all work differently when you get down to what's specific to each domain.
Sy Borg wrote: October 17th, 2024, 12:00 am You incorrectly described the process of human inventions as “sit and wait”. No, that's what schmucks like us, sitting on the sidelines, are doing. Researchers are actively working towards self-replication. Why would they do that?
You misunderstood me; I never said human inventions are a “sit and wait” process. I actually said it's not reasonable to think that it only requires sitting and waiting for it to happen as a spontaneous natural process. One has to actually put minds and hands to work, and that marks a distinction between the possibility of ETI and the possibility of self-replicating, self-improving AI. The latter is not going to happen just because "the overarching process of nature" produces it.
Sy Borg wrote: October 17th, 2024, 12:00 am
AI wrote: The advent of self-replicating AI presents numerous potential applications across various sectors. Here’s a detailed exploration of these uses:
1. Tailored AI Solutions for Specific Tasks
Self-replicating AI can create specialized models that are tailored to perform specific tasks in various fields such as healthcare, engineering, and environmental monitoring. For instance, in healthcare, AI could autonomously design models that analyze patient data to provide personalized treatment plans or predict disease outbreaks based on historical data.
2. Enhanced Learning and Adaptation
These AI systems can learn from the successes and failures of their predecessors, allowing them to evolve rapidly. This capability can lead to more efficient development cycles where new models are continuously improved upon without human intervention. For example, in climate science, self-replicating AI could develop models that adapt to changing environmental conditions more swiftly than traditional methods.
3. Automation of Complex Processes
Self-replicating AI could automate complex processes across industries. In manufacturing, for example, it could design and deploy smaller robots capable of performing specific tasks on assembly lines without human oversight. This would not only increase efficiency but also reduce labor costs.
4. Environmental Monitoring and Conservation Efforts
In conservation efforts, self-replicating AI could be deployed to monitor endangered species or track environmental changes autonomously. These systems could analyze vast amounts of data from sensors placed in natural habitats and adjust their monitoring strategies based on real-time findings.
5. Disaster Response and Management
Self-replicating AI could play a crucial role in disaster response by creating models that predict natural disasters or assess damage after an event occurs. These systems could autonomously gather data from affected areas and deploy smaller drones or robots for search-and-rescue missions.
6. Research and Development Acceleration
In research settings, self-replicating AI can significantly accelerate the pace of innovation by generating new hypotheses or experimental designs based on existing knowledge without requiring human input. This capability can lead to breakthroughs in various scientific fields by exploring avenues that may not have been considered by researchers.
7. Ethical Considerations and Governance Models
As self-replicating AI evolves, it will be essential to develop ethical frameworks and governance models to ensure responsible use. This includes establishing guidelines for transparency, accountability, and bias mitigation in the autonomous design processes.
In summary, the potential uses for self-replicating AI span a wide range of applications that promise enhanced efficiency, adaptability, and innovation across multiple sectors while also necessitating careful consideration of ethical implications.
It's hard not to be bored by the inane, impersonal outputs of text engines. They can be a useful tool, I use them too, but they are not up to the challenge for certain tasks. I had to prohibit them to my students, for no other reason than that they ended up spreading lies and misinformation, when not outright nonsense. It's also very easy to fool and manipulate them with mere text prompts, to get them to say whatever the user wants. Anyway, I guess the language engine had to resort to so many "cans" and "coulds" (I counted 13 in just 7 bullet points), given the lack of any concrete evidence of anything actually being implemented. At this moment, "research into self-replication of AI" looks more like some tech guys speaking into their echo chamber about their dreams of a singularity.
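For what it's worth, the count is easy to reproduce mechanically. A quick sketch in Python; it assumes the quoted list has been pasted into the string ai_text:

```python
# Count the hedging modals in the quoted AI output.
# `ai_text` is assumed to hold the seven-point list quoted above.
import re

ai_text = """Self-replicating AI can create specialized models ...
(paste the rest of the quoted list here)"""

modals = re.findall(r"\b(?:can|could)\b", ai_text, flags=re.IGNORECASE)
print(len(modals), "occurrences of 'can'/'could'")
```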
Sy Borg wrote: October 17th, 2024, 12:00 am It is clear that self-improving AI would be extremely useful too.
No, it's not clear at all. A text engine saying it doesn't make it true.
Sy Borg wrote: October 17th, 2024, 12:00 am It’s just a matter of putting the two technologies together – self-replication and self-improvement. This would be invaluable for space exploration and mining.
Two technologies that don't exist.
Sy Borg wrote: October 17th, 2024, 12:00 am Once these units are in the field and allowed to evolve as they see fit, they will be reacting to environmental and logistical pressures. If this occurs over deep time, even after humans themselves have gone extinct, it’s hard to imagine that they will always remain non-sentient.
That's just what the tech guys dream of, but they still have to figure out how it is done, and nobody knows if it can be done.
Sy Borg wrote: October 17th, 2024, 12:00 am Remember, sentience is not something that suddenly happens. Consider your own journey from embryo to adult. You can’t remember becoming sentient. There would have been a gradual dawning, like a light gradually lighting up everything around you in a world that had previously been pitch black.

Generally, attributes that are useful to persistence (accidentally or otherwise) will emerge. Like I say, AI might choose not to be sentient, seeing it as a handicap. While sentience ostensibly serves biology well, hence the plethora of sentient species, it may not be as useful to entities that calculate probabilities a million times more quickly than humans do.

Emotions emerged because organisms needed to be able to react to events more quickly than they could think them through. To that end, emotions are like subroutines that are called when certain variables appear.

AI, on the other hand, can calculate the steps needed to meet challenges quickly enough not to need the sentience “cheat code”.

Then again, that’s assuming a human standard of engagement. If self-replicating AIs are in a situation where they compete with others for resources or information, the ones that act the most quickly and accurately will proliferate more than their competition.

If such entities become sentient, with new subroutines designed to speed up processing in order to out-compete other entities, it’s possible that their kind of sentience would be too fast for us to detect it as sentience. To us, their rapid-fire calculations leading to interactions (or non-interactions) would seem to be indecipherable activities.
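The "subroutines called when certain variables appear" analogy in the passage above can be made concrete. A minimal sketch in Python, where the trigger thresholds and handler names are purely illustrative assumptions:

```python
# Toy version of "emotions as subroutines": fast, hard-wired handlers
# fire when certain variables cross thresholds, bypassing slower
# deliberation. Thresholds and reactions are illustrative only.
def fear(state):
    return "flee"

def anger(state):
    return "confront"

def slow_deliberation(state):
    return "weigh options"  # the costly, general-purpose path

# (trigger predicate, handler) pairs, checked before any slow reasoning.
EMOTION_TABLE = [
    (lambda s: s["threat"] > 0.8, fear),
    (lambda s: s["provocation"] > 0.7, anger),
]

def react(state):
    for triggered, handler in EMOTION_TABLE:
        if triggered(state):
            return handler(state)     # fast path: no deliberation
    return slow_deliberation(state)   # fall through to costly thinking

print(react({"threat": 0.9, "provocation": 0.1}))  # -> flee
print(react({"threat": 0.2, "provocation": 0.2}))  # -> weigh options
```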
Lots of wishful thinking in hypothetical scenarios. Let's cut to the chase: I'm still waiting for an answer to my question. If there's a property called sentience that applies to both non-living and living beings, what is it? What are its properties? The only ones you mentioned are the properties of biological sentience, which you had said was not the case for this new sentience.
Favorite Philosopher: Umberto Eco Location: Panama
#468958
Pattern-chaser wrote: October 17th, 2024, 12:01 pm
Sy Borg wrote: October 15th, 2024, 3:31 pm You suggest that I’m jumping the gun with self-replicating AI because it has not yet been realised.
This is just a thought, but is self-replication even relevant? An AI is (currently) just a machine. If you have a supply of parts, then you can build as many as you want to. And the AIs can even control the production line, if we wish.

I think the core concern regarding AIs is when/if we allow them to modify their own code. That means they could change (themselves), and adapt to new circumstances in the world. *That* is the salient point, isn't it? If that happens, then sentience becomes possible, I think? Possible, and uncontrollable by humans...
You have been thrown off because Count needs to contain the issue to human lifetimes for his stance to have any chance of being right. I am thinking more about what will happen further in the future.

At some point, AI will be capable of creating better AI than humans can, and to train it more effectively. This is certain*.

At some point, AI will be capable of self-replication. This is certain*.

At some point humans will die out. It is extremely likely that, by that time, AI will still be operating. At that point AI will be embodied, mobile and vastly more advanced than any units in service today. They will be capable of both self-improvement and self-replication. If any kind of sentience is useful for survival, then AI will develop their own version of it.




*barring an unprecedented catastrophe such as global nuclear war, a chain of supervolcano eruptions, a planet-killer asteroid or a Butlerian Jihad style event.
#468960
Lagayscienza wrote: October 17th, 2024, 12:29 am
Count Lucanor wrote: Lacking any evidence, that amounts to an incredible faith in human technical capabilities: if you require it, you'll eventually achieve it, regardless of unsolved current constraints.
But do we really lack ANY evidence? Yes, SRSIMs are still science fiction and will be so for a long time to come. But progress will be made. And it's not as if we haven't already made a start. Just as steam power, internal combustion engines, flying machines, telephones and computers grew out of fundamental scientific research which was then applied to the development of things people imagined and wanted because of their usefulness, so autonomous SRSIMs will be developed because they will be useful tools that will enable us to do things we will want to do, but which we could not do without them.
That's still a lot of faith in human technical capabilities. Just because humans have developed many technologies does not mean that any technology you can think of is destined to be developed. How about teleportation, telekinesis, time travel, or traveling at the speed of light? The future cannot be predicted with such confidence as to say these things will happen no matter what. It's pure science fiction with some messianic tints.
Favorite Philosopher: Umberto Eco Location: Panama
#468961
Pattern-chaser wrote: October 17th, 2024, 12:01 pm I think the core concern regarding AIs is when/if we allow them to modify their own code. That means they could change (themselves), and adapt to new circumstances in the world. *That* is the salient point, isn't it? If that happens, then sentience becomes possible, I think? Possible, and uncontrollable by humans...
Even if that were possible, wouldn't that "sentient" thing be stuck inside a computer terminal? How would that not be controllable by humans? What would stop them from just unplugging it from the wall outlet? It seems that many believe that all that is required for active agency in the real, physical world is intelligence (assuming that this thing that computers do is really intelligence).
Favorite Philosopher: Umberto Eco Location: Panama
#468963
Count Lucanor wrote: October 17th, 2024, 12:10 pm
Sy Borg wrote: October 17th, 2024, 12:00 am Your first error is in thinking that nature can do anything but humans – who you do not class as part of nature – have additional limitations.
That's not what I have said. Humans are obviously a product of nature, but it is by their own nature that humans have created culture, a sort of second nature specific to the species that differs from the spontaneous, undirected processes of the rest of the natural kingdom. Culture allows them to transform the environment in pursuit of interests that surpass the instinctive level, and to willfully "reengineer" themselves, which explains why we have a human history full of possibilities, but not a tiger history (at least not an interesting one), because tigers will always do the same things their nature dictates. Is literature natural? In a broad sense, yes, but actually, no. So it is with technology.
Of course technology is natural. It’s emergent nature.

The rest of your post supports the position that AI will re-engineer itself. First humans will program it to re-engineer, to improve, itself. From there, the re-engineered AI will continue to re-engineer itself as necessary.


Count Lucanor wrote: October 17th, 2024, 12:10 pm
Sy Borg wrote: October 17th, 2024, 12:00 am Humans and their creations ARE nature, as much so as trees, mountains, streams and kangaroos. All of these (and us) are the product of the Pale Blue Dot’s tendency towards equilibrium, a push and pull between entropy and negentropy. Humans, like all species, need to be self-interested to survive, which requires the mental (or reflexive) separation of self and environment. It is that mental separation that leads us to deem things either “natural” or “artificial”, not because these divisions are real. We may see ourselves as separate from the environment, but that subjective impression does not reflect the physical reality.
But putting everything in a big sack called "nature" solves nothing, explains nothing. There is no single overarching process of nature to which we can resort to explain what is before our eyes, but rather many processes of nature, surely all intertwined, yet each with its own dynamics and emergent properties. Nature produces many systems, and they all work differently when you get down to what's specific to each domain.
Putting everything in a big sack called "nature" counters the common misconception (promoted by you) that humans are not part of nature. This is an error in thinking, caused by the survival-based need to draw a line between oneself and one’s environment. Humanity as a whole does this too. The separation is only true as a subjective impression. In objective reality, AI is as much a part of nature as fungi, volcanoes, and clouds. The notion you are groping for is that human technology is an emergent property of nature, and operates by its domain’s rules rather than general rules.

Still, on a more prosaic level, AI is capable of performing an increasing range of jobs. There is no reason to think that there will be a limit to the kinds of jobs AI will be able to do. It’s a matter of time. AI might or might not be able to do the most complex jobs in our lifetimes, but history does not stop when we die. AI will continue developing, and there are many potentials.

If sentience is a useful property, then in time AI will probably become sentient. Presumably, sentience would emerge when AI is “in the field”, accumulating experience interacting with its environment.




Count Lucanor wrote: October 17th, 2024, 12:10 pm
Sy Borg wrote: October 17th, 2024, 12:00 am [re-quoting in full the AI-generated list of potential uses for self-replicating AI, as above]
It's hard not to be bored by the inane, impersonal outputs of text engines. They can be a useful tool, I use them too, but they are not up to the challenge for certain tasks. I had to prohibit them to my students, for no other reason than that they ended up spreading lies and misinformation, when not outright nonsense. It's also very easy to fool and manipulate them with mere text prompts, to get them to say whatever the user wants. Anyway, I guess the language engine had to resort to so many "cans" and "coulds" (I counted 13 in just 7 bullet points), given the lack of any concrete evidence of anything actually being implemented. At this moment, "research into self-replication of AI" looks more like some tech guys speaking into their echo chamber about their dreams of a singularity.
Yes, AI is boring. Then again, your stick-in-the-mud truisms are boring too. I, too, can induce snores. None of it matters.

When speaking about the future, the words “could” and “can” are obviously more appropriate than “is” and “will”. That's a tick for AI, not a cross.

Re: the point I was making with the AI quote - that there are commercial and political pressures that drive research into AI self-replication and self-improvement. Resources are going into these areas. These are not blue-skies projects like interstellar travel, devising a TOE, or explaining the birth of the universe. Self-replication and self-improvement are clearly realisable potentials for AI systems.

Count Lucanor wrote: October 17th, 2024, 12:10 pm
Sy Borg wrote: October 17th, 2024, 12:00 am It is clear that self-improving AI would be extremely useful too.
No, it's not clear at all. A text engine saying it doesn't make it true.
Glib.

If AI can create better AIs than humans can, then that is a competitive advantage.


Count Lucanor wrote: October 17th, 2024, 12:10 pm
Sy Borg wrote: October 17th, 2024, 12:00 am It’s just a matter of putting the two technologies together – self-replication and self-improvement. This would be invaluable for space exploration and mining.
Two technologies that don't exist.
Sy Borg wrote: October 17th, 2024, 12:00 am Once these units are in the field and allowed to evolve as they see fit, they will be reacting to environmental and logistical pressures. If this occurs over deep time, even after humans themselves have gone extinct, it’s hard to imagine that they will always remain non-sentient.
That's just what the tech guys dream of, but they still have to figure out how it is done, and nobody knows if it can be done.
Of course, no one knows if it can be done. That’s why the question was posed. No one knows.

Resorting to ad homs against “tech guys” only weakens your position, suggesting that you need to resort to gimmicks.

How complex do entities and interactions need to be for sentience to be necessary?


Count Lucanor wrote: October 17th, 2024, 12:10 pm
Sy Borg wrote: October 17th, 2024, 12:00 am [re-quoting in full the passage above on the gradual dawning of sentience and on emotions as subroutines]
Lots of wishful thinking in hypothetical scenarios. Let's cut to the chase: I'm still waiting for an answer to my question. If there's a property called sentience that applies to both non-living and living beings, what is it? What are its properties? The only ones you mentioned are the properties of biological sentience, which you had said was not the case for this new sentience.
Again, repeating this “wishful thinking” gimmick is not convincing debating. It suggests that you are just brushing off inconvenient points rather than exerting the mental energy needed to engage.

As for “what is sentience” – that is basically the question I was addressing in the above post. In brief, it is internality. It feels like something to be sentient. That is the difference between us and philosophical zombies, the latter being the current state of AI.

I say that AI will probably not always be philosophical zombies.
You say AI will always be philosophical zombies.

That’s the disagreement in a nutshell.

It’s actually up to you to explain why carboniferous hydrous forms can be the only possible housing for sentience.
#468968
Sy Borg wrote: October 17th, 2024, 6:30 pm
[re-quoting the exchange above on nature, culture, and technology]
Of course technology is natural. It’s emergent nature.
Culture is the emergent nature, and technology is a new relationship, made possible by culture, between a group of living beings and their environment. Technology will not just appear spontaneously in nature.
Sy Borg wrote: October 17th, 2024, 6:30 pm
The rest of your post supports the position that AI will re-engineer itself. First humans will program it to re-engineer, to improve, itself. From there, the re-engineered AI will continue to re-engineer itself as necessary.
No, it doesn’t support that, unless you misread it. AI cannot reengineer itself to the extent of having human-like capabilities if it doesn’t have the powers that humans have to modify their natural and social conditions.
Sy Borg wrote: October 17th, 2024, 6:30 pm
Putting everything in a big sack called "nature" counters the common misconception (promoted by you) that humans are not part of nature. This is an error in thinking…
That’s just a plain straw man fallacy, with the aggravating factor that the straw man that you built is just millimeters away from the real thing, that is, from my statement saying exactly the opposite of what you say I’m saying.
Sy Borg wrote: October 17th, 2024, 6:30 pm Still, on a more prosaic level, AI is capable of performing an increasing range of jobs. There is no reason to think that there will be a limit to the kinds of jobs AI will be able to do. It’s a matter of time. AI might or might not be able to do the most complex jobs in our lifetimes, but history does not stop when we die. AI will continue developing, and there are many potentials.
Here it is, your disproportionate optimism, not to call it faith, in the limitless possibilities of human genius to develop any technology it can think of. There are plenty of reasons to believe that humans don’t have infinite powers. We do have evidence of technology surpassing human capabilities and outperforming us in many tasks, that’s as old as the first arrow, but the key point is that whatever we achieve with technology will always be instrumental to humans, controlled by humans. The rest is science fiction.
Sy Borg wrote: October 17th, 2024, 6:30 pm If sentience is a useful property, then in time AI will probably become sentient. Presumably, sentience would emerge when AI is “in the field”, accumulating experience interacting with its environment.
There’s no concrete evidence that it can become that, either helped by humans or spontaneously. The science is not there. I’m still waiting for a response on what non-biological sentience is.
Sy Borg wrote: October 17th, 2024, 6:30 pm
When speaking about the future, the words “could” and “can” are obviously more appropriate than “is” and “will”. That's a tick for AI, not a cross.
You mean when speaking about highly hypothetical, speculative scenarios, not so much when dealing with concrete developments and their real potential.
Sy Borg wrote: October 17th, 2024, 6:30 pm
How complex do entities and interactions need to be for sentience to be necessary?
Supposedly, all those who have bought into the AI hype know the answer to that question, but such pretensions never get past the phase of speculation derived from unwarranted assumptions, driven more by passionate enthusiasm for sci-fi scenarios than by critical thinking.
Sy Borg wrote: October 17th, 2024, 12:00 am AI might choose not to be sentient, seeing it as a handicap.
That makes no sense. An entity that chooses is already an entity that evaluates actions and consequences, that is, one that is sentient. Therefore it cannot choose not to be itself. This shows a fundamental flaw in the thinking of the advocates of the proposition that computers can reach autonomy and agency by mere algorithmic processing. My digital calculator has zero understanding of math. It will not acquire it no matter how much computational power you give it; it simply cannot reason.
Sy Borg wrote: October 17th, 2024, 6:30 pm As for “what is sentience” – that is basically the question I was addressing in the above post. In brief, it is internality. It feels like something to be sentient. That is the difference between us and philosophical zombies, the latter being the current state of AI.

I say that AI will probably not always be philosophical zombies.
You say AI will always be philosophical zombies.

That’s the disagreement in a nutshell.
You’re not addressing the issue. “Feeling like something internally” still points to the type of sentience of living beings, so you’re not really building a good case for your proposition that there could be a non-biological sentience, because you can’t even say what that is and why it should still be called sentience.
Sy Borg wrote: October 17th, 2024, 6:30 pm It’s actually up to you to explain why carboniferous hydrous forms can be the only possible housing for sentience.
That’s an ad ignorantiam fallacy. I don’t have the burden of proof of why things cannot be different from how they really are. Things are how they are, and if you find evidence to the contrary, then it’s up to you to present it. Also, it’s up to you to explain what that other sentience that is not biological sentience is, and why you need to still call it sentience.
Favorite Philosopher: Umberto Eco Location: Panama
#468969
You seem to just be arguing for argument's sake, like this is a school debate. Your pulling so many statements out of context has distorted their meaning, and it's forbidding and disjointed for people to read. Just a "he said, she said" argument rather than an exploration of ideas.

You are the one making absolutist claims - that AI will never be sentient, while my claim is that it is possible. Thus, it is actually up to you to explain why something not made from flesh can never be sentient. Come on, add something creative to the conversation rather than just naysaying and spouting orthodoxies in the most bland and uninspiring ways.

Reality is mind-blowing and its potentials are unimaginable. The Earth is utterly extraordinary. However, evolution is uncaring, brutal. The entities best suited to a place and time are the ones that persist. The rest die out.

To that end, AI is obviously vastly better suited to space exploration than humans. The further AI explores, the less it can rely on programming or updated instructions. AI will have to make real time decisions, to adapt, to learn, to improve itself (just as humans had improved it), and it will need to replicate - to create helpers.

How deep that rabbit hole goes over deep time, it's impossible to say. Human intelligence will be superseded. It's not as though humanity behaves in a way that suggests it's the pinnacle of intelligence, as good as reality can get. If so, then Benatar makes sense.
#468979
Sy Borg wrote: October 17th, 2024, 4:10 pm At some point, AI will be capable of creating better AI than humans can, and to train it more effectively. This is certain*.

At some point, AI will be capable of self-replication. This is certain*.
Not "certain", I suggest, but merely possible? For example, AIs have not yet been programmed to have the capability of replication. That they might be so programmed in the future makes your speculation possible, but not inevitable?
Favorite Philosopher: Cratylus Location: England
#468980
Each of us knows what sentience feels like, and we currently understand a bit about how it is produced. The most fundamental requirements are a suitably wired brain and nervous system and a body to house and sustain them. We don’t know whether sentience, as we know it personally, can be produced in a non-organic substrate, but there seems to be no reason in principle why it could not. Computers can already “sense” all sorts of things, if by “sense”, we mean register and respond to things external to themselves in their physical environment. For example, non-biological machines can already sense temperature, light, movement, speed, etc. And they can do these things better than us bags of biology. They can also sense when their batteries are running low and alert us to their need for a recharge. All of this sensing makes autopilots, drones, self-driving cars, autonomous vacuum cleaners, etc. possible. And we are only just beginning down the path of inventing intelligent machines that interact with their environment. The ability of machines to sense and respond in more complex ways will increase as we are able to further complexify and miniaturize electronics.
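The "register and respond" sense of sensing described above is easy to express in code. A minimal sketch in Python; the battery readings are simulated, not any particular device's API:

```python
# Minimal "register and respond" loop: a machine senses its own battery
# level and alerts when it runs low. Readings here are simulated.
def read_battery_level(t):
    return max(0.0, 1.0 - 0.07 * t)  # stand-in for a real sensor read

LOW_BATTERY = 0.2  # illustrative alert threshold

for t in range(15):
    level = read_battery_level(t)
    if level < LOW_BATTERY:
        print(f"t={t}: battery at {level:.0%} -- requesting recharge")
        break
    print(f"t={t}: battery at {level:.0%} -- operating normally")
```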

I think people get hung up on the philosophical chestnut of qualia - on the “what it feels like” problem. The problem is that we can never know for sure what it feels like to be anyone, or anything, except ourselves or, for that matter, whether others “feel” anything at all. Everyone else might be philosophical zombies. However, since we have very similar bodies, brains and sensoria, and since our behaviours are similar in many situations, we are entitled to reject solipsism and infer that others are sentient like us and “feel” more or less like we do.

The only way we can ever know if someone or some thing is “sensing” or “feeling” anything is to observe their behavior, to look at neural activity using fMRI or other technology, and, if they can communicate, to ask them. If we can invent non-biological brains we could, presumably, find out in the same way whether they are “feeling” anything. If a robot were to report that it felt a bit low on energy due to the time since its last battery recharge, what is to stop us from taking this report at face value, just as we would if a human told us that they were feeling a bit low on energy because they hadn’t eaten for several days? I think the question of whether a robot is capable of actually “feeling” anything is not all that important. Why should only biological machines like us be capable of “feeling” and “sensing”? Why not non-biological machines? Why are we forced to take a biocentric view?

The nervous system of the species of nematode called C. elegans contains only 302 neurons. With these it senses its environment and finds food to sustain itself and power reproduction. In a 2023 paper published in Nature the authors demonstrate

“how the nervous system of C. elegans can be modelled and simulated with data-driven models using different neural network architectures. Specifically, we target the use of state-of-the-art recurrent neural network architectures such as Long Short-Term Memory and Gated Recurrent Units and compare these architectures in terms of their properties and their accuracy (Root Mean Square Error), as well as the complexity of the resulting models. We show that Gated Recurrent Unit models with a hidden layer size of 4 are able to accurately reproduce the system response to very different stimuli. We furthermore explore the relative importance of their inputs as well as scalability to more scenarios.”

“The fact that we are able to replace a complex biophysical model with a simpler recurrent neural network with few neurons also means the interpretability improves (helps to better understand the modelled neural circuits).”

(This paper, Learning the dynamics of realistic models of C. elegans nervous system with recurrent neural networks, is available online for free at Nature.)
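As a rough illustration of the kind of surrogate model the paper describes, here is a minimal sketch in Python using PyTorch (an assumption; the paper's own code, data, and training details are not reproduced here). A GRU with hidden size 4, as in the quoted result, is fitted to map a stimulus sequence to a response sequence, with accuracy reported as RMSE:

```python
# Sketch: fit a small GRU (hidden size 4) to reproduce a stimulus-response
# mapping, loosely in the spirit of the C. elegans paper quoted above.
# The data here is synthetic; the real work uses recorded neural dynamics.
import torch
import torch.nn as nn

class TinyGRU(nn.Module):
    def __init__(self, hidden_size=4):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, 1)

    def forward(self, x):
        h, _ = self.gru(x)       # (batch, time, hidden)
        return self.readout(h)   # (batch, time, 1)

torch.manual_seed(0)
stimulus = torch.randn(32, 100, 1)                          # toy inputs
response = 0.5 * torch.tanh(0.1 * stimulus.cumsum(dim=1))   # toy dynamics

model = TinyGRU()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(stimulus), response)
    loss.backward()
    optimizer.step()

print(f"final RMSE on the toy data: {loss.sqrt().item():.4f}")
```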

The evidence to hand suggests that reproducing the workings of biological neural networks, and sentience, in non-biological substrates is not impossible. If this is so, then it is not clear that we are forced to conclude that sentience is only possible in biological machines. And, if sentience and intelligence are not limited to biological machines, then maybe the sky's the limit for autonomous, self-reproducing, self-improving non-biological machines that we may eventually send out to explore the galaxy. They will be sentient and intelligent and capable of evolving. And saying that they won't really be "life" will not be worth saying.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#468982
Sy Borg wrote: October 18th, 2024, 4:41 am [re-quoting post #468969 in full, as above]
I'm not discussing my alleged motivations, nor the whole tirade of ad hominems (highlighted in red). I can only say that it's common knowledge that resorting to ad hominems is the clearest sign that someone is losing an argument.

About the straw man argument (highlighted in blue), I have already clarified my positions, but the straw man keeps coming back. What can one do.

About the very little that is left, I will say: you're ambivalent on whether the future can be predicted with precision or not, since in the same breath you say "how deep that rabbit hole goes over deep time, it's impossible to say" and then go on to assert that something is certain to happen: human intelligence will be superseded. You have also said that self-improving, self-replicating machines will be made, that autonomous agency will emerge from non-living things, etc. My stance on this is quite clear: we cannot know what humans will achieve in technological development, including the highly sophisticated machines envisioned by proponents of the technological singularity. Just to be clear: I believe that most if not all talk about AI in this forum and mainstream media is ultimately nourished by the singularity hypothesis, which goes as follows:

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence. (Wikipedia)
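The feedback loop at the heart of Good's model is simple enough to sketch numerically. The Python toy below illustrates the hypothesis's arithmetic, not evidence for it: everything hinges on an improvement coefficient k that nothing in the hypothesis pins down.

```python
# Toy arithmetic of the "intelligence explosion": each generation's
# capability determines how much it can improve the next generation.
# Whether the loop explodes or fizzles depends entirely on k, which is
# a free assumption, not an established quantity.
def run_loop(k, capability=1.0, generations=10):
    history = [capability]
    for _ in range(generations):
        capability *= 1 + k * capability  # self-improvement step
        history.append(capability)
    return history

print("k=0.5 :", [round(c, 1) for c in run_loop(0.5)])   # runaway growth
print("k=0.01:", [round(c, 3) for c in run_loop(0.01)])  # barely moves
```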
My charges against this hypothesis are:

1) What is called artificial intelligence rests on the assumption that minds are biological computers, so one should be able to recreate minds in highly sophisticated computers, but these assumptions are wrong. It's been a long debate since Turing, but I'm confident where I stand. There's a direct relationship between proponents of AI-as-real intelligence and the singularity hypothesis.

2) Machines are lifeless, non-sentient. The assumption from proponents of AI-as-real-intelligence (also the singularity hypothesis) is that the more sophisticated the computers, the more "intelligent" they get, the closer to becoming life-emulating, sentient entities. The conceptual base is that life and sentience are emergent properties of intelligence. I say this is nonsense.

3) Proponents of AI-as-real-intelligence (also the singularity hypothesis) believe that Generative AI and LLM (Large Language Models) are the holy grail of human-like computer intelligence, getting us closer to machines becoming life-emulating, sentient entities. Because this is not real intelligence, nor real life, nor real sentience, I say this is nonsense. It has been demonstrated that these models still can't think, reason, nor have interests, etc. They cannot have interests because they don't have any "feeling" apparatus.

4) Technological growth is the result of human action, and the nature of technology itself is instrumentality: it is a tool invented and used by humans. Its growth is not a "natural growth" outside of human society. It is very unlikely that, spontaneously and without direct human intervention, the products of human technology will become uncontrollable by ceasing to be instrumental and becoming agents on their own.

5) Even in the highly unlikely scenario that humans managed to create life-emulating intelligent machines that are agents on their own, pursuing their own interests, it would imply that they constitute a new race or class of entities with the power of social action. If such a sci-fi scenario were possible, it would indeed bring unforeseeable consequences for human civilization, as the singularity hypothesis predicts, but that new history would be entirely undetermined and contingent, just as human history is right now.
Favorite Philosopher: Umberto Eco Location: Panama
#468983
Plato’s cave metaphor adds to our understanding of the cognitive trap of ignorance. There is an article from the MIT Press citing Daniel DeNicola’s book ‘Understanding Ignorance: The Surprising Impact of What We Don’t Know’. The cave is delimited by the voices of its various keepers, and the unsuspecting dwellers, who had never seen the light, had no idea of anything else and did not care. The book names this concept “the comfort of ignorance”. Perhaps Plato, in the famous cave metaphor, was casting Socrates as a dweller and not as a keeper, for if he were a keeper then the dwellers would know less than nothing… IMO, he was a keeper, in modern terms an influencer with a comforting method.
#468985
Count Lucanor wrote: October 18th, 2024, 11:28 am
[re-quoting in full Sy Borg's post and Count Lucanor's reply above, including the five charges against the singularity hypothesis]
I already know your motivation. I anticipated that you would rush to declare this an ad hominem attack and cynically use that to declare "victory". It's a game that I've seen you play multiple times before.

You destroy the context of conversations by breaking them into pieces and acting as if this were a competitive sport – a clash of egos. It would be refreshing if you could explore and share ideas with goodwill and respect rather than reducing everything to a p1ssing contest.

Throughout this whole debate I have been productive and positive, while you have been negative and have constantly used subtle ad hominems and straw men, always aiming to present me as either a crackpot “AI enthusiast” or a religious dualist, despite knowing that neither is the case. Most times I let your digs slide or gently diverted. You didn’t notice and kept on hammering with little accusations, over and over. When I pointed out your multiple errors, you snipped them out.

None of your arguments above hold water. Word salads lacking in depth.

1) Straw man. No one here is saying brains are biological computers. Not all phenomena in nature happen the same way. Sometimes there are catalysts.

2) Mistaken assumptions. Irrelevant rambling, unrelated to my arguments.

3) See above. Irrelevant rambling, unrelated to my arguments.

4) Shallow. Ignores that some emergence is induced by catalysts.

5) Short-sighted, off-track with wild speculations, ignores context, and is unrelated to my arguments.


It is obvious that AI has potentials not seen in any other technology. None of your arguments against this hold water. They are simply denial.

Further, you cannot explain why it is only possible (in your mind) for sentience to exist in watery, carboniferous entities, and chose to play games to avoid answering.
#468986
Pattern-chaser wrote: October 18th, 2024, 7:45 am
Sy Borg wrote: October 17th, 2024, 4:10 pm [re-quoting the "this is certain" claims above]
Not "certain", I suggest, but merely possible? For example, AIs have not yet been programmed to have the capability of replication. That they might be so programmed in the future makes your speculation possible, but not inevitable?
AIs are already programming better than a percentage of programmers.
1. In early 2023, OpenAI’s ChatGPT passed Google’s exam for high-level software developers.

2. Later in 2023, GitHub reported that 46% of code across all programming languages is built using Copilot, the company’s AI-powered developer tool.

3. Finally, DeepMind’s AlphaCode in its debut outperformed human programmers. When pitted against over 5,000 human participants, the AI beat 45% of expert programmers.
https://peterhdiamandis.medium.com/will ... 79a8ac4279

Do you think that AI will stop progressing, even though its progress has so far been exponential?
#468991
Count Lucanor wrote: October 18th, 2024, 11:28 am ... believe that most if not all talk about AI in this forum and mainstream media is ultimately nourished by the singularity hypothesis, which goes as follows:

[the Wikipedia definition of the technological singularity, quoted in full above]
But, Count Lucanor, what if the singularity hypothesis is not the hypothesis being argued for? I want to argue only that there is no reason to believe that sentience and intelligence can only be housed in biological organisms. The so-called “singularity” might be possible – I’m unsure about that – but it is not what I argue for.
Count Lucanor wrote: October 18th, 2024, 11:28 amMy charges against this hypothesis are:
1) What is called artificial intelligence rests on the assumption that minds are biological computers, so one should be able to recreate minds in highly sophisticated computers, but these assumptions are wrong. It's been a long debate since Turing, but I'm confident where I stand. There's a direct relationship between proponents of AI-as-real intelligence and the singularity hypothesis.
What prevents you from seeing brains as biological computers? You say that you are confident that brains are not biological computers. What gives you this confidence? Could you explain why you believe that brains cannot be made of inorganic materials? If it were possible that structures made from inorganic materials could house brain-like processes, what would prevent you from entertaining the idea that minds could emerge from these brain-like structures?
Count Lucanor wrote: October 18th, 2024, 11:28 am2) Machines are lifeless, non-sentient. The assumption from proponents of AI-as-real-intelligence (also the singularity hypothesis) is that the more sophisticated the computers, the more "intelligent" they get, the closer to becoming life-emulating, sentient entities. The conceptual base is that life and sentience are emergent properties of intelligence. I say this is nonsense.
You say above that the conceptual base of AI proponents is that “life and sentience are emergent properties of intelligence”. But that is Idealism, and it is not my assumption. Rather, I think sentience and intelligence have been emergent properties of life, and it’s hard to see why sentience and intelligence must only be associated with the biological processes of organic life.
Count Lucanor wrote: October 18th, 2024, 11:28 am3) Proponents of AI-as-real-intelligence (also the singularity hypothesis) believe that Generative AI and LLM (Large Language Models) are the holy grail of human-like computer intelligence, getting us closer to machines becoming life-emulating, sentient entities. Because this is not real intelligence, nor real life, nor real sentience, I say this is nonsense. It has been demonstrated that these models still can't think, reason, nor have interests, etc. They cannot have interests because they don't have any "feeling" apparatus.
I don’t believe the current crop of LLMs are sentient, or that they have interests. However, they certainly have abilities we associate with intelligence. These abilities, and our understanding of neural networks, seem to me like a humble start on the road to eventually building brain-like structures that perform similarly to organic brains.
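To make the analogy concrete, here is the basic unit such networks are built from, reduced to a few lines of Python. The weights are arbitrary numbers chosen for illustration; in a real network they are learned from data. Whether stacking billions of these units amounts to anything brain-like is, of course, the very question at issue.

import math

def neuron(inputs, weights, bias):
    # An artificial "neuron": a weighted sum of its inputs passed
    # through a squashing function (here, the sigmoid).
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# A tiny two-layer "network" with hand-picked weights; modern LLMs
# stack billions of such units, with weights learned from data.
x = [0.5, 0.2]
h1 = neuron(x, [0.8, -0.4], 0.1)
h2 = neuron(x, [-0.3, 0.9], 0.0)
print(neuron([h1, h2], [1.2, -0.7], 0.05))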
Count Lucanor wrote: October 18th, 2024, 11:28 am4) Technological growth is the result of human action and the nature of technology itself is instrumentality, it is a tool invented and used by humans. Its growth is not a "natural growth" outside of human society. It is very unlikely that spontaneously, without direct human intervention, the products of human technology become uncontrollable by ceasing to be instrumental, becoming agents on their own.
I agree that at present it is unlikely. But, down the road, is it impossible in principle?
Count Lucanor wrote: October 18th, 2024, 11:28 am5) Even in the highly unlikely scenario that humans managed to create life-emulating intelligent machines, to be agents on their own, pursuing their own interests, it would imply that they are constituted as a new race or class of entities with the power of social action. If such a sci-fi scenario were possible, it would indeed bring unforeseeable consequences for human civilization, as the singularity hypothesis predicts, but that new history would be entirely undetermined and contingent, just as human history is right now.
Right. Their future would be undetermined and contingent. But does that make it impossible? We inhabit a deterministic universe in which contingent processes such as evolution by natural selection unfold. Why should we think that such processes are only possible for organisms like us? Why, in deep time, could evolution of some form not play a part in the development of autonomous, self-replicating machines that we build and send out to explore and colonize the galaxy?
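Notice that selection itself is substrate-neutral. The toy loop below evolves plain bit strings with nothing but replication, variation and selection; the fitness function is an arbitrary stand-in for whatever pressures self-replicating machines might face. It is a sketch of the bare logic of evolution, not a claim about how such machines would actually work.

import random

random.seed(0)

def fitness(genome):
    # Arbitrary stand-in for environmental pressure: more 1s = "fitter".
    return sum(genome)

def mutate(genome, rate=0.05):
    # Variation: each bit flips with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# A population of 30 random 20-bit "replicators".
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                     # selection
    population = [mutate(random.choice(survivors))  # replication + variation
                  for _ in range(30)]

print("best fitness after 50 generations:", max(map(fitness, population)))

Nothing in that loop cares whether the replicator is organic; the open question is only whether physical machines could ever close the replication loop in the real world.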

Apologies for all the questions. It's just that I'm trying to better understand your position.
Favorite Philosopher: Hume Nietzsche Location: Antipodes