Philosophy Discussion Forums
A Humans-Only Philosophy Club

The Philosophy Forums at OnlinePhilosophyClub.com aim to be an oasis of intelligent, in-depth, civil debate and discussion. Topics discussed extend far beyond philosophy and philosophers. What makes us a philosophy forum is more about our approach to the discussions than the subject being debated. Common topics include, but are absolutely not limited to, neuroscience, psychology, sociology, cosmology, religion, political theory, ethics, and so much more.

This is a humans-only philosophy club. We strictly prohibit bots and AIs from joining.


Use this forum to discuss the philosophy of science. Philosophy of science deals with the assumptions, foundations, and implications of science.
#431256
Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point. But will it be worth it? Some of us seem to believe that future sentient A.Is will "help humanity progress" and "help make our lives better", but why is it that we expect A.Is to be more altruistic than human beings?

Is it because we expect them to be more intelligent? Does higher intelligence necessarily correlate with higher altruism? Or might it be otherwise?

Is it because we expect them to have a different so-called "brain structure"? Then again, we haven't even collectively decided on "how exactly" we're going to build the brains of sentient A.Is.

Or is it because we expect that it wouldn't take much effort for A.Is to help humanity progress?

We still cannot completely rule out the possibility of future sentient A.Is simply using us for their own benefit, and then throwing us out of their technological presence once they're done with us. We cannot completely rule out the possibility of them demanding assimilation and subjugation, rather than treating us as equals. All of this may seem illogical and unjust, but only if one sees it from an altruistic perspective. On the other hand, it is completely logical and justified if one sees it from an egoist perspective.

How can we possibly be able to predict if sentient A.Is in the future would be more altruistic than selfish?
#431264
GrayArea wrote: December 18th, 2022, 8:32 am How can we possibly be able to predict if sentient A.Is in the future would be more altruistic than selfish?
"How can we possibly be able to predict"? I don't think we can. For as long as the programming of an AI does not allow the unit to change or add to its own programming, there is a reasonable chance that the unit's behaviour can be controlled. Once this is no longer the case, it seems impossible to predict what might happen. 🤔🤔🤔
Favorite Philosopher: Cratylus Location: England
#431268
Pattern-chaser wrote: December 18th, 2022, 9:31 am
"How can we possibly be able to predict"? I don't think we can. For as long as the programming of an AI does not allow the unit to change or add to its own programming, there is a reasonable chance that the unit's behaviour can be controlled. Once this is no longer the case, it seems impossible to predict what might happen. 🤔🤔🤔
Though I do feel like when it comes to sentient A.Is, the scientists will take a direction where all they do is simply create an empty brain—just a vessel for consciousness—and then let the A.I fill it up by itself, making choices fully on its own.

Even though in this scenario the A.I won't be able to change or add to its programming, it would still have the freedom to lean more towards either self-interest or altruism.

Would it be possible for us to control the A.I's behaviors only through controlling its vessel for consciousness, instead of the actual content of its consciousness, of which its behaviors are a part? I'm not entirely sure, but I would be open to the possibilities.

Perhaps, like I briefly mentioned before, there could indeed be different ways to model an artificial brain that would make it more inclined towards either self-interest or altruism. I imagine the kind of artificial brain that has a weaker sense of self would be more likely to be altruistic, and vice versa. Perhaps we could create an artificial brain modeled after the human brain under the effects of psychedelics such as LSD, which are believed to weaken one's sense of self and increase a sense of interconnectedness with the world.
#431290
GrayArea wrote: December 18th, 2022, 8:32 am Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point.
That's a bold statement. There's no sign that the most advanced machines have ever felt anything. They calculate with complex algorithms, but that's not the same as experiencing something.
Favorite Philosopher: Umberto Eco Location: Panama
#431293
Count Lucanor wrote: December 18th, 2022, 7:59 pm
That's a bold statement. There's no sign that the most advanced machines have ever felt anything. They calculate with complex algorithms, but that's not the same as experiencing something.
You are right in that there has been no sentient A.I so far that feels anything. I just think that this has less to do with where we are right now and more to do with how far we are willing to go.
#431304
GrayArea wrote: December 18th, 2022, 8:39 pm
Count Lucanor wrote: December 18th, 2022, 7:59 pm
That's a bold statement. There's no sign that the most advanced machines have ever felt anything. They calculate with complex algorithms, but that's not the same as experiencing something.
You are right in that there has been no sentient A.I so far that feels anything. I just think that this has less to do with where we are right now and more to do with how far we are willing to go.
Where we are right now does have implications for how far we can go, and how far we can go is certainly a constraint to consider in deciding what we want to achieve and what effort we should invest in it.

https://iep.utm.edu/chinese-room-argument/
Favorite Philosopher: Umberto Eco Location: Panama
#431305
Artificial intelligence is not true intelligence. It is a deterministic construct bound by its program. Or better, it is a program with a robotic appendage. One must rather ask about the selfishness or altruism of the coders, for to ask about the selfishness or altruism of an inert program doesn't make any sense.
#431341
GrayArea wrote: December 18th, 2022, 8:32 am How can we possibly be able to predict if sentient A.Is in the future would be more altruistic than selfish?
Pattern-chaser wrote: December 18th, 2022, 9:31 am "How can we possibly be able to predict"? I don't think we can. For as long as the programming of an AI does not allow the unit to change or add to its own programming, there is a reasonable chance that the unit's behaviour can be controlled. Once this is no longer the case, it seems impossible to predict what might happen. 🤔🤔🤔
GrayArea wrote: December 18th, 2022, 10:42 am Though I do feel like when it comes to sentient A.Is, the scientists will take a direction where all they do is simply create an empty brain—just a vessel for consciousness—and then let the A.I fill it up by itself, making choices fully on its own.

Even though in this scenario the A.I won't be able to change or add to its programming, it would still have the freedom to lean more towards either self-interest or altruism.
OK, I won't quibble about exactly what "programming" refers to. But if the AI has the freedom to "lean towards" this or that, or if it is "just a vessel" that it fills for itself, then it is out of the control of its creators, and its future actions will be unpredictable, becoming more so as it continues to make its own 'adaptations' to the world as it 'sees' it.
Favorite Philosopher: Cratylus Location: England
#431351
Hi Gray Area

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point.

As we don't know what the necessary and sufficient conditions for conscious experience are (i.e. we don't understand the mind-body relationship, and don't even know how we could go about understanding it), we don't know if sentient AI is possible.
How can we possibly be able to predict if sentient A.Is in the future would be more altruistic than selfish?

I too don't know how to predict a self-learning/unprogrammed AI's nature, or what it would be like to be such a being. Nor whether our notions of altruism, self, will or anything else would be close to what it's like to be an AI. We're at the stage of trying to build one and seeing what happens. I don't equate intelligence with altruism, though. Human altruism results from a specific evolutionary history as social mammals; if something similar isn't programmed in, I wouldn't expect it to naturally pop up via increasingly complex programming processes. Nor do we know what 'good' might mean to such a critter - maybe the satisfaction of more information stimulation would be what it values, or tasty electricity, who knows. And it might have no way of empathetically understanding what we value.

Basically, if we create something more intelligent than us with agency we can't control, sci-fi tells us: don't give it legs, and keep the off button handy till you know what you're dealing with!

Ideally we'd learn to live and work together for mutual benefit, realising that as sentient creatures they'd not just be our slaves. But in a capitalist world where Zuckerberg and Musk types will be largely controlling the way we proceed based on commercial exploitation, it's not very reassuring.
#431354
Pattern-chaser wrote: December 18th, 2022, 9:31 am "How can we possibly be able to predict"? I don't think we can. For as long as the programming of an AI does not allow the unit to change or add to its own programming, there is a reasonable chance that the unit's behaviour can be controlled. Once this is no longer the case, it seems impossible to predict what might happen. 🤔🤔🤔
Even if we don't allow the unit to change or add to its own programming, as long as it is extremely complicated, we don't know what will happen. The cost of preventing (more) problems from Y2K was estimated at 100 billion dollars in the US, and that was actually a fairly easy set of problems to predict. It does matter whether the AI is somehow connected to the web, or can manage to connect itself. But I am skeptical that we have the ability to know and control all the variables. And now problems are global. Nanotech, GM products and AI may all affect every single cell on the planet. Global warming, it seems to me, is less of a threat. Yes, it could cause billions of deaths, but life, including human life, would continue. Mess up with these other things.....
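As a toy illustration of that predictability point (an analogy only, not a model of any AI system): even a three-line deterministic program can defeat long-range prediction once it is sensitive to its inputs, as the chaotic logistic map is. A minimal Python sketch:

[code]
# Two runs of the same deterministic rule x -> 4x(1-x), the logistic map,
# started one billionth apart. Chaos here stands in for "extremely
# complicated": the program is fully determined, yet practically
# unpredictable beyond a short horizon.
x1, x2 = 0.400000000, 0.400000001   # initial conditions a billionth apart
for step in range(60):
    x1 = 4.0 * x1 * (1.0 - x1)      # identical update rule for both runs
    x2 = 4.0 * x2 * (1.0 - x2)
print(abs(x1 - x2))  # of order 1: the two trajectories now disagree completely
[/code]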
#431358
GrayArea wrote: December 18th, 2022, 8:32 am Hi all,

Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point. But will it be worth it? Some of us seem to believe that future sentient A.Is will "help humanity progress" and "help make our lives better", but why is it that we expect A.Is to be more altruistic than human beings?
Artificial intelligence is not true intelligence. It is a deterministic construct bound by its program. Or better, it is a program with a robotic appendage. One must rather ask about the selfishness or altruism of the coders, for to ask about the selfishness or altruism of an inert program doesn't make any sense.
Favorite Philosopher: Aristotle and Aquinas
#431381
Bernardo Kastrup was asked whether his philosophical position, namely analytic idealism, provides any basis for claims that silicon computers are conscious. He says, "None whatsoever, because what analytic idealism says is that everything is in consciousness, not that everything is conscious; these are two completely different statements."

In a video, he goes on to say the following:
Now what about AI, artificial intelligence? I have something to say about this because it's a topic very close to me. I also have a doctorate in computer engineering and computer science, and I did work with AI even when I was at CERN. Back in the 90s, the theory of AI was largely the same as today; the only difference is that we have faster computers now, so we can do much more. But the theory is still largely the same: neural networks, backpropagation, non-linear transfer functions, all that good stuff. Already in the 90s we could build data acquisition systems for physics experiments at CERN that could identify physics data just as well as a physicist, but much faster. They could make a decision every 25 nanoseconds, so you could say that the artificial neural networks we built back then were, at least for that class of problem, as intelligent as a human physicist. So were they intelligent? I would say yes. Intelligence is a measurable property of a system: you can measure it from the outside; you can measure how a system responds to environmental challenges and to data. It is objectively measurable, and we can build intelligent systems. We are already doing that, and I see no a priori reason why we couldn't, in the future, build a system that is as intelligent as a human for many more classes of problems, perhaps even all the classes of problems a human comes across. The problem is that the AI community often conflates intelligence with consciousness. They think that an artificially intelligent computer is also a conscious computer, in the sense of having a private experiential inner life of its own, its own subjective perspective on the world. But these two things are completely different.

Consciousness is not an objectively measurable property. From the outside there is no way to determine whether a computer, a calculator, or an abacus has its own conscious point of view on the world; the only way to know is to be the thing. The only way to know if a computer is conscious is to be the computer. And this conflation leads to all kinds of absurd implications. You might think that for someone like me, who says consciousness is the fabric of reality, the implication is that computers are conscious: computers exist, existence at a foundational level is consciousness, so everything is conscious. No! Absolutely not! There is a fundamental difference between the following two statements:
Statement number one: everything is in consciousness and made of consciousness
Statement number two: everything is conscious in and of itself.

To say that everything is in consciousness is different from saying that everything is conscious. When we say that a computer is conscious, what we mean is that it has its own dissociated, private inner life, and idealism does not imply that that is the case at all. Under idealism there are dissociated alters of the universal consciousness, and living beings are examples of those, but not computers. Why make this difference? For the same reason that I don't think a cup is conscious, or that the floor tiles are conscious, or that this chair is conscious. Nature tells us empirically that we are conscious: we have a private conscious inner life of our own. I cannot read your thoughts; you cannot read mine. My conscious inner life is private. Now, your behaviour is analogous to mine, and you are analogous to me in structure and medium. You are a metabolizing, carbon-based, wet, moist, living creature whose behaviour is analogous to mine, so I have very good empirical reasons to think that you too have a private conscious inner life of your own. And I could play this game down to bacteria. My cats look different from me from the outside, but if I zoom in with a microscope, they're identical to me. They are also carbon-based, warm, moist organisms that metabolize, that do DNA transcription, protein folding, ATP burning, mitosis, all that good stuff that inheres in metabolism. Even an amoeba metabolizes, and even an amoeba or a paramecium, single-celled organisms, have behaviour in some way analogous to mine. Paramecia go after food and run from danger. Amoebae construct little houses out of mud particles, and they metabolize at a microscopic level. They are very much like me, so I grant them the hypothesis that they too have a conscious inner life of their own. Whereas a silicon computer is a completely different thing: it's not a carbon-based, warm, moist organism that metabolizes; it's a silicon-based thing that operates according to electric fields and switches that open and close.

We have no empirical reason to think that silicon computers too are what dissociative processes in the mind of nature look like. Absolutely no empirical reason to make that jump. It's an entirely arbitrary jump, and the reason this jump is made in the AI community is the following: AI researchers confuse computation with consciousness. Computation is a concept we created. We invented the notion of computation, and we invented it in such a way as to abstract from the medium. So an abacus made of wood computes, and a modern computer made of silicon and running on electricity computes, because we defined the meaning of the word computation to be independent of the underlying medium. Anything can compute if it changes states. The light switch in your living room has two states: lights on, lights off. You flip it between two states, and that's a computation. Why? Because we defined the concept of computation such that it abstracts away from the medium and focuses only on state changes, on and off.

So computation is medium-independent by definition, and then the AI researchers say, well, consciousness too. But no, consciousness is not something we invented. It's not a theoretical abstraction, a theoretical concept; it's the thing we are before we begin to theorize. It precedes theory. You are not free to just define consciousness the way you want. I mean, you can do that, but then you are playing your own game in your own private world, like a wild potato underground, as the B-52s used to say. Consciousness, the thing most people refer to, is given in nature; it's something that precedes theory, and it is not medium-independent unless you redefine it arbitrarily and create your own language. We are not at liberty to think of consciousness as independent of the medium, and by consciousness here I mean a dissociated, private conscious inner life of the type you and I have. We are not at liberty to separate that from its medium, because nature is telling us it seems to happen only in a certain medium, namely biology: warm, moist, carbon-based organisms that metabolize. But AI people conflate computation with consciousness, and they think they can give birth to a privately conscious being made of silicon. Freud used to talk of penis envy, the envy women have of men because men have an extra part to their bodies. I like to call this phenomenon in the AI community "womb envy", because it is the envy men have of women's capacity to give birth to privately conscious entities in nature. So they try to make up for it by conflating computation with consciousness and indulging in entirely arbitrary fantasy.

Now, let me try to drive home to you why I think this is pure fantasy. I can run a simulation of kidney function on my home computer, a simulation accurate down to the molecular level. Does that mean that my computer will urinate on my desk? Of course not! A simulation is not the same as the thing simulated, and we all understand that when it comes to pee, or to anything else. But when it comes to consciousness, because it is such a discombobulating mystery under the arbitrary assumptions of materialism, we don't have that intuition, and we think that if a silicon computer simulates the patterns of information flow in the human brain, then the computer will be conscious. I submit to you that this is as absurd as thinking that because I simulate kidney function on my computer, my computer will pee on my desk. It's as arbitrary and nonsensical a thought step, but people don't see that.
[yid]https://www.youtube.com/watch?v=5YYpS4FXmz8[/yid]
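To put a concrete picture behind the machinery the transcript names (neural networks, non-linear transfer functions, backpropagation), here is a minimal Python/NumPy sketch. It is an illustrative toy of that 1990s-era technique, not Kastrup's or CERN's actual code: a tiny network that learns XOR, a classic task that requires the non-linearity.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Four input pairs and their XOR targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of four units; random starting weights, zero biases.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))

def sigmoid(z):
    # The non-linear transfer function.
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
[/code]

In Kastrup's terms, the sketch exhibits measurable-from-the-outside intelligence on one tiny class of problem; nothing in it bears one way or the other on whether the system is conscious.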
Favorite Philosopher: Alan Watts Location: Germany
#431384
Leontiskos wrote: December 19th, 2022, 5:24 pm
Artificial intelligence is not true intelligence. It is a deterministic construct bound by its program. Or better, it is a program with a robotic appendage. One must rather ask about the selfishness or altruism of the coders, for to ask about the selfishness or altruism of an inert program doesn't make any sense.
But they don't just program the computer's behavior. They are allowing for learning, which means that new behaviors and heuristics can be arrived at by the AIs. This may not entail that it is conscious or self-aware, but as far as its behavior and the potential dangers AI presents are concerned, this does not matter. Just as it could adjust the electric grid, say, for our benefit, making it more efficient, it could 'decide' to have different goals, even as a non-conscious learning thingie. They can develop their own goals, given that AI creators are modeling AIs on neural networks that are extremely flexible, rather than on the now-primitive "if X, then do Y" type of programming.
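To make that contrast concrete, here is a toy Python sketch (illustrative only; the "grid load" scenario and all names are invented for the example) of a hand-coded "if X, then do Y" rule next to a behavior the program derives from feedback, a rule the programmer never typed into the policy:

[code]
import random

random.seed(1)

# Style 1: the behavior is fixed by the coder ("if X, then do Y").
def hard_coded_policy(load):
    return "shed_load" if load > 0.8 else "do_nothing"

# Style 2: the behavior emerges from feedback. The learner starts from an
# arbitrary threshold and nudges it whenever the environment disagrees.
threshold = 0.5
for _ in range(1000):
    load = random.random()
    desired = load > 0.8          # feedback signal from the environment
    predicted = load > threshold  # the learner's current "decision"
    if predicted != desired:      # on a mistake, move toward the example
        threshold += 0.05 * (load - threshold)

print(round(threshold, 2))  # ends near 0.8, though the policy never contained it
[/code]

The learned threshold ends up doing the same job as the hard-coded one, but it was arrived at from data rather than written down, which is the sense in which such systems can surprise their authors.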
#431398
Moreno wrote: December 20th, 2022, 7:29 am
But they don't just program the computer's behavior. They are allowing for learning, which means that new behaviors and heuristics can be arrived at by the AIs. This may not entail that it is conscious or self-aware, but as far as its behavior and the potential dangers AI presents are concerned, this does not matter. Just as it could adjust the electric grid, say, for our benefit, making it more efficient, it could 'decide' to have different goals, even as a non-conscious learning thingie. They can develop their own goals, given that AI creators are modeling AIs on neural networks that are extremely flexible, rather than on the now-primitive "if X, then do Y" type of programming.
I am a computer scientist by trade. You should be putting many more words than "decision" in scare quotes, including "learning", "develop their own goals", and so on. All of the "learning" and "decisions" are predetermined by the code. It doesn't matter that the code generates second-order behavior; it is still deterministically derived from the code and completely different from true intelligence. Like any program, any inputs that the AI receives should be anticipated and accounted for by the programmer. That a programmer does not fully understand the code he writes does not mean that his program is sentient.
Everything I said in my first post holds.
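A minimal Python sketch of that determinism point (a hypothetical toy, not any real AI system): a "learning" loop whose every "decision" is fixed in advance by its code and its seed, so re-running it reproduces the "learned" result bit for bit:

[code]
import random

def train(seed):
    # A toy "learner": it adapts a single weight toward the target rule
    # y = 0.7 * x from a stream of pseudo-random inputs.
    rng = random.Random(seed)
    weight = 0.0
    for _ in range(10000):
        x = rng.uniform(-1.0, 1.0)
        error = 0.7 * x - weight * x  # feedback signal
        weight += 0.01 * error * x    # gradient-style update
    return weight

# Same code, same seed, same inputs: the "learned" weight is identical,
# down to the last bit, on every run.
print(train(42) == train(42))  # True
[/code]

Whether one concludes with Leontiskos that this determinism rules out true intelligence, or with Moreno that unpredictability in practice is what matters, the determinism itself is easy to exhibit.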
Favorite Philosopher: Aristotle and Aquinas