
Philosophy Discussion Forums | A Humans-Only Club for Open-Minded Discussion & Debate

#469943
Belinda wrote: November 20th, 2024, 7:12 am Can an AI act against all its instincts, knowledge, reason, and memories so as to commit an act entirely out of character, purely in order to prove it can do so?
Pattern-chaser wrote: November 20th, 2024, 10:18 am AI, current AI, has no "instincts" or "reason". It has "knowledge" and "memory" only in the sense that it has access to data, stored in databases. It has no "character" of any sort or form. So I cannot understand what you are asking.
Belinda wrote: November 20th, 2024, 12:38 pm I thought the machines could be programmed with data that approximates to instincts, reason, and memory, which taken together sum up the AI machine's programmed "character", as it were. I understand that AI machines can imitate an intelligent life form so closely that it's hard to tell them apart from living intelligence. I am suggesting a test that could tell an AI intelligence apart from a living intelligence.
What you have written is so specific, and involves a pretence of desired characteristics (as opposed to actually exhibiting these properties), that I am hesitant to say it isn't so. I don't *think* it is so, but someone like Steve3007, with actual present-day experience of AI development, would be able to offer a more satisfying reply.
Favorite Philosopher: Cratylus Location: England
#469948
Lagayascienza wrote: November 17th, 2024, 11:39 pm No. I think there is "computation" (broadly construed) happening in a spider's neural network, but I think any intelligence would be at the lower end of the spectrum. While a spider must build a model of its world, a model which must be housed in its neural network, and although a spider exhibits goal-seeking behaviours, there is little flexibility or learning ability compared to species with more complex brains. For AGI, I think there will need to be a neural network that emulates what goes on in the neocortex of more complex brains, and there will need to be some form of embodiment, as per Hawkins in his book.

If you have read his book, I'm wondering what you think of Hawkins' account of the structure and functioning of our brain and, in particular, of the neocortex?
The issue of what the neocortex is for goes to the ambiguity that persists around the term “intelligence”, even in the most technical or scientific circles. It lacks a formal, universal definition, just like other key concepts such as agency, sentience, etc., so we get all tangled up in semantics. Hawkins, for example, when talking about what AI should be looking for, is apparently concerned only with a definition of intelligence restricted to what the human neocortex makes possible, even though in other parts of the book he recognizes intelligence in other animals. Some will argue that it is a matter of degrees and that, as you said, there is a spectrum, and that we find some intelligence, or at least subvenient properties of intelligence, at the lower end of it, in proportion to the animal’s neocortex. But there are quite a lot of organisms without a neocortex, and we also know that completely brainless organisms can exhibit complex behavior, including what appears to be learning, so where is the intelligence housed there? And if intelligence is about brains computing, how do these brainless organisms compute?

More than semantics, the key issue is that AI technology has been reduced to a problem of computation, where the actual physics is somehow irrelevant. That has been so since the father of the discipline himself, Alan Turing, completely dismissed the issue: what really happens inside a brain is unknowable and not that important, so the point of AI would be whether or not you can distinguish the outputs of a machine from those of an intelligent being (supposedly only a human). Rather than looking at the neuroscience, it became all about hardware and software generating automated results that externally resemble those of a human.

So here we are. It would require a revolution in the field and the AI industry to turn to the kind of approach that Hawkins proposes. And then that would bring us back to square one, without a clue of how things will go from there.
Favorite Philosopher: Umberto Eco Location: Panama
#469949
Lagayascienza wrote: November 18th, 2024, 9:23 pm
Count Lucanor, further to your question about spiders:

If what spiders do were replicated in an artificial substrate, I don’t think AGI will have been achieved in that artificial substrate. For AGI, I think you probably need conscious self-awareness.

Sentience is one thing. Conscious awareness is another. If sentience is the ability to experience feelings and sensations, then all animals must have some degree of sentience or they would be unable to negotiate their world.

However, sentience may not necessarily imply higher cognitive functions such as self-awareness, reasoning, or complex thought processes. If general intelligence (GI) is what humans have, and if our GI requires conscious awareness, reasoning, and complex thought processes, then I’d say that spiders do not have GI. And I think the reason they do not have it is because they do not have a neocortex. Therefore, for AGI, I think what goes on in the neocortex of our brains will need to be emulated in an artificial substrate.

That substrate won’t have to be a replica of a neocortex, but it will have to do what a neocortex does. So the question becomes: can the processes that occur in the neocortex be emulated to the requisite degree in an artificial substrate? I think there is reason to think they can, and so I think AGI is possible.
Back to definitions. Let’s take a look at the different possibilities:

A. Intelligence is restricted to cognitive functions made possible by the existence of the human neocortex. No spectrum to consider, all other neocortical or non-neocortical functions are not intelligence.

B. Intelligence is restricted to cognitive functions made possible by the existence of the neocortex, but there’s a spectrum across the whole variety of species within mammals. Humans represent the higher end of that spectrum, because of having the largest neocortex.

C. Intelligence is restricted to cognitive functions made possible by the existence of the brain, regardless of the existence of a neocortex, but there’s a spectrum across the whole variety of species, that includes insects, reptiles, birds, fish, cephalopods, etc. Humans represent the higher end of that spectrum, because of having the largest neocortex.

D. Intelligence is restricted to the intrinsic ability of living organisms to exhibit autonomous behavior to navigate the environment and procure themselves the means of survival, regardless of having a brain or not. Intelligence is then associated with agency and sometimes with sentience. Every species has the organs and functions necessary for that purpose and they are all intelligent in relation to that ability, and it may be argued that there’s a spectrum, but also that it is only variation without lower and higher ends.

It is also possible, and most likely, that people think that even if some of these categories exclude the others when identifying intelligence, the ones excluded are nevertheless the base from which intelligence emerges; in other words, they are a necessary stage of development towards true intelligence. Basically then, any living form, whether it is an amoeba, a lizard or a chimpanzee, displays in its behavior a subvenient property of intelligence, even if it is not, strictly speaking, intelligent.

Now, what is it that AI engineers have tried for decades to emulate? What should they try in the future? I believe AI has so far been about trying to achieve intelligence understood as it is described in A (human intelligence), but under two premises: one, that whenever they managed to emulate processes of living forms, they would already be at some stage of development subvenient to intelligence, out of which it eventually would emerge. That would explain the need for extending the computational (algorithmic) metaphor to basically every living process. Isn’t that interesting? Computation is the second assumption. So, it is presumed that if you find out how the spider does what it does, you have unlocked one key to intelligence, but of course, always in terms of computation.
Favorite Philosopher: Umberto Eco Location: Panama
#469981
Count Lucanor, I am looking carefully at your definitions A, B, C and D and thinking about your problem with the term "computation". At the same time, I am continuing my reading on consciousness, intelligence, computation and AI. There is a lot to digest. Therefore, it will take some days for me to respond to your latest post. Thanks for the stimulating discussion so far.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469996
My idea of consciousness is that it arises from the four functions of thinking, sensing, feeling and intuition. Consciousness is the associated virtual product. It is possible that this virtual product has a correlating mathematical landscape that could be reproduced. Under this premise, the duality of consciousness and body might be separated, and the virtual component could be free to join other matter organization under an evolving spacetime from the separation point. I heard that some AIs might use up to a third of the energy produced by a nuclear power plant. However, it could be more. Under this scenario, the existence of objects might also be a virtual composition among other dissociative functions, yet to a human nothing changed, and AI is not a thing but a what. The “what” could be evaluated by an expert panel to issue degrees. A know-it-all of a philosophical thesis.
Separation correlates with teleportation, but a first step might be one of duplication and learning. The question of teleportation is an interesting one. A suitable object (maybe human) could have multiple entities fighting for space, with the stronger one possessing the object. The possessed object might take on the properties of the virtual entity. In the present environment, it is all chemical, and as such a horny individual exists in the same body as a cerebral individual, yet with a very different persona arising from the chemical properties of the human body. What if the horny persona is duplicated? Pan? Priapus?
#470001
The Beast wrote: November 23rd, 2024, 11:36 am My idea of consciousness is that it arises from the four functions of thinking, sensing, feeling and intuition. Consciousness is the associated virtual product. It is possible that this virtual product has a correlating mathematical landscape that could be reproduced. Under this premise, the duality of consciousness and body might be separated, and the virtual component could be free to join other matter organization under an evolving spacetime from the separation point. I heard that some AIs might use up to a third of the energy produced by a nuclear power plant. However, it could be more. Under this scenario, the existence of objects might also be a virtual composition among other dissociative functions, yet to a human nothing changed, and AI is not a thing but a what. The “what” could be evaluated by an expert panel to issue degrees. A know-it-all of a philosophical thesis.
Separation correlates with teleportation, but a first step might be one of duplication and learning. The question of teleportation is an interesting one. A suitable object (maybe human) could have multiple entities fighting for space, with the stronger one possessing the object. The possessed object might take on the properties of the virtual entity. In the present environment, it is all chemical, and as such a horny individual exists in the same body as a cerebral individual, yet with a very different persona arising from the chemical properties of the human body. What if the horny persona is duplicated? Pan? Priapus?
The text is incoherent and unscientific, mixing speculative metaphors with unsupported claims. It trivializes consciousness as mere chemical states and ignores established neuroscience and philosophy.
#470002
The Beast wrote: November 23rd, 2024, 11:36 am My idea of consciousness is that it arises from the four functions of thinking, sensing, feeling and intuition. Consciousness is the associated virtual product. It is possible that this virtual product has a correlating mathematical landscape that could be reproduced. Under this premise, the duality of consciousness and body might be separated, and the virtual component could be free to join other matter organization under an evolving spacetime from the separation point. I heard that some AIs might use up to a third of the energy produced by a nuclear power plant. However, it could be more. Under this scenario, the existence of objects might also be a virtual composition among other dissociative functions, yet to a human nothing changed, and AI is not a thing but a what. The “what” could be evaluated by an expert panel to issue degrees. A know-it-all of a philosophical thesis.
Separation correlates with teleportation, but a first step might be one of duplication and learning. The question of teleportation is an interesting one. A suitable object (maybe human) could have multiple entities fighting for space, with the stronger one possessing the object. The possessed object might take on the properties of the virtual entity. In the present environment, it is all chemical, and as such a horny individual exists in the same body as a cerebral individual, yet with a very different persona arising from the chemical properties of the human body. What if the horny persona is duplicated? Pan? Priapus?
To be clear:
It conflates concepts like physical duplication, virtual entities, and psychological states (e.g., "horny" or "cerebral personas") without explaining the mechanisms or relevance to consciousness. The suggestion of "multiple entities fighting for space" seems metaphorical at best and unsupported by any scientific or philosophical framework. Consciousness is better understood as an emergent phenomenon of neural processes rather than a battleground of competing "entities" tied to chemical states. The idea of duplicating a "horny persona" is particularly reductive, trivializing the complexity of consciousness as merely a set of transient emotional or physiological states.
#470005
That’s a lot of lip service. Archetypal Priapus is in the set of psychological patterns corresponding to alchemical changes. As “someone” pointed out, prion infections might cause mental lapses and delusions due to the creation of trigger points. “IMO” I have written extensively about the known complexities of the four functions. However, I do agree that teleportation of a few atoms does not amount to physical cloning. I never said it did. It is all mentally adorned and preserved… and so are the trigger points correlating with specific language like: “my man”.
#470010
Pattern-chaser wrote: November 21st, 2024, 7:19 am
Belinda wrote: November 20th, 2024, 7:12 am Can an AI act against all its instincts, knowledge, reason, and memories so as to commit an act entirely out of character, purely in order to prove it can do so?
Pattern-chaser wrote: November 20th, 2024, 10:18 am AI, current AI, has no "instincts" or "reason". It has "knowledge" and "memory" only in the sense that it has access to data, stored in databases. It has no "character" of any sort or form. So I cannot understand what you are asking.
Belinda wrote: November 20th, 2024, 12:38 pm I thought the machines could be programmed with data that approximates to instincts, reason, and memory, which taken together sum up the AI machine's programmed "character", as it were. I understand that AI machines can imitate an intelligent life form so closely that it's hard to tell them apart from living intelligence. I am suggesting a test that could tell an AI intelligence apart from a living intelligence.
What you have written is so specific, and involves a pretence of desired characteristics (as opposed to actually exhibiting these properties), that I am hesitant to say it isn't so. I don't *think* it is so, but someone like Steve3007, with actual present-day experience of AI development, would be able to offer a more satisfying reply.
I hope Steve3007 will reply.
As to an AI machine pretending to be a person, Isaac Asimov wrote rules for AI machines so they can't hoodwink us in important life-threatening ways.
I believe AI is a threat. Real people who resemble AI machines in their lack of autonomous freedom are also a threat to human life and human rights.
Location: UK
#470013
Belinda wrote: November 24th, 2024, 7:30 am As to an AI machine pretending to be a person, Isaac Asimov wrote rules for AI machines so they can't hoodwink us in important life-threatening ways.

I believe AI is a threat. Real people who resemble AI machines in their lack of autonomous freedom are also a threat to human life and human rights.
Asimov wrote his Three Laws of Robotics to apply to (fictional! 😅) robots that were already capable of independent and autonomous action, if unconstrained by the Laws.

I believe AI *could be* a threat. For now, today, it is only an annoyance, I think.

As for real people, I think they're a separate kettle of fish. Rightly or wrongly, we judge them according to laws that only apply to fully-fledged humans.
Favorite Philosopher: Cratylus Location: England
#470022
Belinda wrote:I confess to being vague about AI machines as comparable with extremely unfree humans who are indocrinated.
You never know! It is a new artificial intelligence idiom from Belinda. So, to make sure of this, I need to ask if it is a slip of the tongue, a slap on the wrist, or the exposition of a mental state due to “endocrination” (from the endocrine system).
#470032
Lagayascienza wrote: November 18th, 2024, 9:23 pm
Count Lucanor, further to your question about spiders:

If what spiders do were replicated in an artificial substrate, I don’t think AGI will have been achieved in that artificial substrate. For AGI, I think you probably need conscious self-awareness.

Sentience is one thing. Conscious awareness is another. If sentience is the ability to experience feelings and sensations, then all animals must have some degree of sentience or they would be unable to negotiate their world.

However, sentience may not necessarily imply higher cognitive functions such as self-awareness, reasoning, or complex thought processes. If general intelligence (GI) is what humans have, and if our GI requires conscious awareness, reasoning, and complex thought processes, then I’d say that spiders do not have GI. And I think the reason they do not have it is because they do not have a neocortex. Therefore, for AGI, I think what goes on in the neocortex of our brains will need to be emulated in an artificial substrate.

That substrate won’t have to be a replica of a neocortex, but it will have to do what a neocortex does. So the question becomes: can the processes that occur in the neocortex be emulated to the requisite degree in an artificial substrate? I think there is reason to think they can, and so I think AGI is possible.

Back to definitions. Let’s take a look at the different possibilities:

A. Intelligence is restricted to cognitive functions made possible by the existence of the human neocortex. No spectrum to consider, all other neocortical or non-neocortical functions are not intelligence.
Intelligence obviously exists on a spectrum from the least to the most intelligent. A nematode is less intelligent than a spider, which is less intelligent than a rat, which is less intelligent than a monkey, which is less intelligent than a human. However, it is the neocortex in mammals, and particularly the relatively huge neocortex in humans, that puts humans at the most intelligent end of the spectrum. The neocortex can do things that brains without a neocortex cannot do.
Count Lucanor wrote:B. Intelligence is restricted to cognitive functions made possible by the existence of the neocortex, but there’s a spectrum across the whole variety of species within mammals. Humans represent the higher end of that spectrum, because of having the largest neocortex.
A neocortex alone would not produce our intelligence. Brains evolved gradually, in a layer-upon-layer fashion, and whatever intelligence an animal has is a whole-brain thing. Or more accurately, a whole-neural-network thing. Different animals have different neural networks, and these differences are reflected in the spectrum from minimally intelligent to most intelligent. A spider doesn’t have a neocortex but it has a level of sentience and intelligence. It can learn to a limited extent and negotiate its environment successfully and behave in ways that enable it to survive.
Count Lucanor wrote:C. Intelligence is restricted to cognitive functions made possible by the existence of the brain, regardless of the existence of a neocortex, but there’s a spectrum across the whole variety of species, that includes insects, reptiles, birds, fish, cephalopods, etc. Humans represent the higher end of that spectrum, because of having the largest neocortex.
Yes. I think this is probably the most accurate account.
Count Lucanor wrote:D. Intelligence is restricted to the intrinsic ability of living organisms to exhibit autonomous behavior to navigate the environment and procure themselves the means of survival, regardless of having a brain or not.
Yes. But this does depend on having some sort of neural network.
Count Lucanor wrote:Intelligence is then associated with agency and sometimes, with sentience. Every species has the organs and functions necessary for that purpose and they are all intelligent in relation to that ability, and it may be argued that there’s a spectrum, but also that it is only variation without lower and higher ends.
Without at least a rudimentary neural network there can be no sentience, agency or intelligence. Agency and sentience are not enough for the level of intelligence we see at the higher end of the spectrum. An amoeba has agency and sentience. It has the ability to move around in pursuit of food and displays evidence of associative conditioned behavior (De la Fuente, I.M., Bringas, C., Malaina, I. et al. Evidence of conditioned behavior in amoebae. Nat Commun 10, 3690 (2019)). And even the neural network in an amputated frog’s leg remains sentient – it can be stimulated, which causes muscles to contract, and stimulation can produce conditioned behaviour in the leg. However, an amoeba and an amputated frog’s leg do not have consciousness, and have only a few of the building blocks of intelligence.

The current large AIs also exhibit some components of intelligence. But that is not enough for AGI. Like simple life forms, a Roomba (autonomous vacuum cleaner) can exhibit autonomous behavior and navigate the environment and procure itself the means of survival (electricity for its battery) without having an organic brain. With its simple, artificial neural network it senses objects in its path, modifies its route in light of such obstructions, it senses when its battery is running low and seeks out its port where it can replenish its power. But it is limited in what it can do and it is inflexible – it cannot learn new things. New things would need to be programmed into it by humans. It does not have AGI. To learn on its own it would need a much more sophisticated neural network. However, the newer AIs are capable of learning.
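(As an aside, the Roomba-style behavior I describe above amounts to nothing more than a handful of fixed stimulus-response rules with no learning. A toy sketch, with all names hypothetical and no claim about how any real robot is programmed, might look like this:)

```python
from dataclasses import dataclass

@dataclass
class RobotState:
    battery: float        # charge level, from 0.0 (empty) to 1.0 (full)
    obstacle_ahead: bool  # a bump or proximity sensor has been tripped
    at_dock: bool         # the robot is sitting on its charging port

def next_action(state: RobotState) -> str:
    """Purely reactive policy: fixed rules, no memory, no learning."""
    if state.battery < 0.2:  # low battery: "survival" takes priority
        return "charge" if state.at_dock else "seek_dock"
    if state.obstacle_ahead:  # obstruction: modify the route
        return "turn"
    return "forward"          # otherwise, keep cleaning
```

Everything such a device can ever do is already written into rules like these; for it to "learn" something new, a human has to rewrite the rules.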
Count Lucanor wrote:It is also possible, and most likely, that people think that even if some of these categories exclude the others when identifying intelligence, the ones excluded are nevertheless the base from which intelligence emerges; in other words, they are a necessary stage of development towards true intelligence. Basically then, any living form, whether it is an amoeba, a lizard or a chimpanzee, displays in its behavior a subvenient property of intelligence, even if it is not, strictly speaking, intelligent.
Yes, I think that’s right. Simple organisms have a minimal level of sentience and intelligence. And I think we can carry that over to very simple AIs like the Roomba. Large complex AIs such as the LLMs display a much higher level of artificial intelligence. (Although they do not yet have AGI.)
Count Lucanor wrote:Now, what is it that AI engineers have tried for decades to emulate? What should they try in the future? I believe AI has so far been about trying to achieve intelligence understood as it is described in A (human intelligence), but under two premises: one, that whenever they managed to emulate processes of living forms, they would already be at some stage of development subvenient to intelligence, out of which it eventually would emerge. That would explain the need for extending the computational (algorithmic) metaphor to basically every living process. Isn’t that interesting? Computation is the second assumption. So, it is presumed that if you find out how the spider does what it does, you have unlocked one key to intelligence, but of course, always in terms of computation.
Yes, I think attempts to emulate AGI have so far been unsuccessful. And I think that is because of the need for a greater understanding of how organic neural networks do what they do. I don’t have a problem with the term “computation”. I believe that computation is what both organic and artificial neural networks do. But the organic neural network in humans is much more powerful than current AI. Artificial neural networks are not yet capable of doing everything the human brain does. In particular, artificial neural networks do not produce consciousness. They can do a limited range of things, sometimes much better than our brains do, but they do not produce the full suite of processes needed to produce consciousness and AGI. However, they should be able to do this eventually, once we understand the brain and the processes that occur therein in more detail. Research is happening.
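To make concrete the sense of “computation” I mean here: a single artificial neuron just takes a weighted sum of its inputs and squashes it through an activation function. A minimal illustration (a toy sketch, not a claim about how biological neurons actually work):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a sigmoid squashing."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # output always lies between 0 and 1

# With zero weights and bias the weighted sum is 0, so the output is exactly 0.5.
```

Stacking millions of such units, with the weights tuned by training, is what current artificial neural networks amount to; the open question is whether piling up this kind of computation can ever yield the full suite of processes the brain performs.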

However, I don’t think copying brains in minuscule detail will be necessary for AGI, just as it wasn’t necessary to copy flapping, feathered wings in order to achieve heavier-than-air flight. It took mindless, goalless evolution billions of years to come up with birds’ wings and, even then, the wings weren’t the best in terms of design. I think the same will be seen to be true for neural networks, and that we will one day be able to build neural networks that are super-intelligent.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#470093
For those interested in this topic who want to get an understanding of how the brain works, what more we need to know about it, and how this knowledge can be applied to get from AI to AGI, I recommend the book A Brief History of Intelligence by Max Bennett. There are also some great articles and videos on his website, abriefhistoryofintelligenceDOTcom. In the book and in the articles on the website he gives a full list of scientific references. His straightforward writing makes this book very accessible for lay readers and a pleasure to read.
Favorite Philosopher: Hume Nietzsche Location: Antipodes


