Philosophy Discussion Forums | A Humans-Only Club for Open-Minded Discussion & Debate

#469279
Sy Borg wrote: October 26th, 2024, 6:52 pm Emergence happens over time, and I think it will again when it comes to AI.
Pattern-chaser wrote: October 27th, 2024, 7:23 am Emergence can only occur, I think, when the relevant skill, or the potential for it, already exists. Do AIs have the potential for intelligence or understanding? Not today, no. Tomorrow remains to be seen.
Sy Borg wrote: October 27th, 2024, 4:38 pm How does your claim about emergence stack up with the emergence of biology?
I'm not quite sure what you're asking. I think emergence can only occur in something capable of change, yes? Current AI is incapable of changing. Humans can impose change upon them, but that's not the same. AIs are currently unable to evolve because they can't change. In the future, I suppose that may change...
Favorite Philosopher: Cratylus Location: England
#469291
Pattern-chaser wrote: October 28th, 2024, 11:38 am
Sy Borg wrote: October 26th, 2024, 6:52 pm Emergence happens over time, and I think it will again when it comes to AI.
Pattern-chaser wrote: October 27th, 2024, 7:23 am Emergence can only occur, I think, when the relevant skill, or the potential for it, already exists. Do AIs have the potential for intelligence or understanding? Not today, no. Tomorrow remains to be seen.
Sy Borg wrote: October 27th, 2024, 4:38 pm How does your claim about emergence stack up with the emergence of biology?
I'm not quite sure what you're asking. I think emergence can only occur in something capable of change, yes? Current AI is incapable of changing. Humans can impose change upon them, but that's not the same. AIs are currently unable to evolve because they can't change. In the future, I suppose that may change...
Pay attention! You are a programmer and speaking as if you are unaware of the approaching singularity - the point where AI designs better AI than humans can.

I will assume you had a brain glitch and that you are well aware that AI will reach a level where the AI it builds will be more sophisticated than AI that humans can build. Once that happens, exponential development would seem possible.

Are you confident that you can predict the outcome of that exponential process? Autonomous self-improving AI seems likely to start a whole new round of evolution. Rocks and chemistry evolved to the point that made biology possible, and biology has evolved to a point that has made whatever AI will become possible.
#469298
Count Lucanor, nowhere did I say that AI is currently intelligent, conscious or capable of feeling anything. I have said that current AI exhibits some of the processes and behaviours commonly associated with intelligence. I said further that there is no reason to think that building AIs housed in sensate bodies and capable of intelligence and consciousness is, in principle, impossible.

Sculptor 1, you did mention simulation. You said
Sculptor 1 wrote:The evolution which led to more sophisticated forms of intelligence is driven and moderated by feelings. Feelings cannot be simulated in a machine.
You brought up "simulated" feelings. Not me.

I get the impression that some people are just opposed to the very idea of artificial intelligence. They just flatly state that it is impossible. I think this is incorrect. And I think that the computational theory of mind is more likely to be true than other proposals. If I am right, then all mental states and properties will eventually be explained by scientific accounts of physiological processes and states. In which case, further progress towards AI that is truly intelligent can be made. If it is untrue that intelligence and mind have a physiological basis, then they will be forever mysterious. However, we already know a lot about the physiological basis of intelligence and mind and so I do not think the mysterians are right.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469301
Lagayascienza wrote: October 28th, 2024, 10:49 pmI get the impression that some people are just opposed to the very idea of artificial intelligence. They just flatly state that it is impossible. I think this is incorrect.
Yes, it seems that some want to make a point against AI boosterism. Trouble is, those who are interested and curious seem to be wrongly assumed to be following technocratic sci-fi wet dreams.

As you know, and can relate to, I'm just interested in the story - starting from simple basalt and obsidian on the early volcanic Earth and evolving to today. There have been a few pivotal events - the first oceans, abiogenesis, multicellularity, sentience, humanity and now, it seems, AI ... or whatever may emerge from AI.
#469305
Sy Borg wrote: October 27th, 2024, 4:38 pm How does your claim about emergence stack up with the emergence of biology?
Pattern-chaser wrote: October 28th, 2024, 11:38 am I'm not quite sure what you're asking. I think emergence can only occur in something capable of change, yes? Current AI is incapable of changing. Humans can impose change upon them, but that's not the same. AIs are currently unable to evolve because they can't change. In the future, I suppose that may change...
Sy Borg wrote: October 28th, 2024, 3:11 pm Pay attention! You are a programmer and speaking as if you are unaware of the approaching singularity - the point where AI designs better AI than humans can.
And you are not a software designer, so I understand your ignorance. AI cannot currently design anything. It can be used as a design tool, just as a compiler can, but that is a figurative light-year away from AIs themselves doing the 'designing'.

But you continue to ignore the point I have made many times about AI:
Sy Borg wrote: October 28th, 2024, 3:11 pm I will assume you had a brain glitch and that you are well aware that AI will reach a level where the AI it builds will be more sophisticated than AI that humans can build. Once that happens, exponential development would seem possible.

Are you confident that you can predict the outcome of that exponential process? Autonomous self-improving AI seems likely to start a whole new round of evolution. Rocks and chemistry evolved to the point that made biology possible, and biology has evolved to a point that has made whatever AI will become possible.
AI has not yet been designed to be able to "self-improve". It has not yet been programmed in such a way that it can update its own executable code. If and when AI is given that ability, all bets are off. But it will be a very brave, stupid, and reckless designer who gives AI that ability. For then they are truly beyond human control. Once Pandora's box is open, it cannot be closed again...
Favorite Philosopher: Cratylus Location: England
#469306
Sy Borg wrote: October 28th, 2024, 3:38 pm AI is obviously intelligent - you ask it questions and it answers appropriately.
AI is obviously not intelligent - you ask it questions, and it answers as it has been programmed to. It is impossible for it to do otherwise.
Favorite Philosopher: Cratylus Location: England
#469308
It's interesting to skim through the replies in this topic. A common theme seems to be the difference between the state of affairs right now and the likely state in the future, given current trends. Perhaps it is because of the present-tense wording of the original question that some people seem to take the view that no, AI is not intelligent now and that it's impossible to say what will happen in the future. That seems to me an odd attitude. Of course it's impossible to know with certainty what will happen in the future, but it's always possible to make predictions based on currently existing trends. We couldn't live without the ability to do that.

Of course, my view of what will happen with AI in the future depends on the continued existence of human life with the continued development of the relevant technologies towards ever greater complexity. If that stops then clearly nothing happens. But if it doesn't stop I think the development of manufactured objects with genuine intelligence, emotions, feelings, consciousness, sentience, etc is highly probable. My views there appear to be similar to those of Sy Borg and Lagayascienza.

I think the only way to consistently hold the view that those features could never exist in a manufactured object is to be some form of philosophical dualist. Or at least, to not be a philosophical materialist. As far as I can see, that is the only way to rationally hold the view that there is something in the structure of things like human brains that is forever beyond the reach of manufactured structures. You'd have to believe in the existence of some kind of non-material spirit or soul or whatever and you'd have to decree that this spirit/soul stuff cannot ever exist in manufactured structures but can only exist in naturally evolved biological structures. Possibly you might believe, as many do, that it only exists specifically in humans.

Also, unless you believe that the human brain is literally infinitely complex (a very non-materialistic view to hold, since infinity is an abstract concept) then I think, if you're rational, you must take the view that a manufactured object could be equally complex at a finite amount of time in the future. The human brain is, for sure, extremely complex. But if it has a large but finite amount of complexity, and if manufactured objects can be made to increase in complexity with time, then I see no logical way to deny that such objects could be as complex as human brains a finite time into the future.


Incidentally, for the past year I've been doing a masters degree in AI. That's one reason why I didn't post here for a while, as I was quite busy juggling that with my job. A lot of what you learn on the course is a more rigorous and in-depth look at aspects of AI that most interested casual readers are already aware of. I think most people who are interested are already aware of the general principles on which artificial neural networks operate. So I don't think the course necessarily gives the student much greater philosophical insight into the subject. But it is fun to play with different ANN architectures. If you're willing to learn a bit of the Python programming language, you can create a Google Colab account and start using the AI library called Keras to start designing neural networks pretty quickly. I recommend giving it a try!
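For anyone tempted to try it, here is a rough sketch of what a first Keras model looks like. The layer sizes and input shape below are my own arbitrary illustrative choices, not anything from a particular course; it assumes TensorFlow/Keras is installed, as it is by default on Colab.

```python
# A minimal Keras sketch: a small classifier for 28x28 images flattened
# to 784 inputs. Layer sizes here are arbitrary illustrative choices.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(784,)),                     # flattened image
    keras.layers.Dense(64, activation="relu"),     # one hidden layer
    keras.layers.Dense(10, activation="softmax"),  # ten output classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# With data loaded, training is a single call:
# model.fit(x_train, y_train, epochs=5)
```

From there, swapping in different architectures is mostly a matter of editing the list of layers, which is what makes it handy for experimentation.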
#469310
Pattern-chaser wrote:AI has not yet been designed to be able to "self-improve". It has not yet been programmed in such a way that it can update its own executable code. If and when AI is given that ability, all bets are off. But it will be a very brave, stupid, and reckless designer who gives AI that ability. For then they are truly beyond human control. Once Pandora's box is open, it cannot be closed again...
A computer program that can modify its own code is a pretty trivial thing to write and I'm sure it's been done many times before. I don't see anything about self-modifying code per se that makes it revolutionary or dangerous.
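To make that concrete, here is a toy Python sketch (my own illustration, not taken from any real AI system) of a program that builds new source code for one of its own functions and swaps it in at runtime:

```python
# Trivially "self-modifying" code: the program writes new source text for
# one of its own functions, compiles it with exec(), and replaces the old
# version. Mundane in itself - nothing here learns or improves.

def rebuild(multiplier):
    """Generate and compile a fresh version of f from a source string."""
    src = f"def f(x):\n    return x * {multiplier}\n"
    namespace = {}
    exec(src, namespace)  # compile and execute the generated source
    return namespace["f"]

f = rebuild(2)
print(f(21))    # 42
f = rebuild(3)  # the program replaces its own function
print(f(21))    # 63
```

Nothing revolutionary happens: the program only ever produces variants it was written to produce.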

On the subject of self-modification in AI: I'd say that modifying the weights in the neurons is a similar idea, and of course neural networks modify their own neurons in order to learn. You might say that so long as the code which describes the design of the neurons themselves is not self-modifying then the NN can't do anything genuinely creative, or something like that. But to me that's like saying that so long as a human being can't modify the operation of the laws of physics which describe the way our bodies and brains work, we can't do anything creative. I'd disagree.
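That weight-modification idea fits in a few lines. As a toy example of my own (a classic perceptron, not anything from the thread), a single artificial neuron can adjust its own weights until it has learnt the logical AND function:

```python
# A single neuron "modifying itself": the perceptron learning rule nudges
# the weights after each wrong answer until the neuron computes AND.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights, adjusted by the learning loop itself
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                  # training epochs
    for x, target in data:
        error = target - predict(x)  # perceptron learning rule:
        w[0] += lr * error * x[0]    # move each weight in the
        w[1] += lr * error * x[1]    # direction that reduces error
        b += lr * error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

No line of executable code changes during training; only the numbers the code operates on do. Whether that counts as "self-modification" is exactly the point under discussion.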
#469312
Steve3007 wrote: October 29th, 2024, 9:11 am Incidentally, for the past year I've been doing a masters degree in AI.
That's handy for us, then! 😃 I have 40 years of experience designing software, and a professional awareness of AI, and the progress of AI software. But nothing specific or detailed.

What do you think about allowing AI to modify its own programming? Do you think that would be wise?

Ooo, it seems you've replied while I was writing this post:
Pattern-chaser wrote:AI has not yet been designed to be able to "self-improve". It has not yet been programmed in such a way that it can update its own executable code. If and when AI is given that ability, all bets are off. But it will be a very brave, stupid, and reckless designer who gives AI that ability. For then they are truly beyond human control. Once Pandora's box is open, it cannot be closed again...
Steve3007 wrote: October 29th, 2024, 10:08 am A computer program that can modify its own code is a pretty trivial thing to write and I'm sure it's been done many times before. I don't see anything about self-modifying code per se that makes it revolutionary or dangerous.
Agreed. But the programs you are speaking of do not have the potential to achieve what AI might in the future. They are simple, contained programs; they have to be, otherwise the consequences of their existence could be as serious as those of autonomous AI. Self-modifying code is not fully testable: there are too many possibilities to test and confirm. So we would have to release some potentially world-rocking code without a clue as to what might happen.

If we were unlucky, an undiscovered bug might upset the apple-cart. And that is nothing (directly) to do with AI or self-modifying code.


Steve3007 wrote: October 29th, 2024, 10:08 am On the subject of self-modification in AI: I'd say that modifying the weights in the neurons is a similar idea, and of course neural networks modify their own neurons in order to learn. You might say that so long as the code which describes the design of the neurons themselves is not self-modifying then the NN can't do anything genuinely creative, or something like that. But to me that's like saying that so long as a human being can't modify the operation of the laws of physics which describe the way our bodies and brains work, we can't do anything creative. I'd disagree.
Neural networks do approach some sort of autonomy, I think. As you say, they can 'learn', and modify their "neurons" accordingly. If that kind of autonomy was programmed into AI, with connections to the internet, and (e.g.) power distribution infrastructure, and so on, then the possibilities are... endless. And not all of those possibilities benefit humanity. Autonomous AI is no longer under human control. This opens the way for a sci-fi horror: "We've built a monster!!!" 😱😭

It may not turn out that way, of course. But we released dingoes into Australia's ecosystem, and we exploded nuclear fission bombs without a clue as to the consequences of releasing all that radiation, and the deadly-poisonous radioactive by-products, into our environment. Our history gives good reason to be nervous, and cautious too, I think.
Favorite Philosopher: Cratylus Location: England
#469314
Pattern-chaser wrote:That's handy for us, then! 😃 I have 40 years of experience designing software, and a professional awareness of AI, and the progress of AI software. But nothing specific or detailed.
Maybe a bit handy, but perhaps not quite as handy as you might think. As I said, one of the things I've taken away from studying the subject formally is that it doesn't necessarily give you much more insight into the philosophical issues around the subject than you'd get from reading about it informally, if you already have some programming background.

I learnt a bit about the electro-chemical workings of biological neurons, then learnt about the way that artificial neurons are designed, how they're put together in networks, numerous different kinds of networks and different uses of AI. Learnt how to create a simple neural network from scratch, then how to use existing libraries (Keras, Tensorflow) to build more complex NNs, to do the various kinds of things that they're currently used for and the various aspects of real brain function that some of them seek to emulate. Chose a dissertation subject to research and got quite deeply into that. etc.

But despite all that, the deepest philosophical question ("Is it possible in principle to manufacture a conscious entity?") is not something that you cover much in an AI MSc course. At least not this one. There was a "philosophy of AI" module on offer, and I would have chosen it, but they withdrew it. Possibly because the university (of Kent) decided to shut down their philosophy department this year! (Don't get me started on that one!)
Pattern-chaser wrote:Agreed. But the programs you are speaking of do not have the potential to achieve what AI might in the future. They are simple, contained programs; they have to be, otherwise the consequences of their existence could be as serious as those of autonomous AI. Self-modifying code is not fully testable: there are too many possibilities to test and confirm. So we would have to release some potentially world-rocking code without a clue as to what might happen.
The existence of too many possibilities to practicably test is not a particular characteristic of software that can modify its own executable code. It applies to any software of extreme complexity. Complex deep neural networks (large networks with one or more hidden layers of neurons between the input and output layers) take in vast quantities of data and perform vast numbers of calculations on that data in order to propagate it forward through the network according to the weights on the inputs to the neurons, and to propagate adjustments to those weights back through the network. They are "black boxes" - non-deterministic for all practical purposes - because of that complexity, not because they modify their own executable code. And there is randomness at the heart of the whole system. All adjustments to weights, and decisions as to whether a given neuron should fire a signal to the next neuron, are probabilistic. Of course, those probabilistic calculations are based on the generation of pseudo-random numbers. But, as I said in a previous post, there's no reason why those pseudo-random number algorithms couldn't be replaced by the output from some kind of quantum event - making them as genuinely random as anything in the universe can be said to be.
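As an aside, the generator swap described above is easy even at the language level. Python, for instance, lets you replace its deterministic pseudo-random generator with the operating system's entropy pool. That is not a quantum source, but it is the same plug-in-a-different-generator idea:

```python
import random

# Pseudo-random: the Mersenne Twister is fully determined by its seed,
# so two generators started with the same seed produce identical streams.
seeded = random.Random(42)
replay = random.Random(42)
assert seeded.random() == replay.random()

# SystemRandom draws from os.urandom() instead: no seed, nothing to
# replay. A hardware (e.g. quantum) RNG could sit behind the same
# interface without the calling code changing at all.
os_rng = random.SystemRandom()
sample = os_rng.random()
print(0.0 <= sample < 1.0)  # True, though the value itself is unrepeatable
```

Both objects expose the same `random()`, `choice()`, etc., which is why the swap is invisible to code built on top of them.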

So I don't think it's self-modifying code that's the issue when it comes to trying to predict what these things will do. It's extreme complexity mixed with a big dose of randomness. That's the main takeaway from studying the subject - the vast complex arrays of data being processed, with huge quantities of computing power. Hence the sudden huge market for GPUs and the sudden seven-fold increase in the share price of Nvidia.

Running large neural networks on Google Colab is interesting because it makes it easy to compare running on a CPU with running on various kinds of GPU. The speed increase is amazing. One of the neural networks I designed for my dissertation project took about 40 minutes to train when run on a CPU and something like 10 seconds on one of the GPUs. I guess it would have been difficult to predict many years ago that the mathematics of 3D graphics (matrix transformations), and the public's love of 3D games, would result in hardware that would benefit AI, which relies on the same matrix/tensor mathematics. But I guess the history of scientific/technological advances is filled with these unforeseen crossovers.
#469320
Pattern-chaser wrote:So we would have to release some potentially world-rocking code without a clue as to what might happen.
Putting aside my quibbling with you about the importance or otherwise of self-modifying code, we could talk generally about software whose behaviour is, for all practical purposes, unpredictable, whether that's due to self-modification or extreme complexity mixed with randomness or whatever. And yes, that seems on the face of it like a disturbing thing. A large part of my day job (and I think used to be part of yours too) is trying to design software that is predictable, because it's a tool for doing a job, and we want tools to behave in the same way each time we use them in the same way. But when you're seeking to design something that emulates some aspects of the way creative beings like humans act, you don't necessarily want complete predictability. Human behaviour isn't entirely predictable. But it isn't entirely random and unpredictable either. It's complex.
#469331
Count Lucanor wrote: October 28th, 2024, 10:44 am
Lagayascienza wrote: October 26th, 2024, 1:23 am None of the above is to say that there are not important architectural and processing differences between biological computers and non-biological computers. For a good article and commentary about these differences see "10 Important Differences Between Brains and Computers" at Science Blogs.

There definitely are some important differences in size, complexity and processing but, as one commentator said, none of those differences prove that computers cannot eventually be built that could house sentience. We are certainly nowhere near being able to build computers with brain-like complexity housed in a sensate body which could do everything a human could do. But the differences in our current, comparatively simple, non-biological computers do not demonstrate that it is impossible to eventually construct sentient, intelligent computers.
The expression of a common fallacy: “if something has not been proven to be false, then there’s a hint that it is true”. OTOH, if something has not been proven to be true, then it has not been proven to be true. And if something has been proven to be false, then it is false. To my understanding, it has been proven that the statement “AI is intelligent” is false. Also, “the mind is a digital computer” is false.
I basically concur, except to say that the truth level of the last two statements is mitigated by a sort of convenience of usage.
1) Clearly AI uses the word "intelligent". So the idea that artificial intelligence is not intelligent might be somewhat incongruous until you actually think about what we mean by the term "intelligent", and
2) The idea that you can employ the analogy of a digital computer to help describe the workings of intelligence has its uses.
So in the same way that energy balance, calorie intake and storage can employ the analogy of a fridge (glycogen) and freezer (body fat), so too can we talk about software/hardware, RAM and ROM, as proxies for short-term and long-term memory, even though the human system of consciousness has nothing of the kind.

In that all language is metaphor, such devices are necessary though not sufficient for our full understanding.
#469334
Pattern-chaser wrote: October 29th, 2024, 8:58 am
Sy Borg wrote: October 27th, 2024, 4:38 pm How does your claim about emergence stack up with the emergence of biology?
Pattern-chaser wrote: October 28th, 2024, 11:38 am I'm not quite sure what you're asking. I think emergence can only occur in something capable of change, yes? Current AI is incapable of changing. Humans can impose change upon them, but that's not the same. AIs are currently unable to evolve because they can't change. In the future, I suppose that may change...
Sy Borg wrote: October 28th, 2024, 3:11 pm Pay attention! You are a programmer and speaking as if you are unaware of the approaching singularity - the point where AI designs better AI than humans can.
And you are not a software designer, so I understand your ignorance. AI cannot currently design anything. It can be used as a design tool, just as a compiler can, but that is a figurative light-year away from AIs themselves doing the 'designing'.
Actually, I have coded (at an elementary level) in machine language, BASIC and Javascript, and I have also worked in UAT, trying to fix an absolute beast of a legal application, designed by lawyers, with all the unnecessary detail that that situation entails. So I am not unfamiliar with the concepts, and your claim about my "ignorance" was both unwarranted and incorrect.

Further, your claim is wrong.
Pattern-chaser wrote: October 29th, 2024, 8:58 am But you continue to ignore the point I have made many times about AI:
Sy Borg wrote: October 28th, 2024, 3:11 pm I will assume you had a brain glitch and that you are well aware that AI will reach a level where the AI it builds will be more sophisticated than AI that humans can build. Once that happens, exponential development would seem possible.

Are you confident that you can predict the outcome of that exponential process? Autonomous self-improving AI seems likely to start a whole new round of evolution. Rocks and chemistry evolved to the point that made biology possible, and biology has evolved to a point that has made whatever AI will become possible.
AI has not yet been designed to be able to "self-improve". It has not yet been programmed in such a way that it can update its own executable code. If and when AI is given that ability, all bets are off. But it will be a very brave, stupid, and reckless designer who gives AI that ability. For then they are truly beyond human control. Once Pandora's box is open, it cannot be closed again...
If we are to do any serious work in space, autonomous self-improving robots will be essential. As Steve said, work is already being done to that end:
In recent developments that are nothing short of groundbreaking, Google DeepMind has unveiled a revolutionary advancement known as "Promptbreeder (PB): Self-referential Self-Improvement through Accelerated Evolution." This innovation represents a significant leap in the world of Artificial Intelligence (AI), as it enables AI models to evolve and improve themselves at a pace billions of times faster than human evolution.
https://newo.ai/the-evolution-of-self-i ... -learning/
#469335
Steve3007 wrote: October 29th, 2024, 9:11 am
I think the only way to consistently hold the view that those features could never exist in a manufactured object is to be some form of philosophical dualist. Or at least, to not be a philosophical materialist. As far as I can see, that is the only way to rationally hold the view that there is something in the structure of things like human brains that is forever beyond the reach of manufactured structures. You'd have to believe in the existence of some kind of non-material spirit or soul or whatever and you'd have to decree that this spirit/soul stuff cannot ever exist in manufactured structures but can only exist in naturally evolved biological structures. Possibly you might believe, as many do, that it only exists specifically in humans.
First, that's a false dilemma. As once noted by Searle, this argument (considering its context) implies that the question of whether the brain is a physical mechanism that determines mental states is exactly the same question as whether the brain is a digital computer. But they are not the same question, so while the latter should be answered with a NO, the former should be answered with a YES. That means one can deny that computational theory solves the problem of intelligence, while at the same time keeping the door closed to any dualism of the sort you're talking about. Secondly, even though trying to emulate brain operation stays within the problem of emulating a physical system, human technical capabilities are not infinite, so we can't predict that it will happen. Now, if researchers committed to achieving that result were focused on that goal, even if they had to discard trending approaches that do not actually work, so as to try other technologies, we could at least hope that they will achieve it some day. But the fact is that they're only trying the path set by Turing and others, that is, the path of the computational theory of mind. That path is a dead end; it doesn't take us where it promises.
Favorite Philosopher: Umberto Eco Location: Panama
