#469035
Lagayscienza wrote: October 19th, 2024, 2:36 am
Count Lucanor wrote: October 18th, 2024, 11:28 am ... believe that most if not all talk about AI in this forum and mainstream media is ultimately nourished by the singularity hypothesis, which goes as follows:

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence. (Wikipedia)
But, Count Lucanor, what if the singularity hypothesis is not the hypothesis that is argued for? I want to argue only that there is no reason to believe that sentience and intelligence can only be housed in biological organisms. The so called “singularity” might be possible – I’m unsure about that, but it is not what I argue for.
Surely there are many things being thrown around, but I think that ultimately they all relate to the core principles of AI founded by Turing, the ideas of Von Neumann, and Good's intelligence explosion. There's a narrative, an ideology built around these ideas, which dominates the field of computer technology even if the participants are not fully aware of its origin or all its developments. Take, for example, the idea of the "intelligence explosion" (reminiscent of the Cambrian explosion). It's already loaded with the assumption of an emergent intelligent life breaking out on its own as a result of algorithms becoming more complex. Not only is the full Turing ideology behind the curtains, but also the naturalization of human endeavors, so that they operate as independent, natural, spontaneous forces. The discussion obviously follows the path set by how you decide the issue of whether intelligence is only housed in biological organisms or not.
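To make the model under discussion concrete, here is a minimal, purely illustrative sketch of Good's feedback loop in Python. Every number in it (the 1.5× gain per cycle, the starting values) is an arbitrary assumption chosen only to show the shape of the argument, not a claim about any real system.

Code:

# Illustrative toy model of I. J. Good's "intelligence explosion":
# each generation is assumed to be a fixed factor smarter, and smarter
# generations are assumed to arrive proportionally faster. All numbers
# are arbitrary assumptions for illustration only.

intelligence = 1.0   # capability of generation 0, arbitrary units
elapsed = 0.0        # elapsed time, arbitrary units

for generation in range(1, 11):
    elapsed += 1.0 / intelligence   # smarter agents self-improve faster
    intelligence *= 1.5             # each cycle yields a smarter successor
    print(f"gen {generation:2d}: t = {elapsed:5.2f}, capability = {intelligence:8.2f}")

# The elapsed time is a geometric series converging to 3.0 while
# capability grows without bound -- the "finite-time divergence"
# intuition behind the singularity hypothesis.

Whether the two assumptions in that loop (a constant multiplicative gain, and speed proportional to capability) hold for any real software is exactly what is contested in this thread.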
Lagayscienza wrote: October 19th, 2024, 2:36 am
Count Lucanor wrote: October 18th, 2024, 11:28 am My charges against this hypothesis are:
1) What is called artificial intelligence rests on the assumption that minds are biological computers, so one should be able to recreate minds in highly sophisticated computers, but these assumptions are wrong. It's been a long debate since Turing, but I'm confident where I stand. There's a direct relationship between proponents of AI-as-real intelligence and the singularity hypothesis.
What prevents you from seeing brains as biological computers? You say that you are confident that brains are not biological computers. What gives you this confidence? Could you explain why you believe that brains cannot be made of inorganic materials?
Brains came first and computers much later, from people with brains. Living agents came first and technology later, from living agents. The correct mindset is this: if you’re going to argue that the brain is a computer and that agency is a type of technology, you have the burden of proof, and to make your case you have to provide the theoretical models and the empirical evidence that support them. Now, I know that such attempts are out there to discuss, but among the general public there seems to be an attitude of “let’s just believe what the tech lords tell us”, with the argument reduced to a mere “why not?”.

Anyway, the computational model has also been widely criticized. It reduces the mind to syntactical operations, the basis of algorithms and programming languages. It has been shown that such operations don’t carry semantic content, meaning, as this implies a sort of feeling of the world found only in organisms. Today’s most sophisticated software, such as generative AI and LLMs, has been shown not to have any of this. It’s no different from a pocket calculator that knows nothing about math.
Lagayscienza wrote: October 19th, 2024, 2:36 am
If it were possible that structures made from inorganic materials could house brain-like processes, what would prevent you from entertaining the idea that minds could emerge from these brain-like structures?
I’m skeptical and just playing cautious until some evidence arrives.
Lagayscienza wrote: October 19th, 2024, 2:36 am
Count Lucanor wrote: October 18th, 2024, 11:28 am 2) Machines are lifeless, non-sentient. The assumption from proponents of AI-as-real-intelligence (also the singularity hypothesis) is that the more sophisticated the computers, the more "intelligent" they get, the closer to becoming life-emulating, sentient entities. The conceptual base is that life and sentience are emergent properties of intelligence. I say this is nonsense.
You say above that the conceptual base of AI proponents is that “life and sentience are emergent properties of intelligence”. But that is Idealism and not my assumption. Rather, I think sentience and intelligence have been emergent properties of life, and I find it hard to see why sentience and intelligence must only be associated with the biological processes of organic life.
I’ll buy that you’re not assuming idealistic positions, but I can’t say the same about the rest. The pernicious influence of Idealism is all over the place. For now, sentience and intelligence ARE associated with the biological processes of organic life; if something changes, we will be able to see the evidence. As a purely theoretical model, the alternative has yet to leave the stage of philosophical, idealist-driven speculation.
Lagayscienza wrote: October 19th, 2024, 2:36 am
Count Lucanor wrote: October 18th, 2024, 11:28 am 3) Proponents of AI-as-real-intelligence (also the singularity hypothesis) believe that Generative AI and LLMs (Large Language Models) are the holy grail of human-like computer intelligence, getting us closer to machines becoming life-emulating, sentient entities. Because this is not real intelligence, nor real life, nor real sentience, I say this is nonsense. It has been demonstrated that these models still can't think, reason, or have interests. They cannot have interests because they don't have any "feeling" apparatus.
I don’t believe the current crop of LLMs are sentient, or that they have interests. However, they certainly have abilities we associate with intelligence. These abilities, and our understanding of neural networks, seem to me like a humble start on the road to eventually building brain-like structures that perform similarly to organic brains.
However, the computational model of mind and intelligence is at its base. That “intelligence” is not real intelligence, just a simulation of intelligence, in the same way a parrot can emulate human speech.
Lagayscienza wrote: October 19th, 2024, 2:36 am
Count Lucanor wrote: October 18th, 2024, 11:28 am 4) Technological growth is the result of human action, and the nature of technology itself is instrumentality: it is a tool invented and used by humans. Its growth is not a "natural growth" outside of human society. It is very unlikely that spontaneously, without direct human intervention, the products of human technology become uncontrollable by ceasing to be instrumental, becoming agents on their own.
I agree that at present it is unlikely. But, down the road, is it impossible in principle?
With the current models, it is impossible in principle. By definition, technology is instrumental to humans. I have explained in detail in previous posts in this thread why the Singularity scenario is very unlikely, as it involves social action from a race of machines.
Lagayscienza wrote: October 19th, 2024, 2:36 am
Count Lucanor wrote: October 18th, 2024, 11:28 am 5) Even in the highly unlikely scenario that humans managed to create life-emulating intelligent machines that were agents on their own, pursuing their own interests, it would imply that they are constituted as a new race or class of entities with the power of social action. If such a sci-fi scenario were possible, it would indeed bring unforeseeable consequences for human civilization, as the singularity hypothesis predicts, but that new history would be entirely undetermined and contingent, just as human history is right now.
Right. Their future would be undetermined and contingent. But does that make it impossible? We inhabit a deterministic universe in which contingent processes such as evolution by natural selection unfold. Why should we think that such processes are only possible for organisms like us? Why, in deep time, could evolution of some form not play a part in the development of autonomous, self-replicating machines that we build and send out to explore and colonize the galaxy?
I would like to see some evidence, not just “why not” speculations, without reference to the current state of our knowledge and capabilities. Would it be useful to talk about the possibility of teletransportation? I don’t think so.
Lagayscienza wrote: October 19th, 2024, 2:36 am
Apologies for all the questions. It's just that I'm trying to better understand your position.
No problem whatsoever.
Favorite Philosopher: Umberto Eco Location: Panama
#469054
The key to this issue, as with all things (though frequently taken for granted), is time.

Most here are dubious that AI will progress dramatically from what it currently is. They presumably figure that a world-ending catastrophe will occur first. Certainly, systems are being stretched to breaking point - be it record national deficits, overpopulation, low birth rates, extinctions, desertification and loss of arable land, loss of clean water, melting ice caps, increasing social divisions, social isolation, threat of pandemics, geopolitical tension between nuclear states, and even tension between genders. Or AI itself.

However, in wars 80% of people who die are civilians, and generally not the top brass who drive such programs, so even a major conflict would be unlikely to halt development. Thus, progress in AI development will continue (and it usually accelerates in warfare due to competitive pressures). It would take a truly world-ending event, like a massive comet impact, to destroy all industrial activity and stop AI progress.

It's hardly a stretch to imagine human civilisations persisting for thousands of years - a blink of the eye in evolutionary time, and no more distant from us than ancient Egypt. In that time, AI will become unrecognisable. I suspect that AI will be beyond our imaginations even in a century's time, given that AI can be used to create and train better AI. There's an exponential aspect to its development.

The nature of intelligence itself is going to change, and perhaps even the concept of sapience.
#469062
Pattern-chaser wrote: October 20th, 2024, 9:40 am So where do we go from here?
Gertie wrote: October 20th, 2024, 10:12 am Where we go - is try it and see what happens.
The possible effects on us of autonomous AI are significant. To just try it and see what happens is like releasing a deadly virus to see if it will spread the way the doom-sayers say it might. Or like sending GIs to roll around in the dust remaining at 'ground zero' after a nuclear explosion, to see what would happen. Well, we did it, and we saw. The GIs mostly died early (and avoidable) deaths from radiation-related disease.

I think caution and prudence are appropriate in such matters as this. AI could be the end of humanity. Not "will be" or "would be", but only "could be".


Gertie wrote: October 20th, 2024, 10:12 am Bearing in mind that if a computer can achieve agency, goals and perhaps a sense of wellbeing, that has both risks and welfare implications we ought to think through. And such considerations are not best left in the hands of tech corporations and billionaire owners.
With this I cannot disagree, especially not with your final sentence. 👍
Favorite Philosopher: Cratylus Location: England
#469064
Sy Borg wrote: October 21st, 2024, 5:25 am Most here are dubious that AI will progress dramatically from what it currently is. They presumably figure that a world-ending catastrophe will occur first. Certainly, systems are being stretched to breaking point - be it record national deficits, overpopulation, low birth rates, extinctions, desertification and loss of arable land, loss of clean water, melting ice caps, increasing social divisions, social isolation, threat of pandemics, geopolitical tension between nuclear states, and even tension between genders. Or AI itself.
Are we all aware of how much power is being used, across the globe, to *power* AI, and the computers and servers that run it? An example:
If you were to fully turn Google’s search engine into something like ChatGPT, and everyone used it that way—so you would have nine billion chatbot interactions instead of nine billion regular searches per day—then the energy use of Google would spike. Google would need as much power as Ireland just to run its search engine.
Whatever else AI is, it is computer-intensive. It requires many fast computers and servers just to do what it does. And the resulting power consumption can make your eyes water...
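For a rough sense of scale, here is a back-of-the-envelope check of that comparison. The per-query energy figure below is an assumption on my part (published estimates for LLM inference vary widely), and Ireland's roughly 29 TWh of annual electricity consumption is likewise an approximate, commonly cited figure.

Code:

# Back-of-envelope check of the "as much power as Ireland" claim.
# wh_per_query is an assumed figure; real per-interaction estimates
# vary by model, hardware and accounting method.

queries_per_day = 9e9      # from the quoted scenario
wh_per_query = 9.0         # assumed Wh per chatbot interaction

twh_per_year = queries_per_day * wh_per_query * 365 / 1e12
print(f"~{twh_per_year:.1f} TWh/year")   # ~29.6 TWh/year

# Ireland's total annual electricity consumption is on the order of
# 29 TWh, so under these assumptions the comparison is plausible.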
Favorite Philosopher: Cratylus Location: England
#469078
Obviously. Applying neck-of-the-woods statistics to the whole forest can produce strong deviations. There is an overall world birth index of 2.1, which corresponds to an increase in population (besides the increase in life expectancy). Some areas have higher population increases. For example, Nigeria (a rich country) is the most populated African nation and still growing: in 1960 it had forty-five million people and presently it has 230 million, with the poorer Muslim population growing (presently 53.5%). Estimates are that by 2050 the population will be 360 million. Nigeria’s GDP per capita was $1,621 (in the latest tables), with an average monthly income of $190–$355. The most densely populated nation is Macau, with 62,182 people per square mile, though fewer than a million people in total and about a 1.5% yearly increase; its birth rate is around 1.08, life expectancy 83, GDP per capita $78,962, and income per capita $66,835. Sydney, Australia has a population density of around 20,000 per square mile, grows around 2% yearly, and has a GDP per capita of $86,500 and an income per capita of $82,000.
My subjective opinion from the empirical data: 1) the people of Sydney are smarter than the people of Macau; 2) some countries with powerful armies (Russia) are looking to greener pastures. Iran’s birth rate is average (2.1), but its GDP per capita is around $5,500, below the world’s income per capita of $13,138. However, 99.4% of Iranians are Muslim (compared to 21% in the 1920s), in a self-contained local economy (local products and currency). Examples of this economic model are present in other (non-Muslim) countries as well, for example Cuba ($7,252 GDP per capita, not benefiting from world trade), with no relevant electrical grid (eye-watering indeed); 3) Sy Borg is more wrong than right, since there is nothing new in the human saga. AI is mostly deployed (by corporations) in countries participating in world trade, benefiting their citizens with increased understanding. “Understanding” is the original meaning (from the Latin root of intelligence). A new divide... (half right?)
#469087
Pattern-chaser wrote: October 21st, 2024, 9:45 am
Sy Borg wrote: October 21st, 2024, 5:25 am Most here are dubious that AI will progress dramatically from what it currently is. They presumably figure that a world-ending catastrophe will occur first. Certainly, systems are being stretched to breaking point - be it record national deficits, overpopulation, low birth rates, extinctions, desertification and loss of arable land, loss of clean water, melting ice caps, increasing social divisions, social isolation, threat of pandemics, geopolitical tension between nuclear states, and even tension between genders. Or AI itself.
Are we all aware of how much power is being used, across the globe, to *power* AI, and the computers and servers that run it? An example:
If you were to fully turn Google’s search engine into something like ChatGPT, and everyone used it that way—so you would have nine billion chatbot interactions instead of nine billion regular searches per day—then the energy use of Google would spike. Google would need as much power as Ireland just to run its search engine.
Whatever else AI is, it is computer-intensive. It requires many fast computers and servers just to do what it does. And the resulting power consumption can make your eyes water...
And the power consumption need is apt to increase geometrically, possibly to the point of making a significant impact on our planet's resources.
Signature Addition: "Ad hominem attacks will destroy a good forum."
#469090
Sy Borg wrote: October 21st, 2024, 5:25 am The key to this issue, as with all things (though frequently taken for granted), is time.

Most here are dubious that AI will progress dramatically from what it currently is. They presumably figure that a world-ending catastrophe will occur first. Certainly, systems are being stretched to breaking point - be it record national deficits, overpopulation, low birth rates, extinctions, desertification and loss of arable land, loss of clean water, melting ice caps, increasing social divisions, social isolation, threat of pandemics, geopolitical tension between nuclear states, and even tension between genders. Or AI itself.

However, in wars 80% of people who die are civilians, and generally not the top brass who drive such programs, so even a major conflict would be unlikely to halt development. Thus, progress in AI development will continue (and it usually accelerates in warfare due to competitive pressures). It would take a truly world-ending event, like a massive comet impact, to destroy all industrial activity and stop AI progress.

It's hardly a stretch to imagine human civilisations persisting for thousands of years - a blink of the eye in evolutionary time, and no more distant from us than ancient Egypt. In that time, AI will become unrecognisable. I suspect that AI will be beyond our imaginations even in a century's time, given that AI can be used to create and train better AI. There's an exponential aspect to its development.

The nature of intelligence itself is going to change, and perhaps even the concept of sapience.
I'd say this is how things are likely to play out. Yes, there will continue to be struggles, disaster and death, and as always, it will be the poor and powerless who do most of the suffering. But failing a complete trashing of the joint, humans are likely to survive and so will AI. If this is so, then, over deep time, the prospects for AI are probably unimaginable by us today. As another tool, AI promises a vast increase in our power. It could conceivably become an existential threat to us at some point, but I don't think that needs to be so. We are certainly nowhere near that point at present. But are we playing with fire? Well, humans, in their curiosity and search for new tools, have always liked to play with fire. Playing with fire got us burnt from time to time, but it also resulted in humans in their modern form. There is no reason to think humans will stop playing with fire literally or metaphorically. And no reason to think that the development of AI will hit a wall.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469095
Lagayscienza wrote: October 21st, 2024, 10:12 pm But are we playing with fire? Well, humans, in their curiosity and search for new tools, have always liked to play with fire. Playing with fire got us burnt from time to time, but it also resulted in humans in their modern form. There is no reason to think humans will stop playing with fire literally or metaphorically. And no reason to think that the development of AI will hit a wall.
Yep, we're playing with fire, and AI doesn't need to be sentient to create many problems. However, let's say the US decided, "Nope. It's too dangerous. We're stopping". China would say "Yippee!" (or rather, Yí-Píng!) and then ram home their competitive advantage to become globally dominant with relative ease.

The dynamics are not wildly unlike the nuclear arms race - an ultra-high-stakes game of chicken, a game of chance, possibly winner-takes-all.
#469108
All highly intelligent species behave appallingly. Less intelligent ones also behave appallingly, just that they don't know it. Life itself is brutal, the ouroboros etc etc. I used to want to fight against it all. Now I accept the reality. I'd rather accept the world, despite its flaws, than continue to love an ideal that cannot be realised and end up being disillusioned, and seeing that love turn to hate.

Whatever, I don't think AI will wipe humanity out, rather it will more likely help at least a percentage of people better survive.
#469110
Pattern-chaser wrote: October 21st, 2024, 9:45 am Whatever else AI is, it is computer-intensive. It requires many fast computers and servers just to do what it does. And the resulting power consumption can make your eyes water...
Mo_reese wrote: October 21st, 2024, 8:40 pm And the power consumption need is apt to increase geometrically, possibly to the point of making a significant impact on our planet's resources.
I have seen estimates that the internet, as a whole, consumes about 15% of all human energy use. AI is a rapidly-increasing part of that.
Favorite Philosopher: Cratylus Location: England
#469122
Sy Borg wrote: October 22nd, 2024, 7:26 am All highly intelligent species behave appallingly. Less intelligent ones also behave appallingly, just that they don't know it. Life itself is brutal, the ouroboros etc etc. I used to want to fight against it all. Now I accept the reality. I'd rather accept the world, despite its flaws, than continue to love an ideal that cannot be realised and end up being disillusioned, and seeing that love turn to hate.

Whatever, I don't think AI will wipe humanity out, rather it will more likely help at least a percentage of people better survive.
True. What cannot be changed must be accepted. People underestimate just how much of human nature and behavior is determined. Humans are part of nature, a natural phenomenon like the rest of life, an outgrowth of the planet. If we want to change the way we interact with each other and the rest of life, then we would need to change ourselves. We would need to change the very determinants of our behavior. We could conceivably change our brains in such a way that they did not produce too much competitive, violent, destructive behavior. And we could try to free ourselves from the ouroboros by becoming technological autotrophs. But none of this will be happening any time soon, if at all. Unless/until it does, the best we can do is try consciously to give more voice to the better angels of our nature, to our cooperative, caring side. That side, too, is part of nature which we must accept. Hopefully, we can foster it at the expense of our darker side. But progress in that regard has been, at best, halting and piecemeal.
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469146
Lagayscienza wrote: October 22nd, 2024, 11:31 pm
People underestimate just how much of human nature and behavior is determined. Humans are part of nature, a natural phenomenon like the rest of life, an outgrowth of the planet. If we want to change the way we interact with each other and the rest of life, then we would need to change ourselves. We would need to change the very determinants of our behavior. We could conceivably change our brains in such a way that they did not produce too much competitive, violent, destructive behavior. And we could try to free ourselves from the ouroboros by becoming technological autotrophs. But none of this will be happening any time soon, if at all. Unless/until it does, the best we can do is try consciously to give more voice to the better angels of our nature, to our cooperative, caring side. That side, too, is part of nature which we must accept. Hopefully, we can foster it at the expense of our darker side. But progress in that regard has been, at best, halting and piecemeal.
Surely, absolutely everything is natural in the sense that nature is the universe, unless one thought that there is or might be another, supernatural reality beyond the universe as we know it, which I believe is not your position. However, is the International Space Station natural? Is Musk’s Cybercab? Calling them “natural” doesn’t say much about them, as these things, among the many produced by human culture, are essentially different from any other product of any other species in nature, and it is precisely that difference that allows us to use a term invented for the arte factum, for the things made by human ingenuity: “artificial”. The label is applied to things such as intelligence, fibers, diamonds or rain to distinguish them from those not produced by humans. Artificial intelligence, artificial fibers, artificial diamonds or artificial rain all point to man-made things, as opposed to natural intelligence, natural fibers, natural diamonds and plain old natural rain.
The existence of all these “artificial” things already says something about the determinants of human behavior. While our brains have not changed anatomically in any significant way for millennia, we have certainly changed ourselves from our hunter-gatherer ancestors; in other words, we have been able to use those determinants of behavior to transform our environment and our own societies, so that, unlike the rest of nature, our destiny is not predetermined by our innate faculties; they actually allow us to do the opposite. There’s no need to wait for a new artifact to modify our brains (minds) and behavior; we have already been doing it for as long as there has been a human species: arrows, fire, agriculture, art, script, books, TV and, of course, computers. We have been making better and better computers, and with that we will continue modifying our minds and behavior, but how far can we go? Again, our destiny is not predetermined, so we don’t know. We do know, however, that we can make a very strong case that this thing called “artificial intelligence” is not real intelligence at all, not in the sense in which an artificial diamond shares key properties with a natural diamond while still being called a diamond. The current state of our technology, that which we use for modifying our minds and behavior, is not there yet, not in that field.
Favorite Philosopher: Umberto Eco Location: Panama
#469151
Yes, the ISS is natural. It is a structure created by the planet Earth. Like the ISS, a cocoon is created by a life form, in this case a caterpillar. It could not be spontaneously produced by the Earth without the conduit of a caterpillar; likewise, the ISS cannot be created by the Earth without the conduit of humans.

AI is just one more thing that the Earth is doing.

People have an anthropocentric notion that humans and their creations are outside agents that exist on planet Earth, almost as if we are aliens (or divine), as opposed to being structures of the planet like mountains, streams, trees and cocoons.

Anthropocentrism is practical, and the terms "artificial" and "natural" are useful ways of delineating that which is created by humans from that which is not, but it is ontologically incorrect. AI is simply another expression of proto-intelligence by the planet.
#469152
I've been quite impressed with ChatGPT, though it has been shown to make some really bad simple errors, like miscounting the number of repeated letters in words.

But here is a question I asked today.

In the us electoral collage ., the number of electors bears an aproximate correlation to the population of the state. But given that some states are under represented whilst other over represeted please give a simple comparison such as 1 vote in the worst effected state is worth x times less than the most most advantages. and give the names of the two most extreme states

It ignored the spelling problem, and gave a comprehensive and correct answer.

The actual answer is at the end but it also showed its workings.

In the U.S. Electoral College system, the number of electors per state is roughly proportional to the population, but because each state automatically gets two electors for its Senate representation (in addition to electors based on the number of House seats, which are population-based), smaller states are somewhat overrepresented relative to larger ones.

Here’s a simplified comparison:

Wyoming: The smallest state by population, Wyoming has 3 electoral votes for about 580,000 people (as of 2020), meaning roughly 1 electoral vote per 193,000 people.
California: The largest state by population, California has 54 electoral votes for about 39 million people, meaning roughly 1 electoral vote per 720,000 people.
Comparison:
1 vote in Wyoming is worth about 3.73 times more than 1 vote in California.

Thus, Wyoming is the most overrepresented state, while California is the most underrepresented state in terms of the Electoral College.
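For what it's worth, the arithmetic checks out. A quick sketch using the same rounded figures quoted above (so the ratio is approximate):

Code:

# Verify the chatbot's electoral-vote arithmetic using the rounded
# 2020 figures given in its answer.

wy_pop, wy_ev = 580_000, 3
ca_pop, ca_ev = 39_000_000, 54

per_ev_wy = wy_pop / wy_ev    # ~193,000 people per electoral vote
per_ev_ca = ca_pop / ca_ev    # ~722,000 people per electoral vote

print(f"{per_ev_ca / per_ev_wy:.2f}x")   # ~3.74x, matching the ~3.73 above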


I was impressed, but this is not intelligence; this is a specialised language model.