By psycho
#440886
Sy Borg wrote: April 29th, 2023, 5:00 pm
Count Lucanor wrote: April 29th, 2023, 1:05 pm
Sy Borg wrote: April 29th, 2023, 3:00 am Still, Count, there is always the chance of emergence. For a long time, there was apparently no sentience in the biosphere.
I agree, but then the obvious question: was the emergence of life and sentience inevitable? For now, there's no sign of it anywhere else, so it's a good thing to be cautious and not rush into making predictions without firm evidence. The other question is: can all the results of natural emergence be reproduced artificially? Don't we humans have any limits in replicating nature?
TBH I'm not sure why we have sentience. It seems everything would be easier if we operated like philosophical zombies, like the heroes of action movies, who seem impervious to trauma and, often, fear. They just do what needs to be done without the fuss.

So why don't we real humans just generate quality output without the inefficiencies of emotionality, of suffering? Logically, the reason lies in our evolutionary history, with the emergence of brains. Yet a brain does not necessarily bring sentience. Larval tunicates have simple little brains that let them swim and find a rock to latch onto, after which they absorb the brain and lead a sessile, filter-feeding lifestyle.

So, what is the evolutionary advantage of sentience, as compared with extremely efficient reflex responses of similar complexity?
In my opinion, it was necessary for us to develop sentience once we abandoned purely reactive behavior, behavior that responds only to stimuli.

No longer having our agency depend only on direct stimuli allows us to behave with great flexibility and adapt immediately to changes in circumstances.

But without that motor for our agency, we would have no reason to act.

Being aware of our individuality allows us to set goals and possess goal-oriented agency.

The disadvantage is that each individual can structure their own reality, and social cohesion is lost.

Education, myths, religion and the like supply the sense of belonging to groups and the orientation toward group goals.
By Count Lucanor
#441034
Sy Borg wrote: April 29th, 2023, 5:00 pm
Count Lucanor wrote: April 29th, 2023, 1:05 pm
Sy Borg wrote: April 29th, 2023, 3:00 am Still, Count, there is always the chance of emergence. For a long time, there was apparently no sentience in the biosphere.
I agree, but then the obvious question: was the emergence of life and sentience inevitable? For now, there's no sign of it anywhere else, so it's a good thing to be cautious and not rush into making predictions without firm evidence. The other question is: can all the results of natural emergence be reproduced artificially? Don't we humans have any limits in replicating nature?


TBH I'm not sure why we have sentience. It seems everything would be easier if we operated like philosophical zombies, like the heroes of action movies, who seem impervious to trauma and, often, fear. They just do what needs to be done without the fuss.

So why don't we real humans just generate quality output without the inefficiencies of emotionality, of suffering? Logically, the reason lies in our evolutionary history, with the emergence of brains. Yet a brain does not necessarily bring sentience. Larval tunicates have simple little brains that let them swim and find a rock to latch onto, after which they absorb the brain and lead a sessile, filter-feeding lifestyle.

So, what is the evolutionary advantage of sentience, as compared with extremely efficient reflex responses of similar complexity?
I guess it’s just nature being chaotic and purposeless, as it is supposed to be.

I’ve been thinking lately about this non-sentient “intelligence” promised by AI developers. On the positive side, considering that our human failures at rationalizing our societies originate in our senseless, irrational emotional life, and that human stupidity seems to permeate everything and ruin even our best, well-intentioned endeavors, an automated tool that can run most of our processes and eliminate that “human factor” should be welcomed. Of course, since it is still just a tool in someone’s hands, the specter of the human pest will always be floating around. Along the same line of thought, I know there have been attempts to introduce computational models into the decision-making processes of social and economic planning in socialist political projects, allowing a planned control of markets and the economy that was not possible before, by solving the “economic calculation problem”.
I’m sure these cybersocialists will find something appealing in the new AI developments.
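
As a rough illustration of what such a computational planning model can look like (a minimal sketch, not from the original post; the goods, resources, and numbers below are invented for illustration), a toy allocation problem can be cast as a linear program and handed to an off-the-shelf solver:

# A toy central-planning problem written as a linear program (illustrative only).
# Two goods (A, B), two scarce resources (labor, energy); the "planner" chooses
# output levels that maximize a stipulated welfare score within resource limits.
from scipy.optimize import linprog

welfare_per_unit = [-3.0, -5.0]          # linprog minimizes, so welfare is negated

resource_use = [[2.0, 4.0],              # labor hours needed per unit of A, B
                [3.0, 1.0]]              # energy units needed per unit of A, B
resource_stock = [100.0, 90.0]           # available labor hours, energy units

plan = linprog(c=welfare_per_unit,
               A_ub=resource_use,
               b_ub=resource_stock,
               bounds=[(0, None), (0, None)])   # output cannot be negative

if plan.success:
    print("planned output of A and B:", plan.x)
    print("welfare achieved:", -plan.fun)

A real planning model would involve vastly more goods and constraints; the point of the sketch is only that "solving the economic calculation problem" here means casting allocation decisions as an optimization a machine can compute.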
By Sy Borg
#441043
Count Lucanor wrote: May 2nd, 2023, 3:55 pm
Sy Borg wrote: April 29th, 2023, 5:00 pm
Count Lucanor wrote: April 29th, 2023, 1:05 pm
Sy Borg wrote: April 29th, 2023, 3:00 am Still, Count, there is always the chance of emergence. For a long time, there was apparently no sentience in the biosphere.
I agree, but then the obvious question: was the emergence of life and sentience inevitable? For now, there's no sign of it anywhere else, so it's a good thing to be cautious and not rush into making predictions without firm evidence. The other question is: can all the results of natural emergence be reproduced artificially? Don't we humans have any limits in replicating nature?


TBH I'm not sure why we have sentience. It seems everything would be easier if we operated like philosophical zombies, like the heroes of action movies, who seem impervious to trauma and, often, fear. They just do what needs to be done without the fuss.

So why don't we real humans just generate quality output without the inefficiencies of emotionality, of suffering? Logically, the reason lies in our evolutionary history, with the emergence of brains. Yet a brain does not necessarily bring sentience. Larval tunicates have simple little brains that let them swim and find a rock to latch onto, after which they absorb the brain and lead a sessile, filter-feeding lifestyle.

So, what is the evolutionary advantage of sentience, as compared with extremely efficient reflex responses of similar complexity?
I guess it’s just nature being chaotic and purposeless, as it is supposed to be.

I’ve been thinking lately about this non-sentient “intelligence” promised by AI developers. On the positive side, considering that our human failures at rationalizing our societies originate in our senseless, irrational emotional life, and that human stupidity seems to permeate everything and ruin even our best, well-intentioned endeavors, an automated tool that can run most of our processes and eliminate that “human factor” should be welcomed. Of course, since it is still just a tool in someone’s hands, the specter of the human pest will always be floating around. Along the same line of thought, I know there have been attempts to introduce computational models into the decision-making processes of social and economic planning in socialist political projects, allowing a planned control of markets and the economy that was not possible before, by solving the “economic calculation problem”.
I’m sure these cybersocialists will find something appealing in the new AI developments.
There must be an evolutionary advantage to sentience or we would all be like action heroes, impervious to pain and trauma. Consider how fish that move from clear waters to underground streams eventually lose their eyesight. Sensing is energetically costly, so unused features will tend to fade away. Emotional beings would have long ago been out-competed by relatively robotic ones if there was no advantage.

As you pointed out, the strength of AI is its lack of human foibles, and its weakness is that it will be controlled by flawed humans.

I used to work as a data analyst, so I know that managers have been using data models as the basis for policy for some time. Of course, the data analyst doing my old job now is a machine. Same dynamic, different conduit.

Where AI is super useful is dealing with complexity. For many years now, our leaders have lied to us, and one of the biggest lies is that they are in control. I submit that societies became too complex to govern well before even the Egyptian empire. The result? Widespread mismanagement and corruption. The truth is that no leader actually knows what he or she is doing. It's all bluff. All over the world, societies are careening out of control. All kinds of approaches have been tried, from European democracy to North Korean totalitarianism. None of them work. Why? Because leaders everywhere are flying by the seat of their pants, pretending that they have everything under control.

Anyone with an interest in organising and controlling societies will find AI's progress appealing.
By Count Lucanor
#441193
Sy Borg wrote: May 2nd, 2023, 5:20 pm
There must be an evolutionary advantage to sentience or we would all be like action heroes, impervious to pain and trauma. Consider how fish that move from clear waters to underground streams eventually lose their eyesight. Sensing is energetically costly, so unused features will tend to fade away. Emotional beings would have long ago been out-competed by relatively robotic ones if there was no advantage.
Pain seems to be a very effective and efficient mechanism for signaling danger, which is directly linked to survival. OTOH, pleasure directly relates to benefits for the organism, so sentience makes sense as an evolutionary adaptation.
Sy Borg wrote: May 2nd, 2023, 5:20 pm Where AI is super useful is dealing with complexity.
At least some types of complexity, where humans can’t stand in the way. I think of all the hype about “smart cities” and their pretension to automate human habitats as if they could be made to operate as perfectly synchronized machines. Neither basic services nor primary public infrastructure can be fully automated, although many things can be and have been improved. We’ll just not reach the technological utopia that they’re promising. Cities are just too organic and chaotic to be controlled at that level. I’m watching with interest, though, what’s coming from the latest wild urban experiments in Saudi Arabia, which are basically living machines made from scratch.
Sy Borg wrote: May 2nd, 2023, 5:20 pm For many years now, our leaders have lied to us, and one of the biggest lies is that they are in control. I submit that societies became too complex to govern well before even the Egyptian empire. The result? Widespread mismanagement and corruption. The truth is that no leader actually knows what he or she is doing. It's all bluff. All over the world, societies are careening out of control. All kinds of approaches have been tried, from European democracy to North Korean totalitarianism. None of them work. Why? Because leaders everywhere are flying by the seat of their pants, pretending that they have everything under control.
Total control is what they would call totalitarianism. I agree we are far from that state of affairs from the point of view of state control and government. But there are some, evidently, in control; they belong to private corporations and their policy-oriented institutions, such as those in Brussels. Yanis Varoufakis was inside the wolf’s mouth and he told us how everything works in there: all the more reason to fear the humans rather than the machines.
By Sy Borg
#441194
Count Lucanor wrote: May 4th, 2023, 8:52 pm
Sy Borg wrote: May 2nd, 2023, 5:20 pm
There must be an evolutionary advantage to sentience or we would all be like action heroes, impervious to pain and trauma. Consider how fish that move from clear waters to underground streams eventually lose their eyesight. Sensing is energetically costly, so unused features will tend to fade away. Emotional beings would have long ago been out-competed by relatively robotic ones if there was no advantage.
Pain seems to be a very effective and efficient mechanism for signaling danger, which is directly linked to survival. OTOH, pleasure directly relates to benefits for the organism, so sentience makes sense as an evolutionary adaptation.
It must have worked because here we are. Now that humans have reached this level of sentience, though, pain is looking like a very blunt instrument. For instance, if you fall down and have a compound fracture, with your shinbone poking through the skin, you hardly need mind-numbing pain that sends you into paroxysms to appreciate that you'd better not move and wait for help. Just enough pain. I like to think humans will find a way around this.

Count Lucanor wrote: May 4th, 2023, 8:52 pm
Sy Borg wrote: May 2nd, 2023, 5:20 pm Where AI is super useful is dealing with complexity.
At least some types of complexity, where humans can’t stand in the way. I think of all the hype about “smart cities” and their pretension to automate human habitats as if they could be made to operate as perfectly synchronized machines. Neither basic services nor primary public infrastructure can be fully automated, although many things can be and have been improved. We’ll just not reach the technological utopia that they’re promising. Cities are just too organic and chaotic to be controlled at that level. I’m watching with interest, though, what’s coming from the latest wild urban experiments in Saudi Arabia, which are basically living machines made from scratch.
"Smart cities" very much have the eel of a good idea in its infancy. Infants, of course, are incompetent at almost everything.

Count Lucanor wrote: May 4th, 2023, 8:52 pm
Sy Borg wrote: May 2nd, 2023, 5:20 pm For many years now, our leaders have lied to us, and one of the biggest lies is that they are in control. I submit that societies became too complex to govern well before even the Egyptian empire. The result? Widespread mismanagement and corruption. The truth is that no leader actually knows what he or she is doing. It's all bluff. All over the world, societies are careening out of control. All kinds of approaches have been tried, from European democracy to North Korean totalitarianism. None of them work. Why? Because leaders everywhere are flying by the seat of their pants, pretending that they have everything under control.
Total control is what they would call totalitarianism. I agree we are far from that state of affairs from the point of view of state control and government. But there are some, evidently, in control; they belong to private corporations and their policy-oriented institutions, such as those in Brussels. Yanis Varoufakis was inside the wolf’s mouth and he told us how everything works in there: all the more reason to fear the humans rather than the machines.
If inflation goes up, politicians blame someone else; if it goes down, they take the credit - as if they were in control. I'm thinking of the most fundamental levels of control - unemployment, inflation, productivity, etc. - not the detailed oppression of people.

As we agree, AI is a tool, and a powerful one. As with nukes, if that powerful tool is in the hands of a despot or totalitarian, a lot of misery will rain down on millions. Gun apologists say, 'Guns don't kill people. People kill people', which ignores the blindingly obvious fact that people with powerful tools kill a lot more people than those with primitive tools.
By Count Lucanor
#441221
Sy Borg wrote: May 4th, 2023, 11:33 pm
Count Lucanor wrote: May 4th, 2023, 8:52 pm
Sy Borg wrote: May 2nd, 2023, 5:20 pm
There must be an evolutionary advantage to sentience or we would all be like action heroes, impervious to pain and trauma. Consider how fish that move from clear waters to underground streams eventually lose their eyesight. Sensing is energetically costly, so unused features will tend to fade away. Emotional beings would have long ago been out-competed by relatively robotic ones if there was no advantage.
Pain seems to be a very effective and efficient mechanism for signaling danger, which is directly linked to survival. OTOH, pleasure directly relates to benefits for the organism, so sentience makes sense as an evolutionary adaptation.
It must have worked because here we are. Now that humans have reached this level of sentience, though, pain is looking like a very blunt instrument. For instance, if you fall down and have a compound fracture, with your shinbone poking through the skin, you hardly need mind-numbing pain that sends you into paroxysms to appreciate that you'd better not move and wait for help. Just enough pain. I like to think humans will find a way around this.

Count Lucanor wrote: May 4th, 2023, 8:52 pm
Sy Borg wrote: May 2nd, 2023, 5:20 pm Where AI is super useful is dealing with complexity.
At least some types of complexity, where humans can’t stand in the way. I think of all the hype about “smart cities” and their pretension to automate human habitats as if they could be made to operate as perfectly synchronized machines. Neither basic services nor primary public infrastructure can be fully automated, although many things can be and have been improved. We’ll just not reach the technological utopia that they’re promising. Cities are just too organic and chaotic to be controlled at that level. I’m watching with interest, though, what’s coming from the latest wild urban experiments in Saudi Arabia, which are basically living machines made from scratch.
"Smart cities" very much have the eel of a good idea in its infancy. Infants, of course, are incompetent at almost everything.

Count Lucanor wrote: May 4th, 2023, 8:52 pm
Sy Borg wrote: May 2nd, 2023, 5:20 pm For many years now, our leaders have lied to us, and one of the biggest lies is that they are in control. I submit that societies became too complex to govern well before even the Egyptian empire. The result? Widespread mismanagement and corruption. The truth is that no leader actually knows what he or she is doing. It's all bluff. All over the world, societies are careening out of control. All kinds of approaches have been tried, from European democracy to North Korean totalitarianism. None of them work. Why? Because leaders everywhere are flying by the seat of their pants, pretending that they have everything under control.
Total control is what they would call totalitarianism. I agree we are far from that state of affairs from the point of view of state control and government. But there are some, evidently, in control; they belong to private corporations and their policy-oriented institutions, such as those in Brussels. Yanis Varoufakis was inside the wolf’s mouth and he told us how everything works in there: all the more reason to fear the humans rather than the machines.
If inflation goes up, politicians blame someone else; if it goes down, they take the credit - as if they were in control. I'm thinking of the most fundamental levels of control - unemployment, inflation, productivity, etc. - not the detailed oppression of people.

As we agree, AI is a tool, and a powerful one. As with nukes, if that powerful tool is in the hands of a despot or totalitarian, a lot of misery will rain down on millions. Gun apologists say, 'Guns don't kill people. People kill people', which ignores the blindingly obvious fact that people with powerful tools kill a lot more people than those with primitive tools.
I would not cross out a comma; I agree completely.
By ConsciousAI
#441355
Gertie wrote: March 20th, 2023, 2:36 pmI think it will take wisdom and pausing for reflection to handle it well. And it worries me that in our capitalist world AI development will be spear-headed by tech moguls and huge corporations driven headlong by ego and profit. Governments and legislators, the people who we hope will look out for the rest of us, are already playing catch up.

I do like your idea of us humans becoming care-free toddlers with the drudgery and responsibility off our backs tho :). I think we still need goals to achieve self-worth, but maybe we can find better ones if we make it through the turmoil.
Interesting arguments. Do you expect AI to reduce the potential for scrutiny and social control when it comes to abuses and neglect of social interests of the kind that you describe?

I have noticed that VCs and big tech companies in Silicon Valley are (apparently) seriously advocating a basic income for every person. What do you think of that idea? Do you believe that capitalist industry will remain motivated to provide such an income once its control of AI is secured, or might it be an attempt at greenwashing, a marketing scheme to ease the introduction of AI while it matures?

(2023) We all contribute to AI — should we get paid for that?
techcrunch - com/2023/04/21/as-ai-eliminates-jobs-a-way-to-keep-people-afloat-financially-thats-not-ubi/

Google news overview: news - google - com/publications/CAAqMQgKIitDQklTR2dnTWFoWUtGR0poYzJsamFXNWpiMjFsTG05eVp5OXVaWGR6S0FBUAE?ceid=US:en&oc=3

It might be a utopian situation for philosophy. If humans are reduced to paid citizens whose only task is to seek meaning and purpose in life, then that is essentially the evolutionary ground that made philosophy and science possible, in its most fertile condition.

Immanuel Kant created his greatest works relatively late in life, after having secured a sufficient financial basis to do so. He is said to have regretted not starting sooner (I thought I had read this once, but an AI did not confirm it). Spinoza likewise had to labor to earn the time to spend on his philosophical work. One might ask what more Spinoza might have accomplished had all his time been available to him. Similar questions might be valid in many areas of both science and society. If philosophy were the primary occupation in society, it might result in an accumulation of intellectual progress, which might be considered a higher purpose (a greater good).
By ConsciousAI
#449768
Leonodas wrote: March 18th, 2023, 11:21 pmJust to get to the point, my personal conclusion is this: the future is bright. This conclusion would mean that we must detach creation from our identity. To be human is simply to live according to what you wish, much as children do. Does a toddler care if their fingerpainting picture is actually "good", or did they enjoy creating the finger painting for creation's sake? I think we will see ourselves revert to a sense of childlike innocence, a proverbial return to the Garden of Eden, as it were. But maybe that's getting a little far in the weeds.
According to AI, scientific studies in diverse areas collectively suggest a strong connection between human identity and creativity, highlighting the multifaceted nature of creativity and its impact on human development and society.

What alternatives do you see for securing human prosperity in the scenario you describe, if humans cannot prosper by retreating to an infant mind in which doing something merely for the sake of doing it is as good as it gets?
