good egg
I think we're in at least partial agreement here, although the points of difference may be more interesting, because that's what challenges our thinking.
I think the underlying problem is deriving Oughts from the Is state of affairs. For reason to get traction on that problem it needs some foundational, quasi-axiomatic justification.
Agree that reason needs something to "get traction on", but I see that something as moral perception or moral intuition, rather than any philosophical axiom about the nature of consciousness.
Yes, you can come up with any foundational moral grounding to reason from, because morality is a concept we created, not something 'out there' to be discovered. I've explained why I think mine is the appropriate one: in a nutshell, it matters how we treat conscious creatures because we are capable of suffering or flourishing, and hence have interests in the 'Is' state of affairs. That stake in the state of affairs is why it matters how we treat conscious beings, and this is what bridges the Is-Ought divide. That's why we have moral duties towards each other, but not towards rocks or toasters. When the universe was only rocks and gases interacting according to physics, the concept of morality was irrelevant. If conscious life hadn't evolved, morality would have remained irrelevant.
My problem with your grounding is, as I said, that our species' particular moral intuitions are a result of our species' evolutionary happenstance. (If lions had become the species smart enough to conceptualise right and wrong, their moral intuitions would likely be very different, and lion intuitive eternal moral truths would be nothing like ours.) Human evolution has bequeathed us a mixed bag of selfish and social instincts, on the basis of evolutionary utility, not of perceiving moral truths. We generally consider our pro-social instincts to be moral, and the more ancient 'selfish' ones wrong. The resulting human neurobiology was a good fit for small tribal communities: it privileges the welfare of those genetically closer to us, it works well in up-close-and-personal situations, and it values tribalism. Is that a good fit for our modern globalised world of interdependence on strangers? It underlies slavery, for example, and fighting competing tribes for resources, just as it underlies a parent caring for their child and a community caring for their own sick members. Eternal truths can't adjust.
[You might be interested in Haidt et al.'s Moral Foundations Theory for a broad categorisation of the evolution of moral intuitions: https://moralfoundations.org/. Of course environmental influences play a large part too.]
If a Buddhist goes out of his way to avoid stepping on an ant, then I think we can recognise that as a moral act without imputing any form of consciousness to the ant.
Not according to my moral foundation.
And from there we can reason our way through the morality of particular scenarios, and to 'ought' rule-of-thumb principles.
Agree that the principles we hold are reasoned-to, induced from our first- and second-hand experience of perceiving acts as morally wrong.
The question is whether the principles we reach and hold can be incorrect. Is there a reality against which they can be judged as adequate?
Well, the moral foundation is the touchstone for testing rightness and wrongness. It's what you reason from, and check back with once you can evaluate consequences, enabling you to re-think your rule-of-thumb principles in action. Not a mish-mash of sometimes conflicting intuitions which weren't 'designed' for all the sorts of scenarios we encounter in the modern world.
As my foundation is inherently consequentialist, there may be situations where it's the lesser of two evils.
Don't think you need to be a consequentialist for that to occur. It's a feature of rule-following and virtue-seeking types of ethic also.
My view is that any moral theory which attempts to justify Oughts should by its nature ultimately be consequentialist.
But there are some uncomfortable issues with my position. One being that consequentialism requires reliable prediction.
There are bigger issues than that.
Consequentialism, as I understand it, says that it is morally right to execute an innocent man if it will prevent a riot in which N people are likely to be killed, for a sufficiently large value of N.
I suggest that the uncertainty of the prediction isn't the primary reason for rejecting such an ethic. It would be morally wrong to conduct such an execution even if one were magically certain of the outcome.
That's utilitarianism, but yes, my moral foundation doesn't escape such dilemmas. I think this 'the one vs the many' problem is inherent in moral theories, and mine leans towards utilitarianism. I can kind of fudge my answer by saying that if this were a normalised practice, then the overall consequences might well be worse. Overall, for a society to flourish, the assurance and stability provided by certain rights-like proscriptions and prescriptions are justified. But there's no calculable easy answer.
And another is that conscious experience isn't measurable in the way physical stuff is, so when comparing competing goods or harms there is no equation or calibration to rely on. It's weighing competing goods/harms against each other without a weighing machine.
True.
But we now have the outline of an evolutionary account of human 'moral intuitions'. If our moral consensus derives from our species' evolution, honed by environmental circumstances, we're reasoning and finding consensus from a foundation of evolutionary happenstance. (As it happens we're a social species who form bonds and care about others, in particular ways relating to our tribal past and resulting neurobiology, which are a different kettle of fish to eternal moral truths).
Evolution is irrelevant. I think you're contradicting your earlier statement that the foundation is the nature of consciousness.
No, there's a difference between saying that we are worthy of moral consideration because we evolved to experience flourishing and suffering, and saying that we evolved to intuit eternal moral truths.
If in some sci-fi future you were to meet an android that had been constructed rather than having evolved, I suggest that your moral duties to such a creature would not be affected by that lack of evolutionary process.
It would depend on whether the android had qualitative conscious experience. If it didn't, there would be no reason for me to treat it kindly, any more than my toaster. If it did have conscious experience, it would be due moral consideration commensurate with the nature of android consciousness. (How we could know if it's conscious is another question.)
It's the having of qualitative conscious experience which gives a being a stake in the state of affairs, and makes it matter how I treat a person, an ant, an android. Harris calls it 'The Wellbeing of Conscious Creatures', which is a pretty nifty summary. And Goldberg has some interesting thoughts on morality and mattering. I don't completely align with them, but this is where modern moral philosophy should be heading imo.
Eternal moral truths, existing out there somewhere for us to distantly perceive, are a better fit with a perfectly good, all-knowing god as their source, one which can never be wrong and supersedes our fallible mortal concerns.
Not a valid argument. You haven't ruled out the possibility of eternal moral truths without a deity. This is guilt by association, smearing the concept of objective morality with what you perceive to be the faults of religion.
If you think physical truths like gravity can exist without god, then you need a good argument why moral truths cannot equally do so.
Not a valid argument. Right and Wrong and Oughts are human concepts; they don't exist 'out there' to be objectively observed and measured. Which means some other type of justification has to be given for them. Mine is that qualitative experience brings meaning and mattering into the world, and this stake in the state of affairs is the appropriate basis for Oughts. Yours is that we intuit eternal moral truths, so they must exist, despite there now being an evolutionary explanation for those intuitions.