@Gertie, I think we have at least partial agreement here, although the points of difference may be more interesting, because that's what challenges our thinking.
I think the underlying problem is deriving Oughts from the Is of the state of affairs. For reason to get traction on that problem, it needs some foundational, quasi-axiomatic justification.
Agree that reason needs something to "get traction on", but I see that something as moral perception or moral intuition, rather than any philosophical axiom about the nature of consciousness.
My claim is that it is the nature of being an experiencing subject which is the appropriate grounding for morality.
If a Buddhist goes out of his way to avoid stepping on an ant, then I think we can recognise that as a moral act without imputing any form of consciousness to the ant.
And from there we can reason our way through the morality of particular scenarios, and to rule-of-thumb "ought" principles.
Agree that the principles we hold are reasoned-to, induced from our first- and second-hand experience of perceiving acts as morally wrong.
The question is whether the principles we reach and hold can be incorrect. Is there a reality against which they can be judged adequate or inadequate?
As my foundation is inherently consequentialist, there may be situations where the right act is merely the lesser of two evils.
Don't think you need to be a consequentialist for that to occur. It's a feature of rule-following and virtue-seeking types of ethic also.
But there are some uncomfortable issues with my position. One being that consequentialism requires reliable prediction.
There are bigger issues than that.
Consequentialism, as I understand it, says that it is morally right to execute an innocent man if it will prevent a riot in which N people are likely to be killed, for a sufficiently large value of N.
I suggest that the uncertainty of the prediction isn't the primary reason for rejecting such an ethic. It would be morally wrong to conduct such an execution even if one were magically certain of the outcome.
And another is that conscious experience isn't measurable in the way physical stuff is, so when comparing competing goods or harms there is no equation or calibration to rely on. It's weighing competing goods and harms against each other without a weighing machine.
True.
But we now have the outline of an evolutionary account of human 'moral intuitions'. If our moral consensus derives from our species' evolution, honed by environmental circumstances, we're reasoning and finding consensus from a foundation of evolutionary happenstance. (As it happens we're a social species who form bonds and care about others, in particular ways relating to our tribal past and resulting neurobiology, which are a different kettle of fish to eternal moral truths).
Evolution is irrelevant here. I think you're contradicting your earlier statement that the foundation of morality is the nature of consciousness, not our evolutionary history.
If in some sci-fi future you were to meet an android that had been constructed rather than having evolved, I suggest that your moral duties to such a creature would not be affected by that lack of evolutionary process.
Eternal moral truths which exist out there somewhere, which we can distantly perceive, are a better fit with a perfectly good, all-knowing god as their source, who can never be wrong and supersedes our fallible mortal concerns.
That's not a valid argument. You haven't ruled out the possibility of eternal moral truths without a deity. This is guilt by association, smearing the concept of objective morality with what you perceive to be the faults of religion.
If you think physical truths like gravity can exist without god, then you need a good argument why moral truths cannot equally do so.