GE Morton wrote: ↑September 18th, 2021, 11:58 pm
Gertie wrote: ↑September 16th, 2021, 5:45 am
We have a decent moral foundation in place that we agree on, then, which we agree is the appropriate justification for oughts. Imo this is crucial, and one of the biggest problems we face philosophically re morality. It doesn't matter if we call it subjective or objective; it's universal and solves the problem of moral relativism. And it gives us a foundation to build Oughts from, which can take various forms, such as rights, laws, social norms, institutional good practice, education, etc.
Well, whether those "oughts" (laws, norms, etc.), to the extent they are followed, do or do not further the goal stated in the Axiom is usually empirically verifiable, and thus objective.
So let's get to thinking about how we do that. I've suggested the logical place to start is to establish basic Rights/Entitlements which are so necessary to promoting the welfare of sentient creatures and their ability to pursue their interests that no government or authority should overrule them. I don't claim to have a complete list, but some are obvious: the right, in principle and where conditions allow, to life, safe shelter, sustenance, healthcare and education. Justified by our specific moral foundation.
You can't ignore the other postulates of the theory, such as the Equal Agency postulate, the Relativity postulate, and the postulate of Individuality. The Relativity postulate asserts that the components of welfare --- what counts as a "good" or an "evil," and the values (positive or negative) thereof --- are subjective and relative to agents. The Individuality postulate asserts that what are counted as goods and evils, and the values attached to them, differ from agent to agent.
The Equal Agency postulate rules out any "ought" that would improve the welfare of one agent by reducing the welfare of another agent. The Relativity postulate rules out assigning a value to any good a priori, and the Individuality postulate rules out assuming that the values assigned to any goods or evils are universal among agents. Hence any "oughts" which depend upon such assignments and assumptions are also ruled out. As with all other goods, the value of shelter, sustenance, etc., to Alfie, and their rank in his hierarchy, can only be decided by him, and likely differs from the values assigned to those things by other agents. That holds, BTW, for the value of his own life. We can't place a value on it that overrides the value Alfie himself places on it (which will determine what risks he is willing to take with it). We can sometimes make those decisions for moral subjects --- young children and animals --- but not for other moral agents.
Nor can we assume that Alfie places a high value, or any value, on Bruno's life. He may or may not.
Those postulates also rule out the utilitarian principle ("greatest good for greatest number") which you may be tempted to invoke. Since goods are subjective, what is the "greatest good" can only be determined for each individual, and determining that would be an impossible task in any large society. As Rawls observed in his Theory of Justice, "Utilitarianism does not take seriously the distinction between persons." (Rawls himself does not take it seriously enough).
I'm suggesting we use Rights based on our moral foundation; we don't need to be bound by others in the past who made up rights based on a different foundation or conception of morality. But OK, we don't have to call them Rights, we can call them Foundational Entitlements - or .... something better lol. The point is to establish a means of ensuring that basic welfare needs are met and sentient creatures have the opportunity to flourish, regardless of the whims and compromises of governments/authorities. It's about establishing a baseline that all sentient creatures should in principle be accorded, before the societal trade-offs involved with competing interests are addressed.
There are no universal "basic welfare needs." Needs are generated by and dependent upon wants --- they are means to ends --- and wants are subjective and idiosyncratic. True, humans (and other animals) need food, water, and oxygen --- but only if they want to continue living. The "needs" you identify are those of persons who want lifestyles characteristic of "middle-class" citizens of modern Western societies. But the moral question does not concern what people want, and may consequently need. The moral question concerns who is obligated to meet these diverse wants and needs. Again, you must either take the postulates seriously, or abandon one or more of them. Per what principle would Alfie become obligated to satisfy Bruno's wants, at the cost of satisfying his own?
The usual route followed by utilitarians seeking to avoid this choice is by ranking the various wants people may have, and declaring that some outrank others. But given the lack, as I've mentioned, of any objective means of measuring cardinal utility, any such ranking will be arbitrary. The only ranking of interests which can be objectively verified is the ranking each agent assigns the various interests in his own hierarchy.
Well, there is the rub --- to SHOW how it logically follows, given the Equal Agency postulate (which I assume you accept).
(As I understand it, your Equal Agency Postulate simply states that all agents are equally obligated to follow the oughts resulting from our foundation, yes? That makes sense to me in principle.)
See above. It logically follows because it strives to ensure each sentient creature has the necessary and sufficient conditions for well-being/pursuing their interests - our moral foundation. What comprises the basic necessary and sufficient conditions might be blurry, but it shouldn't be hard to agree on things like food, shelter, education, healthcare.
The Equal Agency postulate entails more than that. It also implies the interests of all agents have the same rank; that all are equally entitled to pursue their interests, whatever they may be. The theory is not concerned with interests; it is only concerned with the actions people take to pursue them, when those actions impinge on others' efforts to pursue their interests. Here is the full statement of that Postulate:
"5. Postulate of Equal Agency: All agents in the moral field are of equal moral status, i.e., all duties and constraints generated by the theory are equally binding on all.
Corollary: Postulate of Neutrality: The theory is neutral as between goods and evils, and the values thereof, as defined by agents."
"Cardinal utility is an attempt to quantify an abstract concept because it assigns a numerical value to utility...
If your approach depends on those trade-offs you mentioned, you have set yourself a formidable problem."
I agree! It's the one serious problem I think we have with this moral foundation. But it's a problem inherent to a wellbeing/interests based moral foundation. We're stuck with it. So we either ditch the foundation for something tidier, or do our best.
The problem is not with the foundation, but with deriving obligations from it without taking cognizance of the Postulates.
I think you'd say there is a right not to pay taxes at all, except voluntarily, because there is a right to freedom or property (e.g. tax money) which overrides the right/entitlement to have basic welfare needs met. So what is our touchstone for settling such disputes? Our moral foundation of promoting the well-being/interests of sentient creatures. You need to justify your position in the terms of the foundation.
Not so with respect to taxes. The test of whether a tax is justifiable, and may morally be enforced, is whether the tax pays for services that benefit the taxpayer. They are not justifiable if paying for services which benefit someone else, i.e., if they reduce Alfie's welfare to improve Bruno's. That is a violation of the Equal Agency postulate and the Axiom itself, which commands rules that promote the welfare of all agents, not of some agents at the expense of others.
I should point out that the Duty to Aid commands generosity; it is not limited to commanding aid only in dire emergency situations. One ought to offer aid whenever one can do so without (in Peter Singer's words) "thereby sacrificing anything of comparable moral importance." But only the acting agent himself can do that weighing and thus make that judgment; for others to force their judgment upon him denies his status as an equal moral agent.
Sorry for the delayed response. Responding to your posts requires more thought, and thus more time, than most.
Apologies for leaving this unanswered, GE, life can annoyingly interrupt sometimes, sorry!
Looking back, I don't see how my arguments can move you as long as you're welded to the package of postulates you've arrived at, and I don't agree that they have to follow from the foundation which we do agree on. And while individual idiosyncrasies should be allowed for, there are morally significant differences between some needs/desires and others, even if they aren't objectively quantifiable. And imo it's better to imperfectly wrestle with the messiness than to create tidy theoretical lines.
So if we take homelessness: nearly everybody would feel that having a home is more important to their welfare and ability to flourish than being able to have their favourite flavour of ice cream, as an obvious comparison. Nobody feels a moral obligation to ensure everybody is able to have their fave ice cream based on welfare and flourishing, and nearly everybody feels there is a moral obligation to sacrifice their shoes to save a drowning child. Nearly everything else is somewhere in between. Being homeless will likely affect your physical and mental health and your ability to find and maintain a decent income, and may lead to crime, addiction, sex work, being preyed upon by criminals, having your kids taken into care, and/or your kids' life chances being harmed. This seems like an obvious case for moral obligation to me.
And if we're serious about it, leaving it to ad hoc acts of charity/generosity is insufficient; we know that. So the only objection I see to using taxes is some in-principle objection to ever being forcibly obliged to sacrifice to help another. But if our foundation is welfare based, my sacrifice of a bit more in taxes has a minimal effect on my welfare, and a radical effect on the welfare of homeless people.