Gertie wrote: ↑September 12th, 2021, 4:00 am
So we've broadly sorted our justification for morality, what it's for in principle - promoting interests/wellbeing in social settings. We've noted the roles different categories of sentient creatures may play (temporarily/permanently) in relation to our foundational justification. And we've acknowledged this moral foundation confers moral consideration on all, and duties/obligations/oughts on some. That's a solid place to start thinking about what sort of oughts will be derived, and how we might codify them.
That about right?
Yes, with the obligations falling upon those capable of understanding them (moral agents). A terminological point: I use "obligation" to refer to both
duties (things one must do --- actions one must take in certain circumstances), and
constraints (things one must not do).
Well, that would not be a "role for Rights," unless you're proposing, as have others here, to re-define that word. The role of rights as classically understood has been to identify what belongs to whom (as determined by the first possession criterion) and forbid others from taking those things. The term carries no implication of a duty of charity --- to see that anyone's "basic welfare needs are met." Nor does it entail an obligation upon anyone to give anyone an "opportunity to flourish and pursue their interests." There may be some other way to derive such duties from the axiom, but it would not be via rights. No one has a "right" to the services of other people or to the products of their labor, whatever their needs may be. Rights impose constraints, but no duties.
I'm suggesting we use Rights based on our moral foundation; we don't need to be bound by others in the past who made up rights based on a different foundation or conception of morality. But OK, we don't have to call them Rights, we can call them Foundational Entitlements - or .... something better lol. The point is to establish a means of ensuring that basic welfare needs are met and sentient creatures have the opportunity to flourish, regardless of the whims and compromises of governments/authorities. It's about establishing a baseline all sentient creatures should in principle be accorded, before the societal trade-offs involved with competing interests are addressed.
I think this logically follows from our foundation . . .
Well, there is the rub --- to SHOW how it logically follows, given the Equal Agency postulate (which I assume you accept).
We know there will inevitably be trade-offs because of the nature of being an experiencing subject with individual interests. And there will be difficulties quantifying the qualitative nature of interests and weighing them against each other.
That is more than difficult --- it is logically impossible. That is the classical problem of welfare economics --- the lack of a cardinal measure of utility. Here is one brief summary:
"Cardinal utility is an attempt to quantify an abstract concept because it assigns a numerical value to utility. Models that incorporate cardinal utility use the theoretical unit of utility, the util, in the same way that any other measurable quantity is used. For example, a basket of bananas might give a consumer a utility of 10, while a basket of mangoes might give a utility of 20.
"The downside to cardinal utility is that there is no fixed scale to work from. The idea of 10 utils is meaningless in and of itself, and the factors that influence the number might vary widely from one consumer to the next. If another consumer gives bananas a util value of 15, it doesn't necessarily mean that the individual likes bananas 50% more than the first consumer. The implication is that there is no way to compare utility between consumers."
https://www.investopedia.com/ask/answer ... nomics.asp
Here is a more thorough discussion:
https://en.wikipedia.org/wiki/Cardinal_utility
From the above: "During the second half of the 19th century many studies related to this fictional magnitude—utility—were conducted, but the conclusion was always the same: it proved impossible to definitively say whether a good is worth 50, 75, or 125 utils to a person, or to two different people. Moreover, the mere dependence of utility on notions of hedonism led academic circles to be skeptical of this theory."
This problem ensues directly from the subjectivity of values. The value of x can only be defined relative to some valuer, and only measured by observing what that valuer is willing to give up to secure x. There is a "hierarchy of values" attached to every moral agent (and subject); we can discern its structure --- how different things rank within it --- for a given agent by observing his behavior, what x he will give up to secure y. That allows an ordinal ranking of utility, but only within a given agent's hierarchy. To make things more complicated, value hierarchies are volatile. The value P assigns to x today may not be the same tomorrow. The rankings of things within the hierarchy shift over time, new items are added and others dropped, their value becoming zero for that agent (hence discarded items and abandoned property).
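The same point can be put in code. A sketch (agents and rankings invented for illustration): within one agent's hierarchy, "values x more than y" is a well-defined comparison of positions; across agents there is simply no operation to perform.

[code]
# Sketch of a value hierarchy as a purely ordinal ranking (invented items).
# Comparisons are defined only WITHIN one agent's hierarchy.

class Agent:
    def __init__(self, name, ranking):
        self.name = name
        self.ranking = list(ranking)  # best first; positions, not magnitudes

    def values_more(self, x, y):
        """Well defined: x vs. y within THIS agent's hierarchy."""
        return self.ranking.index(x) < self.ranking.index(y)

p = Agent("P", ["shelter", "book", "album"])
q = Agent("Q", ["album", "shelter", "book"])

assert p.values_more("book", "album")      # a fact about P
assert q.values_more("album", "book")      # a fact about Q

# Note there is no values_more(p, "book", q, "album"): nothing in the
# ordinal data licenses a cross-agent comparison.

# Hierarchies are also volatile: tomorrow P may re-rank or drop items.
p.ranking.remove("album")                  # value becomes zero; discarded
p.ranking.insert(1, "bicycle")             # a new item enters the hierarchy
[/code]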
The closest we can come to making interpersonal comparisons of utility is by observing behavior in a market. If Alfie will trade a record album to Bruno for a book, we will know that Alfie values the book more than the album, and Bruno the album more than the book. Both of them acquire something they value more than the thing they gave up. At least, at that moment. But that observation tells us nothing about how those items rank within each agent's hierarchy.
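The inference is mechanical enough to sketch (again with invented inputs): an observed voluntary trade yields exactly two ordinal facts, one per trader, and nothing more --- no ranks within either hierarchy, no magnitudes, no cross-agent comparison.

[code]
# Sketch: what a single voluntary trade reveals (revealed preference).

def facts_from_trade(trader_a, gives_a, trader_b, gives_b):
    """Each trader values what he receives over what he gives up,
    at the moment of trade. Two ordinal facts; nothing else."""
    return [
        (trader_a, "values", gives_b, "over", gives_a),
        (trader_b, "values", gives_a, "over", gives_b),
    ]

for fact in facts_from_trade("Alfie", "album", "Bruno", "book"):
    print(*fact)
# Alfie values book over album
# Bruno values album over book
[/code]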
If your approach depends on those trade-offs you mentioned, you have set yourself a formidable problem.
A baseline will ensure that these trade-offs never go so far that the foundational basis of morality is traded away for anyone . . . It seems a logical first step to me when we're starting to look at what oughts arise from our foundation, and how we might codify them. You have your Equal Agency Postulate, Duty of Care and so on. I'm suggesting let's get our moral safety net in place first.
Well, we can't do that, Gertie, not logically. You're proposing to build moral obligations into the postulates of the theory. But that is question-begging. They have to be derived from the axiom and from postulates that are morally neutral.
The Equal Agency postulate is morally neutral; it derives from the definition of "moral agent," which is purely descriptive of a certain category of beings. The Duty to Aid (a theorem) is derived from the Axiom.
So why isn't this an appropriate moral baseline to strive for in your view, and/or if I propose to give it this special right-like status, what are the problems?
Does the above answer that?