
Re: Are there eternal moral truths?

Posted: March 17th, 2022, 10:34 am
by Good_Egg
CIN wrote: March 17th, 2022, 5:17 am As a consequentialist, I believe I ought to kill the healthy patient and give his organs to the other five. (We will assume that I'm clever enough to make it appear that the guy died naturally.) Am I right when I say I ought to do this, or am I wrong? If I'm wrong, why am I wrong?
You are wrong. Because you do not have the right.

The healthy patient is the person with the right to choose to bestow his organs at the cost of his life. Not you. It is not your trade-off to make. Unless he has explicitly consented.

Re: Are there eternal moral truths?

Posted: March 17th, 2022, 11:25 am
by Pattern-chaser
Good_Egg wrote: March 17th, 2022, 10:34 am
CIN wrote: March 17th, 2022, 5:17 am As a consequentialist, I believe I ought to kill the healthy patient and give his organs to the other five. (We will assume that I'm clever enough to make it appear that the guy died naturally.) Am I right when I say I ought to do this, or am I wrong? If I'm wrong, why am I wrong?
You are wrong. Because you do not have the right.

The healthy patient is the person with the right to choose to bestow his organs at the cost of his life. Not you. It is not your trade-off to make. Unless he has explicitly consented.
Yes, we all have what we were born with, and what we have acquired since. If we are unlucky enough to have kidneys that won't last as long as the rest of our bits and pieces, that's hard luck. We can't reasonably (morally) expect someone else, whose kidneys are fine, to die or be killed so that we can use their kidneys to continue living.

Re: Are there eternal moral truths?

Posted: March 17th, 2022, 1:20 pm
by CIN
Good_Egg wrote: March 17th, 2022, 10:34 am
CIN wrote: March 17th, 2022, 5:17 am As a consequentialist, I believe I ought to kill the healthy patient and give his organs to the other five. (We will assume that I'm clever enough to make it appear that the guy died naturally.) Am I right when I say I ought to do this, or am I wrong? If I'm wrong, why am I wrong?
You are wrong. Because you do not have the right.

The healthy patient is the person with the right to choose to bestow his organs at the cost of his life. Not you. It is not your trade-off to make. Unless he has explicitly consented.
By virtue of what does someone have the right to choose to bestow his own organs? What gives him that right?

Re: Are there eternal moral truths?

Posted: March 17th, 2022, 1:25 pm
by CIN
Pattern-chaser wrote: March 17th, 2022, 11:25 am We can't reasonably (morally) expect someone else, whose kidneys are fine, to die or be killed so that we can use their kidneys to continue living.
You appear to be equating reasonableness with morality. So are you claiming to derive moral principles from reason alone? If so, how would you do that? If not, what do you mean by 'reasonably'?

Re: Are there eternal moral truths?

Posted: March 17th, 2022, 4:37 pm
by CIN
Belindi wrote: March 17th, 2022, 5:29 am If you are a politician and law-maker, and given that resources are finite, you should choose the utilitarian solution to the problem, including the rationalisation, as you describe.
Have you read Ursula Le Guin's short story 'The Ones Who Walk Away from Omelas'? This is about a society that enters into a contract with some unstated person or persons, the terms of which are that everyone will be made very happy if just one child is kept chained up indefinitely in a basement in utter loneliness and misery. The Omelasian solution to whatever problems they were having (we're not told what they were; maybe they were no worse than those of our own society) is the utilitarian solution: the sum of the happiness of the rest outweighs the misery of the child.

Le Guin herself symbolically walks away from the Omelas solution, and I think we should do the same. I think there is a flaw in standard utilitarianism. It is based on the premise that what matters is happiness and unhappiness, and this is a mistake, because what actually matters is not happiness and unhappiness but the beings who are happy or unhappy; they, not the happiness or unhappiness itself, are the ends that morally deserve our concern. Suppose you were Vladimir Putin, and you had a million units of unhappiness to distribute among the people of Ukraine. Standard utilitarianism holds that it makes no moral difference whether you give one unit of unhappiness to each of a million people or the whole million to one person. But if you give more unhappiness to one person than another, you are in effect treating that person as less of a moral end than the other. Since all beings deserve equally to be treated as ends unless there is some good reason why they should not, this is not rationally defensible; and since morality, to be compelling, must be rational, it is not morally defensible either.
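
To make that aggregation point concrete, here is a toy calculation with made-up figures (the "units" are stipulated for illustration only; nothing below assumes that real happiness is actually measurable this way). A total-only utilitarian score cannot tell the two distributions apart, while even a crude fairness measure, such as the worst-off person's share, can.

```python
# Toy illustration: "units of unhappiness" are stipulated numbers,
# not a claim that real (un)happiness can be measured like this.

MILLION = 1_000_000

spread_evenly = [1] * MILLION   # one unit to each of a million people
all_on_one = [MILLION]          # the whole million units to one person

# Standard (total) utilitarianism scores the two distributions identically:
assert sum(spread_evenly) == sum(all_on_one) == MILLION

# A fairness-sensitive view also asks how heavily the worst-off person is burdened:
print(max(spread_evenly))  # 1
print(max(all_on_one))     # 1000000
```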

There are therefore (at least) two fundamental moral imperatives, not one: to maximise the sum total of happiness, and to distribute happiness and unhappiness as fairly as possible. These two principles can come into conflict: there are situations in which one is forced to choose between them, or to choose which should have priority. I am not aware of any objective principle that could resolve such conflicts, and I therefore believe that the choice is inescapably subjective. So although I am a moral realist up to a point, I think moral realism has its limits, beyond which subjectivism is unavoidable.

Re: Are there eternal moral truths?

Posted: March 17th, 2022, 10:54 pm
by Leontiskos
CIN wrote: March 17th, 2022, 5:17 am Well, I tell you what - let's eschew all these pleasantries and talk about a specific case - a classic scenario with which you will certainly be familiar.
I should first say that I worry you are straining the gnat and swallowing the camel. Our discussion was about the possibility of measuring utils, whether "good" means meriting a positive attitude, being desirable, or producing some form of pleasure. We were talking about whether all 'oughts' are obligatory, the difference between "good" used in the subjective vs. objective senses, and whether your monotypic categorization is useful or misleading.

I am wondering whether the reason you dropped all of this and instead focused on a small, tangential piece of the conversation is that the rest of it was going poorly for you. In any case, the fact that you seem to hold simultaneously that utils both can and can't be measured--particularly across agents--is something that you will have to work out. That is a much more central problem than whether Kant only believed what he did because of psychological errors that he or his ancestors made.
CIN wrote: March 17th, 2022, 5:17 am I'm a surgeon. I have six young patients. One is healthy, the other five will die soon if they don't get organ transplants. If they get their transplants, there's every reason to believe that they will live healthily to a ripe old age.

As a consequentialist, I believe I ought to kill the healthy patient and give his organs to the other five. (We will assume that I'm clever enough to make it appear that the guy died naturally.) Am I right when I say I ought to do this, or am I wrong? If I'm wrong, why am I wrong?
The basic answer is that there is an asymmetrical relation between the healthy person* and the healthy person's kidney, and the terminal patient and the healthy person's kidney. The healthy person has a right to their kidney; the terminal patient does not. Nor does the surgeon.
CIN wrote: March 17th, 2022, 5:17 am Game on, wrestler of Messene?
Honestly, I will be out for a while beginning in the next day or two, so if you want to continue there will be a delay.

* The healthy one should be described as a person, not a patient. This is a curious and relevant equivocation.

Re: Are there eternal moral truths?

Posted: March 18th, 2022, 3:50 am
by Good_Egg
CIN wrote: March 17th, 2022, 1:20 pm By virtue of what does someone have the right to choose to bestow his own organs? What gives him that right?
Because your body is a part of you.

I have a vague memory of an old Star Trek episode in which Spock's consciousness was transferred to another body. In such a hypothetical situation, Spock could say to you that he does not have the right to bestow the organs that are currently supporting his life, because they belong to another and his duty is to effect their return.

Re: Are there eternal moral truths?

Posted: March 18th, 2022, 7:32 am
by Belindi
CIN wrote: March 17th, 2022, 4:37 pm
Belindi wrote: March 17th, 2022, 5:29 am If you are a politician and law-maker, and given that resources are finite, you should choose the utilitarian solution to the problem, including the rationalisation, as you describe.
Have you read Ursula Le Guin's short story 'The Ones Who Walk Away from Omelas'? This is about a society that enters into a contract with some unstated person or persons, the terms of which are that everyone will be made very happy if just one child is kept chained up indefinitely in a basement in utter loneliness and misery. The Omelasian solution to whatever problems they were having (we're not told what they were; maybe they were no worse than those of our own society) is the utilitarian solution: the sum of the happiness of the rest outweighs the misery of the child.

Le Guin herself symbolically walks away from the Omelas solution, and I think we should do the same. I think there is a flaw in standard utilitarianism. It is based on the premise that what matters is happiness and unhappiness, and this is a mistake, because what actually matters is not happiness and unhappiness but the beings who are happy or unhappy; they, not the happiness or unhappiness itself, are the ends that morally deserve our concern. Suppose you were Vladimir Putin, and you had a million units of unhappiness to distribute among the people of Ukraine. Standard utilitarianism holds that it makes no moral difference whether you give one unit of unhappiness to each of a million people or the whole million to one person. But if you give more unhappiness to one person than another, you are in effect treating that person as less of a moral end than the other. Since all beings deserve equally to be treated as ends unless there is some good reason why they should not, this is not rationally defensible; and since morality, to be compelling, must be rational, it is not morally defensible either.

There are therefore (at least) two fundamental moral imperatives, not one: to maximise the sum total of happiness, and to distribute happiness and unhappiness as fairly as possible. These two principles can come into conflict: there are situations in which one is forced to choose between them, or to choose which should have priority. I am not aware of any objective principle that could resolve such conflicts, and I therefore believe that the choice is inescapably subjective. So although I am a moral realist up to a point, I think moral realism has its limits, beyond which subjectivism is unavoidable.
The question of utilitarianism is being fought out in Ukraine. No politician, and none of the many who are not politicians, including the fighting heroes of Ukraine and Zelensky himself, supports military help to Ukraine that leads towards escalation into World War III. Zelensky recently urged Germany towards economic, not violent, sanctions.

Heroic sacrifices fit only limited circumstances, such as Ursula Le Guin's scenario, or an immediate and present danger to one's child, in which case the visceral reaction will occur willy-nilly, with no need to philosophise or reflect.

Re: Are there eternal moral truths?

Posted: March 18th, 2022, 7:33 am
by Pattern-chaser
Pattern-chaser wrote: March 17th, 2022, 11:25 am We can't reasonably (morally) expect someone else, whose kidneys are fine, to die or be killed so that we can use their kidneys to continue living.
CIN wrote: March 17th, 2022, 1:25 pm You appear to be equating reasonableness with morality. So are you claiming to derive moral principles from reason alone? If so, how would you do that? If not, what do you mean by 'reasonably'?

Apologies, I used "reasonable" in an 'everyday' usage:
Chambers Dictionary wrote: Reasonable adj
1 sensible; rational; showing reason or good judgement.
2 willing to listen to reason or argument.
3 in accordance with reason.
4 fair or just; moderate; not extreme or excessive.
5 satisfactory or equal to what one might expect.
The dictionary does include a reference to "reason", but the other meanings illustrate the more dilute, and less formal, sense I had in mind; meanings 1, 4 and 5 come closest. I am definitely not suggesting that we can, could or should "derive moral principles from reason alone".

Re: Are there eternal moral truths?

Posted: March 18th, 2022, 5:15 pm
by CIN
Leontiskos wrote: March 17th, 2022, 10:54 pm
CIN wrote: March 17th, 2022, 5:17 am Well, I tell you what - let's eschew all these pleasantries and talk about a specific case - a classic scenario with which you will certainly be familiar.
I should first say that I worry you are straining the gnat and swallowing the camel. Our discussion was about the possibility of measuring utils, whether "good" means meriting a positive attitude, being desirable, or producing some form of pleasure. We were talking about whether all 'oughts' are obligatory, the difference between "good" used in the subjective vs. objective senses, and whether your monotypic categorization is useful or misleading.
Desire is a positive attitude, so I think my view envelops yours. There are other positive attitudes - seeking out, approving, commending - and it seems to me that 'good' is a word we use for any and all of these, not just where we invoke desire. I think it is true that if something is good, then it must in some context also be desirable (I still have reservations about temporality), which can make your view seem plausible, but 'if...then' is not an equivalence relation.
Leontiskos wrote: March 17th, 2022, 10:54 pm I am wondering whether the reason you dropped all of this and instead focused on a small, tangential piece of the conversation is that the rest of it was going poorly for you. In any case, the fact that you seem to hold simultaneously that utils both can and can't be measured--particularly across agents--is something that you will have to work out. That is a much more central problem than whether Kant only believed what he did because of psychological errors that he or his ancestors made.
I take the Benthamite view that differences of pleasantness and unpleasantness between experiences are purely quantitative, and I would therefore accept that utils are measurable in principle, though I have no idea whether they ever will be in practice. I also take the view that we can judge differences in intensity of pleasantness and unpleasantness, not merely for a single human (or an animal sufficiently biologically similar to humans) but between different humans (ditto), well enough to make rough judgments on some occasions about total happiness or unhappiness. For instance, I think it is clear that Ukrainians being bombed and fleeing their homes are suffering more unpleasantness than Russians who are not being bombed or having to flee theirs. I think judgments like these are sufficient to make a consequentialist hedonist morality workable.

The reason I wanted to talk about the surgeon and his patients was to bring out as sharply as possible the differences between my consequentialist views and the deontological views you seem to prefer, and if possible to persuade you to take the opposite view and then justify your position. I'm not having much luck so far, but I keep hoping. In case you think I'm only doing this to win an argument, I came to this forum armed with a partly worked out consequentialist theory, in the hope that I might enlist other people's help, mostly via adversarial argument (because that is how forums like these generally work) to either improve the theory, or make it seem so implausible that I felt forced to drop it. I don't have anyone else to discuss these ideas with other than people in forums of this kind.
Leontiskos wrote: March 17th, 2022, 10:54 pm
CIN wrote: March 17th, 2022, 5:17 am I'm a surgeon. I have six young patients. One is healthy, the other five will die soon if they don't get organ transplants. If they get their transplants, there's every reason to believe that they will live healthily to a ripe old age.

As a consequentialist, I believe I ought to kill the healthy patient and give his organs to the other five. (We will assume that I'm clever enough to make it appear that the guy died naturally.) Am I right when I say I ought to do this, or am I wrong? If I'm wrong, why am I wrong?
The basic answer is that there is an asymmetrical relation between the healthy person* and the healthy person's kidney, and the terminal patient and the healthy person's kidney. The healthy person has a right to their kidney; the terminal patient does not. Nor does the surgeon.
Yes, the relation is asymmetrical, but why is this asymmetry morally relevant?

I assume by 'right' you mean a natural right, or something of the sort. I see no reason to suppose that there are natural rights. Can you give me a reason?
Leontiskos wrote: March 17th, 2022, 10:54 pm Honestly, I will be out for a while beginning in the next day or two, so if you want to continue there will be a delay.
No problem.

Re: Are there eternal moral truths?

Posted: March 18th, 2022, 7:33 pm
by CIN
Good_Egg wrote: March 18th, 2022, 3:50 am
CIN wrote: March 17th, 2022, 1:20 pm By virtue of what does someone have the right to choose to bestow his own organs? What gives him that right?
Because your body is a part of you.

I have a vague memory of an old Star Trek episode in which Spock's consciousness was transferred to another body. In such a hypothetical situation, Spock could say to you that he does not have the right to bestow the organs that are currently supporting his life, because they belong to another and his duty is to effect their return.
So your argument is:
1. My organs are part of me.
2. If something is a part of some person, that person, and only that person, has the right to choose to bestow that something.
3. Therefore I and only I have the right to choose to bestow my organs.

What reason is there to believe that 2. is true?

Re: Are there eternal moral truths?

Posted: March 18th, 2022, 7:54 pm
by Good_Egg
CIN wrote: March 18th, 2022, 7:33 pm 2. If something is a part of some person, that person, and only that person, has the right to choose to bestow that something.
...
What reason is there to believe that 2. is true?
It seems logically incoherent to suggest that two people can have the right to bestow the same item in different directions. If you and I choose differently, both choices cannot prevail.

CIN wrote: March 18th, 2022, 5:15 pm
Leontiskos wrote: March 17th, 2022, 10:54 pm The basic answer is that there is an asymmetrical relation between the healthy person* and the healthy person's kidney, and the terminal patient and the healthy person's kidney. The healthy person has a right to their kidney; the terminal patient does not. Nor does the surgeon.
Yes, the relation is asymmetrical, but why is this asymmetry morally relevant?

I assume by 'right' you mean a natural right, or something of the sort. I see no reason to suppose that there are natural rights. Can you give me a reason?
If the idea of rights appears in enough distinct cultures, is that evidence that such a concept is formed as a response to something in human nature rather than being a cultural construct?

I have heard it suggested that an ethic based on rights is merely a special case of utilitarianism: one in which breaches of rights are given a moral weight that is orders of magnitude greater than other consequences. Do you think that's true?

It may be a different asymmetry from the one Leontiskos was thinking of, but I suggest to you that taking a life and saving a life conceivably do not have the same moral weight, in which case killing B to save A and killing A to save B can both be wrong.

Re: Are there eternal moral truths?

Posted: March 19th, 2022, 12:10 am
by Leontiskos
CIN wrote: March 18th, 2022, 5:15 pm
Leontiskos wrote: March 17th, 2022, 10:54 pm I should first say that I worry you are straining the gnat and swallowing the camel. Our discussion was about the possibility of measuring utils, whether "good" means meriting a positive attitude, being desirable, or producing some form of pleasure. We were talking about whether all 'oughts' are obligatory, the difference between "good" used in the subjective vs. objective senses, and whether your monotypic categorization is useful or misleading.
Desire is a positive attitude, so I think my view envelops yours. There are other positive attitudes - seeking out, approving, commending - and it seems to me that 'good' is a word we use for any and all of these, not just where we invoke desire. I think it is true that if something is good, then it must in some context also be desirable (I still have reservations about temporality), which can make your view seem plausible, but 'if...then' is not an equivalence relation.
As noted earlier, I don't have any significant problems with the idea that good is what merits a positive attitude. I will say that insofar as something is sought it is also desired, and we only approve or commend that which is desirable, so I don't think any of those notions diverge from desire. I have more issues with hedonism (the reduction of good to pleasure).

I will say that one way in which "meriting a positive attitude" does not track 'good' is in its abstraction. Often we use the word 'good' in a rather immanent way. For example, "This ice cream is so good!" The substitutions would be "This ice cream very much merits a positive attitude!" versus "This ice cream is so desirable!" The word and concept do not always carry that level of third-person abstraction from what is called 'good'. In those cases merit takes a back seat.
CIN wrote: March 18th, 2022, 5:15 pm
Leontiskos wrote: March 17th, 2022, 10:54 pm I am wondering whether the reason you dropped all of this and instead focused on a small, tangential piece of the conversation is that the rest of it was going poorly for you. In any case, the fact that you seem to hold simultaneously that utils both can and can't be measured--particularly across agents--is something that you will have to work out. That is a much more central problem than whether Kant only believed what he did because of psychological errors that he or his ancestors made.
I take the Benthamite view that differences of pleasantness and unpleasantness between experiences are purely quantitative, and I would therefore accept that utils are measurable in principle, though I have no idea whether they ever will be in practice. I also take the view that we can judge differences in intensity of pleasantness and unpleasantness, not merely for a single human (or an animal sufficiently biologically similar to humans) but between different humans (ditto), well enough to make rough judgments on some occasions about total happiness or unhappiness. For instance, I think it is clear that Ukrainians being bombed and fleeing their homes are suffering more unpleasantness than Russians who are not being bombed or having to flee theirs. I think judgments like these are sufficient to make a consequentialist hedonist morality workable.

The reason I wanted to talk about the surgeon and his patients was to bring out as sharply as possible the differences between my consequentialist views and the deontological views you seem to prefer, and if possible to persuade you to take the opposite view and then justify your position. I'm not having much luck so far, but I keep hoping. In case you think I'm only doing this to win an argument, I came to this forum armed with a partly worked out consequentialist theory, in the hope that I might enlist other people's help, mostly via adversarial argument (because that is how forums like these generally work) to either improve the theory, or make it seem so implausible that I felt forced to drop it. I don't have anyone else to discuss these ideas with other than people in forums of this kind.
Fair enough. It seems like you are now making the assumption that utils can be measured, at least in principle. I'd say that assumption is fully necessary for utilitarianism, so your view is looking better than it did.

On forums like these my goal is usually modest: point out contradictions and get people to clean up their thinking. So if the utilitarian is going to claim that he doesn't need to make the assumption that utils are measurable, then I will press him. Once blatant contradictions like those are addressed, everything becomes more nuanced.

I tend to think that most people are consequentialists, and that any legitimate system of moral reasoning must include consequence-based reasoning. That's one reason why I think Kantianism ultimately fails: because Kant tries to avoid consequence-based reasoning altogether. But consequentialism goes hand in hand with forms of materialism, which is probably why it has become more popular since the Enlightenment. Given posts like <this one>, I assume you are a materialist and a determinist.
CIN wrote: March 18th, 2022, 5:15 pm
Leontiskos wrote: March 17th, 2022, 10:54 pm
CIN wrote: March 17th, 2022, 5:17 am I'm a surgeon. I have six young patients. One is healthy, the other five will die soon if they don't get organ transplants. If they get their transplants, there's every reason to believe that they will live healthily to a ripe old age.

As a consequentialist, I believe I ought to kill the healthy patient and give his organs to the other five. (We will assume that I'm clever enough to make it appear that the guy died naturally.) Am I right when I say I ought to do this, or am I wrong? If I'm wrong, why am I wrong?
The basic answer is that there is an asymmetrical relation between the healthy person* and the healthy person's kidney, and the terminal patient and the healthy person's kidney. The healthy person has a right to their kidney; the terminal patient does not. Nor does the surgeon.
Yes, the relation is asymmetrical, but why is this asymmetry morally relevant?

I assume by 'right' you mean a natural right, or something of the sort. I see no reason to suppose that there are natural rights. Can you give me a reason?
My guess is that most consequentialists, including yourself, posit an asymmetrical, morally relevant relation that could be called a "right."

For example, your scenario puts the ratio of harmed to helped at 1:5. But if we change that ratio to 1:1 or 1:2, what would you say? You would probably say that the fact that the healthy person possesses the healthy kidney, and the unhealthy patient does not, is a morally relevant difference, and that the special relation the healthy person has with respect to their kidney is called a "right." A rule utilitarian might go so far as to say that the violation of one's bodily autonomy is the violation of a special utilitarian rule.

Re: Are there eternal moral truths?

Posted: March 19th, 2022, 7:35 am
by CIN
Pattern-chaser wrote: March 18th, 2022, 7:33 am
Pattern-chaser wrote: March 17th, 2022, 11:25 am We can't reasonably (morally) expect someone else, whose kidneys are fine, to die or be killed so that we can use their kidneys to continue living.
CIN wrote: March 17th, 2022, 1:25 pm You appear to be equating reasonableness with morality. So are you claiming to derive moral principles from reason alone? If so, how would you do that? If not, what do you mean by 'reasonably'?

Apologies, I used "reasonable" in an 'everyday' usage:
Chambers Dictionary wrote: Reasonable adj
1 sensible; rational; showing reason or good judgement.
2 willing to listen to reason or argument.
3 in accordance with reason.
4 fair or just; moderate; not extreme or excessive.
5 satisfactory or equal to what one might expect.
The dictionary does include a reference to "reason", but the other meanings illustrate the more dilute, and less formal, sense I had in mind; meanings 1, 4 and 5 come closest. I am definitely not suggesting that we can, could or should "derive moral principles from reason alone".
Well, let’s go through these words.
1. sensible; rational; showing reason or good judgement.
What could be more sensible and rational than sacrificing one life to save five? That’s a net gain of four lives. This is clearly a better outcome than letting the five die and being left with only one life.
4. fair or just; moderate; not extreme or excessive
I think the surgeon is being fair. He is treating each of the six people’s lives as of equal importance, and is therefore treating all six equally as moral ends. Seems fair to me.
‘Just’ seems to have several meanings. I’d be interested to know, if you think the surgeon is being unjust, why you think this; what standard or principle of justice are you basing your judgment on?
‘moderate; not extreme or excessive’: I’m not sure how these would apply in this case. If you think they do, perhaps you could say why.
5. satisfactory or equal to what one might expect.
Satisfaction is surely subjective: some people might be satisfied with the surgeon’s solution, others obviously would not be. As for ‘equal to what one might expect’, it certainly wouldn’t be expected in our society that a surgeon would kill a healthy person to obtain their organs, but I don’t think what society at large expects is a rational basis for morality.

There seem to be several ideas here, some of them not very clear. I don’t think any of them amount, as they stand, to a coherent argument against the consequentialist view.

Re: Are there eternal moral truths?

Posted: March 19th, 2022, 10:34 am
by CIN
Good_Egg wrote: March 18th, 2022, 7:54 pm
CIN wrote: March 18th, 2022, 7:33 pm 2. If something is a part of some person, that person, and only that person, has the right to choose to bestow that something.
...
What reason is there to believe that 2. is true?
It seems logically incoherent to suggest that two people can have the right to bestow the same item in different directions. If you and I choose differently, both choices cannot prevail.
Perhaps so, but that only deals with the ‘only that person’ part of 2. If I omit that, we get:
2. If something is a part of some person, that person has the right to choose to bestow that something.
What I do not see is any reason to suppose that this is true. How do we know that there really are such rights? It looks to me as if the idea of natural rights may be a purely human invention, facilitated by an analogy with legal rights, and motivated by a desire to prevent certain types of action at all costs.
Good_Egg wrote: March 18th, 2022, 7:54 pm If the idea of rights appears in enough distinct cultures, is that evidence that such a concept is formed as a response to something in human nature rather than being a cultural construct?
For all I know, yes. But how does a concept’s being a response to something in human nature turn it into a moral imperative?
Good_Egg wrote: March 18th, 2022, 7:54 pm I have heard it suggested that an ethic based on rights is merely a special case of utilitarianism: one in which breaches of rights are given a moral weight that is orders of magnitude greater than other consequences. Do you think that's true?
It’s true that it’s been suggested. It’s obviously just a bodge. Numerous bodges of utilitarianism have been made to try to make it fit in with widely held moral views or ‘intuitions’ (which I suspect are not intuitions in any real sense, but are simply what people were taught to believe when they were young). Firstly, if you are going to assign rights some value in the felicific calculus, you need to show why they should be given a value at all. Secondly, to assign, for example, someone’s supposed right to keep their organs a value greater than the accumulated value of all the consequences of someone taking their organs and giving them to someone else is impossible, since it would require one to predict all the consequences of such an action to the end of time; the only way round this would be to give the right a value of infinity, which is meaningless. It’s a tactic of desperation, and not worth considering further.
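
To illustrate the trade-off structure behind that second point, here is a toy sketch with made-up figures (the simple additive scoring, the function name and the numbers are assumptions for illustration, not a proposed calculus): whatever finite weight a rights-breach is given, a large enough aggregate of ordinary consequences will outweigh it, so only an infinite weight would make the right inviolable.

```python
# Toy illustration: stipulated numbers, not a real felicific calculus.

def act_score(benefit_per_person, people_helped, rights_penalty):
    # Net value of an act that helps many people but breaches one right.
    return benefit_per_person * people_helped - rights_penalty

RIGHTS_PENALTY = 1_000_000  # however large, still finite

# With enough beneficiaries, the breach eventually comes out "worth it":
assert act_score(10, 99_999, RIGHTS_PENALTY) < 0   # not yet worth it
assert act_score(10, 100_001, RIGHTS_PENALTY) > 0  # now "worth it"

# Only an infinite penalty blocks every such trade-off:
assert act_score(10, 10**18, float("inf")) == float("-inf")
```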
Good_Egg wrote: March 18th, 2022, 7:54 pm It may be a different asymmetry from the one Leontiskos was thinking of, but I suggest to you that taking a life and saving a life conceivably do not have the same moral weight, in which case killing B to save A and killing A to save B can both be wrong.
Ingenious, but essentially the same problem arises as with rights. If taking a life has a moral weight or value distinct from its consequential value, by virtue of what does it have that extra value? Actions have consequential value by virtue of their property of bringing about states of affairs which themselves have value, this latter value being the value of an experience in terms of its pleasantness or unpleasantness; if the action of taking a life is to have some additional value, this must similarly derive from some property of the action, so what property could this be?