Leontiskos wrote: ↑April 3rd, 2022, 8:31 pm
The thing is, I am having trouble situating your claims regarding things such as the surgeon scenario without positing at least a quasi-collective end. For example, rather than being concerned with the individual per se, you are per se concerned with a set of individuals (or collective pleasure, or somesuch thing).
The surgeon treats all six individuals alike, as individuals, not as a set. He assigns just the same moral consideration to the healthy patient as he does to each of the five unhealthy ones. At no stage is he thinking that the patients constitute a collective which has interests over and above the individual interests of the six patients; he merely calculates that by killing the healthy patient and giving his organs to the five unhealthy ones, he alters the outcome from one living person with a life assumed to be pleasant to five such living persons.
What would you say your end is in the surgeon case? Maximal collective pleasure?
Maximising experienced pleasantness across all six patients, the five unhealthy ones and the one healthy one.
I hope I am not forgetting parts of our conversation after this long lapse of time, but part of my point with the beehive analogy is that the maximization of pleasure seems quite different from the colloquial understanding of morality. Most people would say that seeking pleasure and avoiding pain is intuitive but not specifically moral. "Do good and avoid evil" is a moral principle, but if good and evil collapse into pleasure and pain, then, again, it would not strike the average person as specifically moral.
I think most people would probably say that seeking pleasure and avoiding pain for oneself was not specifically moral; but what about doing these things for other people? I think they might consider that to be moral.
But in any case, is it reasonable to insist that a theory in moral philosophy should conform to the moral views of the average person, who is unlikely to have ever studied philosophy and is also unlikely to have thought about the matter very deeply? Is there any other field in which you would expect theories to conform to the opinions of the uneducated?
Leontiskos wrote: ↑March 19th, 2022, 1:31 pm
I should probably take more time to think on this, but at present I would say that when we are talking about interpersonal moral systems what we are really talking about is justice, and that the positive role of societal justice is to rectify moral debts (injustices).
You snuck in the word 'societal' before the second occurrence of 'justice', and hence I think you are using the word 'justice' in two different senses. You go on to talk about legitimacy residing with a public authority, a notion which fits reasonably well with the idea of societal justice, but not with a notion of justice that makes it co-terminous with morality as a whole; morality includes such matters as whether I should keep my promise to my wife to pick her up from the hairdresser, something that is hardly going to be of interest to the public authorities. Since your view of morality here seems to depend on 'justice' meaning the same throughout, I think your view cannot be correct.
It follows from this that any legitimate or justified harm can only be meted out by the public authority in the rectification of an injustice. For example, we can only fine individuals who have broken the law; we cannot fine individuals who are innocent of any transgression.
But if, as I have suggested, morality is broader than societal justice, these constraints do not necessarily apply to the whole of morality.
CIN wrote: ↑March 22nd, 2022, 7:46 pm
Leontiskos wrote: ↑March 19th, 2022, 1:31 pmIt is interesting to ask whether your consequentialism should be considered a moral system at all--whether it involves real normativity of any kind. This is especially true if I am right in my supposition that it is coming from a place of materialism/determinism/moral skepticism. Even if we could measure utils in principle, why should anyone agree with you that the collective is the organism, or that the util you have identified should be maximized?
As I've indicated above, I don’t in fact hold that the collective is a moral end.
But then why would you kill the one to save the five?
For the same reason that you would give food to five thousand rather than to just one; because it benefits more sentient beings and creates more net pleasantness. The concept of a collective is irrelevant. Were the five thousand that Jesus fed a collective?
Leontiskos wrote: ↑March 19th, 2022, 1:31 pm
CIN wrote: ↑March 22nd, 2022, 7:46 pmAs regards materialism and determinism, I am sceptical as to whether there is free will in the sense that moral responsibility requires (i.e. that in any given situation we could have done differently), but I continue to talk as if humans had moral responsibility because to keep qualifying my every statement with ‘of course this may all be a mere metaphysical fantasy’ would become tiresome.
Well, your consequentialism strikes me as being rather compatible with the denial of moral responsibility, in a way that classical moral systems are not.
I think any moral system has to be centrally concerned with 'oughts', and since 'ought' implies 'can', a denial of moral responsibility undermines my consequentialism just as much as any other moral system. I keep my thinking about morality and free will in separate compartments; when thinking about morality I assume that we have free will, otherwise there is no point thinking about morality at all; when I think about free will I ignore the fact that my belief that we have no free will destroys morality except as a metaphysical fantasy. In reality, one or other of these has to go, but I pretend that this isn't the case in order to be able to talk about both of them.
CIN wrote: ↑March 22nd, 2022, 7:46 pmIn theory, utils should be maximised because to the extent that we don’t do that, we incur a debt to those individuals who would have experienced them had they been maximised.
So you think that individuals deserve, in justice, to have their pleasure maximized? And if someone fails to maximize another's pleasure (or the collective pleasure) then that person would incur a debt? This is a strange, seemingly controversial claim, apparently much more controversial than my claim that innocents should not be harmed.
The logic of hedonistic consequentialism is inescapable: to the extent that each of us has the ability to push our fellow sentient beings farther up the stair that leads from the basement of misery to the attic of happiness but fail to do so, we are to blame. I'm as guilty as anyone in this regard. Well, no, not anyone. I'm a better human being than Vladimir Putin, I think. But that's not saying much.
CIN wrote: ↑March 22nd, 2022, 7:46 pmLeontiskos wrote: ↑March 19th, 2022, 1:31 pm The more threatening and powerful Hitler becomes, the more plausible is his claim that Jews must be exterminated, at least for the consequentialist. This is because insofar as Hitler is powerful, the consequences of denying his claim become bad indeed.
I don’t think I agree with this. The more powerful Hitler becomes, the more people he will kill, and the more will have to be killed to stop him. You have a graph with two rising curves, and the right thing to do, for the consequentialist, is to stop him as soon as you can, to keep the numbers killed as low as possible.
I can't help but wonder if you are here reintroducing classical moral reasoning in an ad hoc way. Everything will depend on how powerful Hitler is perceived to be. For example, if Hitler is strong and the war looks unwinnable, then clearly the consequentialist should surrender to Hitler and let the Jews die.
Well, okay, let's go along with this. Hitler is bound to win. The world population in 1939 is 2300 million. If we surrender, then once the 6 million Jews are out of the way, the other 2294 million people in the world are going to have reasonably pleasant lives. But if we fight on, 70 million, let's say, will die, not just 6 million. So these are the guaranteed outcomes, and it's a clear binary choice - 2294 million reasonably happy people and 6 million dead, or 2230 million reasonably happy people and 70 million dead. As a hedonistic consequentialist, I say we should surrender and accept the loss of the 6 million, as the lesser of two evils. What do you, as a non-consequentialist, think we should do, and why?
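The stipulated binary choice above reduces to simple arithmetic. A minimal sketch, assuming (purely for illustration) that each reasonably happy survivor contributes one unit of net pleasantness and each death contributes none:

```python
# Toy hedonic comparison of the two stipulated outcomes (figures in millions).
# The one-unit-per-happy-survivor weighting is an illustrative assumption,
# not part of the original argument.
WORLD_POP = 2300  # world population in 1939, in millions

def net_pleasantness(deaths):
    # Each surviving, reasonably happy person contributes one unit.
    return WORLD_POP - deaths

surrender = net_pleasantness(deaths=6)   # the 6 million are lost
fight_on = net_pleasantness(deaths=70)   # 70 million die in the war

print(surrender, fight_on)  # 2294 2230
```

On these stipulated figures, surrender yields the greater net pleasantness, which is all the hedonistic-consequentialist verdict rests on.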
CIN wrote: ↑March 22nd, 2022, 7:46 pmLeontiskos wrote: ↑March 19th, 2022, 12:10 am "Good" is a concept that bridges sensate and intellectual objects, and your definition falls on the intellectual side.
I don’t see this. If, as I hold, ‘good’ means ‘merits a positive attitude’, then surely there are attitudes appropriate to sensate objects just as there are for intellectual objects. In the case of the ice cream, an appropriate attitude is appreciation of the sensory experience of eating the ice cream – of its taste and texture.
I don't deny that such an attitude is appropriate and merited by the ice cream. What I deny is that when we say, "This ice cream is good," we are talking about how one ought to relate to the ice cream.
I deny that too. Saying that something merits a positive attitude is evaluative, but attributing a value to something is not the same as prescribing any attitude or action relating to it. We are saying that the ice cream merits a positive attitude, but we are not making the further assertion that it is incumbent on anyone to adopt that attitude. (In any case, 'this ice cream is good' is always false, because it is not the ice cream that merits the positive attitude, it's the pleasant experience I'm having while eating it. Everyone always gets this wrong, but what can you do?) We are only talking about the ice cream; we are not talking about any actual person's relation to it.
In that case we are saying something much more immanent and 'sensate', "I am enjoying this ice cream," "This ice cream delights me," "This ice cream brings me pleasure." I suppose, riffing on your pleasure-end, we would say that pleasure is a positive attitude/experience, not that pleasure merits a positive attitude.
But you are now conflating three distinct things - pleasure, attitude, and experience. Pleasure is neither an attitude nor an experience, it is a property of an experience, which is why we talk of the pleasure of eating ice cream, the pleasure of listening to Bach, etc., and which is why I prefer to call it pleasantness; and an attitude is neither pleasure nor an experience.
CIN wrote: ↑March 22nd, 2022, 7:46 pmLeontiskos wrote: ↑March 19th, 2022, 12:10 amSimilarly, "good" is a concept that bridges descriptive and normative judgments, and your definition falls on the normative side, because the claim that something merits a positive attitude is the claim that someone should respond to it with a positive attitude.
I think the current orthodox view is that ‘good’ is a ‘thin’ evaluative term without descriptive content, rather than a ‘thick’ term such as ‘courageous’ or ‘generous’, which both evaluates and describes (https://iep.utm.edu/thick-co/). This sounds right to me. If you can think of an example of ‘good’ being used not merely to evaluate but also to describe, perhaps you could post it, together with an explanation of what you consider the descriptive content to be.
This sort of contemporary philosophy strikes me as an inheritance from Hume, and I doubt I will agree with much of it. I see the difference between 'good' and 'courageous' as a matter of degree and abstraction, but not kind. As an example, a civil engineer might go around the country inspecting bridges for possible repairs. He may well call the bridges that require no repairs "good bridges." It seems to me that such a use is descriptive (as well as evaluative). To give a parallel, an army recruiter might go around the country searching for courageous men and women, and he would be wielding that quality in much the same way that the civil engineer wields 'good'.
On reflection, I am going to distance myself from the thin/thick distinction, but in the opposite direction from you. I now think that 'courageous' is not evaluative, but merely descriptive; any evaluative content it may seem to have is the result of a shared background assumption by the speaker and his audience that courage is good, an assumption which does not find its way into the words uttered.
That is the problem with your examples: both the civil engineer and the army recruiter are making incomplete statements which are completed by unstated assumptions they share with their audience. When the civil engineer says 'this is a good bridge' to another civil engineer, they share an idea of the properties a bridge must have for them both to call it 'good'; it is that unstated idea that has the descriptive content, not the phrase 'good bridges'. 'Good bridges' tells you nothing about the bridges except that they merit a positive attitude (which is always false, for the same reason as the ice cream; bridges are just lumps of concrete or stone, which don't in themselves merit any kind of attitude). And when the army recruiter looks for courageous men and women, he does so without stating his assumption that courage is a good thing for people in the army to have, so in his case, the evaluative content is in that unstated assumption, not in any use of the word 'courageous'.
I would just want to clarify that when you say something is intrinsically evil, what you mean is that that thing represents a net negative when taken in isolation?
I mean that it merits a negative attitude because of what it is in itself, not because it is instrumental in bringing about something else that merits a negative attitude. I think that comes to the same thing as what you are saying.
CIN wrote: ↑March 22nd, 2022, 7:46 pmLeontiskos wrote: ↑March 17th, 2022, 10:54 pmIf we follow Thomas Aquinas we would say that every act is subjectively considered to be good by the actor. In consequentialist terms we would say that the agent, in the moment of acting, believes their act will produce more good consequences than bad consequences, and that this is a necessary condition for acting.
I don’t see why people should not sometimes deliberately do things they think will have more bad consequences than good. In fact I should think it’s pretty common, because people often give their own interests far greater weight than the interests of others. Anyone who beats an animal must be aware that the animal’s pain outweighs their own pleasure, but they don’t care because at that moment the only being they care about is themselves.
But you've more or less conceded my point when you say, "...because people often give their own interests far greater weight than the interests of others." If their weighting were correct their act would be good.
It would; but I'm suggesting that people can sometimes choose evil knowing it's evil, because they want to do evil, as Satan does in Paradise Lost:
"Farewell remorse! All good to me is lost;
Evil, be thou my Good: by thee at least
Divided empire with Heaven’s King I hold."
Satan knows the difference between good and evil, but chooses evil anyway.
CIN wrote: ↑March 22nd, 2022, 7:46 pmI think the objective reality is that many actions do produce actual un/pleasantness, and I think this is enough to refute the charge of subjectivity.
[Objective fault]
In which case you would be required to say that the individual has weighted their own interests in an objectively incorrect way. In order to say that, you would have to be able to identify--at least vaguely--the correct way as well as the discrepancy. Anticipating some further argument, this sort of objectivity is going to require a number of moral axioms, and I don't see how a number of those axioms could ever be self-supported by consequentialism. For example, the equality principle whereby one person's interests must be weighted equally to another person's interests. It isn't clear to me how consequentialism in itself could ever hope to justify that principle.
I agree, it couldn't. Since consequentialism is merely the view that an action's rightness is determined by its consequences, it will always have to be supplemented by some view as to what those consequences should be. So yes, there have to be further axiom(s).
CIN wrote: ↑March 22nd, 2022, 7:46 pm
I’m a hedonist and a consequentialist, but I don’t think I’m a utilitarian, for the following reason:
[...]
My current view is that the fact that there are sentient beings capable of experiencing un/pleasantness gives rise to two consequentialist moral principles not one: the principle that we should aim to maximise pleasantness and minimise unpleasantness where possible, and the principle that we should aim to distribute pleasantness and unpleasantness equally. Conflicts can arise between these two principles, and at present I am not aware of any way of resolving the conflicts other than making a subjective choice.
Well, you're already beginning to answer my question about the equality principle, so that's good. Two points:
CIN wrote: ↑March 22nd, 2022, 7:46 pmAn agent could face a choice between two actions, A and B; A will minimise total pain but the distribution will be unfair, B will result in more total pain but the distribution will be fair. Which is the right choice? The traditional utilitarian would say A, but I think this is not always the case.
Suppose A results in 50 units of pain for both Fred and Bill, whereas B results in 95 units of pain for Fred and no pain at all for Bill. Which should the agent choose, A or B? A totalist utilitarian would say B, because 95 is less than twice 50; an averagist utilitarian would agree, since an average of 47.5 is less than an average of 50.
I think both are wrong, by which I mean that it is not clear, as they think, that B is the moral choice. By focusing only on total or average pain, utilitarians ignore the fact that 2n units of pain experienced by one person is not the same as n units of pain experienced by each of two people.
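For concreteness, the totalist and averagist verdicts in this example can be checked with a short sketch (the pain figures are the ones just given):

```python
# Pain distributions from the example, in units of pain per person.
A = [50, 50]  # Fred and Bill each suffer 50 units
B = [95, 0]   # Fred suffers 95 units, Bill none

total_A, total_B = sum(A), sum(B)                   # 100 vs 95
avg_A, avg_B = total_A / len(A), total_B / len(B)   # 50.0 vs 47.5

# Both the totalist and the averagist rank B ahead of A -
# precisely the verdict being questioned here.
assert total_B < total_A
assert avg_B < avg_A
```

Neither calculation registers how the pain is distributed between Fred and Bill, which is the point at issue.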
This seems to merely be the claim that the units are unequal, or that they have not been measured correctly, or that pain tolerance is not linear. None of this strikes me as a substantial critique of utilitarianism.
No, I'm assuming that the units are equal and have been measured correctly. I'm saying that in asserting that it does not matter to whom the units are given, utilitarians are ignoring the possibility that giving n units of pain to each of two people may be morally better or worse than giving 2n units of pain to one person.
I think you need to ferret out this measurement problem before considering the principle of equality, because they are two different problems.
I don't know about 'before', but I agree that they are separate.
CIN wrote: ↑March 22nd, 2022, 7:46 pmMy current view is that the fact that there are sentient beings capable of experiencing un/pleasantness gives rise to two consequentialist moral principles not one: the principle that we should aim to maximise pleasantness and minimise unpleasantness where possible, and the principle that we should aim to distribute pleasantness and unpleasantness equally.
Why think that your second principle is any more consequentialist than it is utilitarian? It strikes me as an egalitarian principle that is altogether separate from consequentialism, and it is just the sort of thing that would be required to establish objective fault, which I referred to above.
I agree that it isn't a consequentialist principle. I shouldn't have said that it was.
Leontiskos wrote: ↑April 3rd, 2022, 8:39 pm
CIN wrote: ↑April 3rd, 2022, 7:39 pm
Good_Egg wrote: ↑April 3rd, 2022, 5:10 pmWhat we mean by a rule of thumb is a simple and not-too-unsatisfactory approximation to a complex right answer. If there isn't a knowable right answer it's hard to see how one can approximate to it.
An obvious counter-example to your thesis here is pi. Since pi does not have a finite number of decimal places, the answer to the question 'what is the numerical value of pi?' is not knowable, yet we have various rules of thumb (e.g. 22/7, 3.14159) which approximate with varying degrees of closeness to pi, depending on how many decimal places you actually need.
The precise target that pi approximates is not a numerical quantity, it is a ratio, namely the ratio of a circle's circumference to its diameter. Pi is the numerical quantity that approximates this ratio.
Well, no, pi is not an approximation to the ratio, pi is the ratio, though it is equivalently expressed as a number by taking the diameter to be 1 and then not bothering to mention it:
"The number π (/paɪ/; spelled out as "pi") is a mathematical constant, approximately equal to 3.14159. It is defined in Euclidean geometry as the ratio of a circle's circumference to its diameter... As an irrational number, π cannot be expressed as a common fraction, although fractions such as 22/7 are commonly used to approximate it." (https://en.wikipedia.org/wiki/Pi)
And the point here is that the ratio is unknowable even in principle:
"We have known since the 18th century that we will never be able to calculate all the digits of pi because it is an irrational number, one that continues forever without any repeating pattern." (https://www.ncl.ac.uk/press/articles/ar ... enpatterns)
Thus when we use 22/7 or 3.14159 or somesuch as an approximation for the ratio of circumference to diameter, we are using a rule of thumb to approximate something unknowable in principle, which makes it a counter-example to Good Egg's thesis.
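The varying closeness of these rules of thumb is easy to exhibit numerically; here double-precision math.pi stands in for the exact value, being itself, of course, only a much closer approximation:

```python
import math

# Compare two common rules of thumb against Python's double-precision pi,
# which stands in here for the (uncomputable-in-full) exact value.
for approx in (22 / 7, 3.14159):
    print(f"{approx:.7f} is off by about {abs(approx - math.pi):.7f}")

# 22/7 is accurate to about 0.0013; 3.14159 to better than 0.00001.
assert abs(22 / 7 - math.pi) < 0.0013
assert abs(3.14159 - math.pi) < 1e-5
```

So each rule of thumb approximates the ratio to a known tolerance, even though no finite expansion ever reaches it.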
Philosophy is a waste of time. But then, so is most of life.