Leontiskos wrote: ↑March 19th, 2022, 1:31 pm
CIN wrote: ↑March 19th, 2022, 10:34 amIf taking a life has a moral weight or value distinct from its consequential value, by virtue of what does it have that extra value?
Apparently we have two different foundational conceptions of human society. On your conception human society is like a bee hive, where the organism is found in the collective and the individuals are merely accidental parts of an organic whole. On this conception just as I might amputate my foot in order to avoid a deadly infection, we might kill members of society in order to avoid dangers to the societal organism. Or just as I might graft skin from one part of my body to another, we might kill individual parts of the organism in order to shore up another, more important, part of the organism (say, by redistributing healthy bodily organs).
If I have given the impression that I think the collective can have interests of its own and therefore be a moral end deserving our moral concern, then I have clearly been doing my job here rather badly. I hold that the only objects that can be moral ends are sentient individuals capable of finding their experiences pleasant or unpleasant (hereafter to be typed un/pleasant, because I am fed up of typing these long words). This rules out as moral ends not only collectives, but also inanimate objects, robots that are conscious but lack the ability to find their experiences un/pleasant, and pre-sentient foetuses (which means I am pro-choice before the foetus becomes sentient, and pro-life afterwards).
Leontiskos wrote: ↑March 19th, 2022, 1:31 pm
On my conception the individual human person is the organism, and the societal whole is an accidental collection of organisms/persons.
Agreed, though I don’t much like the word ‘organism’: it’s a metaphor, and I think metaphors in philosophical discussions can tend to obfuscate rather than clarify.
Leontiskos wrote: ↑March 19th, 2022, 1:31 pm
From this comes the classical conception of morality as a set of rules that govern the interactions between autonomous organisms/persons.
Well and good, but then the problem is that since there are an indefinitely large number of possible rules, there has to be a reason to choose some and not others. Your next few sentences begin to address this problem, but I'm not happy with them.
Leontiskos wrote: ↑March 19th, 2022, 1:31 pm One of the most basic rules, then, is that an innocent person cannot be harmed, and anyone who does harm an innocent person incurs a debt (both to that person and to the societal order which they have disrupted). The reason "You shall not harm the innocent" is a hard and fast rule of morality is because it is presupposed by and necessarily linked with the entire moral paradigm. To undo this rule would be to undo and undermine all of morality. We could call it a first principle of moral reasoning.
I agree that a wrong action incurs a debt. This is where the idea of moral obligation comes from: ‘ought’ is cognate with ‘owe’. Oddly, we now locate the debt immediately prior to the wrong action, and say ‘you ought not to do X’, when in fact it is doing X that incurs the debt; ‘ought’ has become a warning that we will owe if we perform the action. I don’t agree that there is any debt to the societal order, for the same reason that society cannot have interests; the debt is to the individuals that make up that society.
However, I don’t see where innocence comes in. You seem to have introduced that idea without preamble or any kind of supporting argument. You say that “You shall not harm the innocent” is ‘presupposed by and necessarily linked with the entire moral paradigm’, but I fail to see why this should be the case. Perhaps you can expand on this.
Leontiskos wrote: ↑March 19th, 2022, 1:31 pm It is interesting to ask whether your consequentialism should be considered a moral system at all--whether it involves real normativity of any kind. This is especially true if I am right in my supposition that it is coming from a place of materialism/determinism/moral skepticism. Even if we could measure utils in principle, why should anyone agree with you that the collective is the organism, or that the util you have identified should be maximized?
As I've indicated above, I don’t in fact hold that the collective is a moral end.
As regards materialism and determinism, I am sceptical as to whether there is free will in the sense that moral responsibility requires (i.e. that in any given situation we could have done differently), but I continue to talk as if humans had moral responsibility because to keep qualifying my every statement with ‘of course this may all be a mere metaphysical fantasy’ would become tiresome.
In theory, utils should be maximised because to the extent that we don’t do that, we incur a debt to those individuals who would have experienced them had they been maximised. However, since I also hold that distribution should be equitable, there is the possibility of conflict between two principles (see further below).
Leontiskos wrote: ↑March 19th, 2022, 1:31 pm After all, consequences are within human control, and this is precisely what makes consequentialism so enticing.
Really? How can they be within human control when they are so often beyond our ability to predict? And for me, what makes consequentialism enticing is simply that I believe it to be true.
Leontiskos wrote: ↑March 19th, 2022, 1:31 pm The more threatening and powerful Hitler becomes, the more plausible is his claim that Jews must be exterminated, at least for the consequentialist. This is because insofar as Hitler is powerful, the consequences of denying his claim become bad indeed.
I don’t think I agree with this. The more powerful Hitler becomes, the more people he will kill, and the more will have to be killed to stop him. You have a graph with two rising curves, and the right thing to do, for the consequentialist, is to stop him as soon as you can, to keep the numbers killed as low as possible.
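The two-curves point can be made concrete with a toy calculation. All the numbers and curve shapes below are hypothetical, chosen only to illustrate the structure of the argument: both the regime’s victims and the cost of stopping it rise with time, so total deaths are minimised by intervening as early as possible.

```python
# Toy model (all figures hypothetical): deaths caused by the regime and
# deaths incurred in stopping it are both rising functions of time t.
# The consequentialist therefore minimises total deaths by acting early.

def victims_so_far(t):
    """Deaths caused by the regime up to year t (hypothetical curve)."""
    return 100_000 * t ** 2

def cost_to_stop(t):
    """Deaths incurred by an intervention launched in year t (hypothetical curve)."""
    return 50_000 * (1 + t) ** 2

def total_deaths(t):
    """Combined toll if we intervene in year t."""
    return victims_so_far(t) + cost_to_stop(t)

# Find the intervention year that minimises the combined toll.
best_year = min(range(10), key=total_deaths)
print(best_year)  # 0 -- the earliest possible intervention
```

On any pair of rising curves like these, the minimum of the sum sits at the earliest feasible moment, which is the point being made: growing power makes denial costlier, but it makes delay costlier still.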
Leontiskos wrote: ↑March 19th, 2022, 1:31 pmSo I think you need to either abandon your arguments from moral skepticism and accept your own burden of proof for normativity, or else bite the bullet and forfeit any real claim on normativity or morality.
I’m only sceptical about our ability to follow the theoretical prescriptions of hedonistic consequentialism in practice. I don’t think that disqualifies the theory from being normative, but perhaps you disagree.
Leontiskos wrote: ↑March 19th, 2022, 10:12 pm
What bothers me about hedonism isn't the "fact-value gap." It is the reduction of good to pleasure. The idea that pleasure is good seems uncontroversial to me.
Well, I’m not actually reducing good to pleasure, I’m identifying pleasantness as the only intrinsic good. But perhaps that doesn’t help you.
Leontiskos wrote: ↑March 19th, 2022, 12:10 am "Good" is a concept that bridges sensate and intellectual objects, and your definition falls on the intellectual side.
I don’t see this. If, as I hold, ‘good’ means ‘merits a positive attitude’, then surely there are attitudes appropriate to sensate objects just as there are for intellectual objects. In the case of the ice cream, an appropriate attitude is appreciation of the sensory experience of eating the ice cream – of its taste and texture.
Leontiskos wrote: ↑March 19th, 2022, 12:10 amSimilarly, "good" is a concept that bridges descriptive and normative judgments, and your definition falls on the normative side, because the claim that something merits a positive attitude is the claim that someone should respond to it with a positive attitude.
I think the current orthodox view is that ‘good’ is a ‘thin’ evaluative term without descriptive content, rather than a ‘thick’ term such as ‘courageous’ or ‘generous’, which both evaluates and describes (https://iep.utm.edu/thick-co/). This sounds right to me. If you can think of an example of ‘good’ being used not merely to evaluate but also to describe, perhaps you could post it, together with an explanation of what you consider the descriptive content to be.
Leontiskos wrote: ↑March 19th, 2022, 12:10 am
CIN wrote: ↑March 19th, 2022, 4:37 pmI think deontological prohibitions, which may seem on the surface to be absolute, are often disguised consequentialist rules. Why do we think it so terrible to convict someone of a crime they did not commit? Because if we convict them, we send them to jail (or even, in some countries, execute them). If all we ever did was write down ‘Fred is guilty of murder’ in a ledger somewhere and take no further action, it would be no worse than the fact that some inattentive official got my brother-in-law’s name wrong on his death certificate (yes, really).
Let's suppose you know Fred is innocent and yet you accuse him of murder. This leads to his conviction, his imprisonment, and his execution. Now the consequentialist is committed to the claim that at each step nothing per se evil is occurring, but only something that will likely lead to bad consequences.
No, because his imprisonment is unpleasant for him, and his execution terminates his pleasant existence prematurely, and so both are intrinsically evil.
Leontiskos wrote: ↑March 17th, 2022, 10:54 pm Is anything morally wrong on your view? I'm still not sure whether you're positing a moral skepticism/descriptivism, or whether you really are committed to a normative moral theory.
I certainly intend my theory to be normative. If a state of affairs merits a negative attitude, then it is bad, and if it’s bad, we ought not to bring it about. The difficulties are:
a) because the effects of actions widen out into the future like ripples on a pond and become unpredictable, we often cannot discover in practice whether the consequences of an action will be good or bad, and
b) there can be apparently unresolvable conflicts between the imperative to maximise net pleasantness and the imperative to distribute un/pleasantness fairly (see further below).
Leontiskos wrote: ↑March 17th, 2022, 10:54 pmIf we follow Thomas Aquinas we would say that every act is subjectively considered to be good by the actor. In consequentialist terms we would say that the agent, in the moment of acting, believes their act will produce more good consequences than bad consequences, and that this is a necessary condition for acting.
I don’t see why people should not sometimes deliberately do things they think will have more bad consequences than good. In fact I should think it’s pretty common, because people often give their own interests far greater weight than the interests of others. Anyone who beats an animal must be aware that the animal’s pain outweighs their own pleasure, but they don’t care because at that moment the only being they care about is themselves.
Leontiskos wrote: ↑March 17th, 2022, 10:54 pmIf that psychological account is correct then if no external objective reality can be brought to bear on the individual's subjective valuations, no moral evil can exist. Since consequentialism is so mired in subjectivity I'm not sure how this can be overcome.
I think the objective reality is that many actions do produce actual un/pleasantness, and I think this is enough to refute the charge of subjectivity.
Leontiskos wrote: ↑March 17th, 2022, 10:54 pm
CIN wrote: ↑March 19th, 2022, 4:37 pm I’m not really a utilitarian anyway, because as I’ve said to Belindi, I believe that it’s a mistake to think that it makes no moral difference how pleasantness and unpleasantness is distributed; distributing it unevenly (without good reason) amounts to treating some beings as more valuable ends than others (without good reason), and that isn’t rational.
I have some memory of your response to Belindi, but I cannot find it. In any case, I don't follow your reasoning here. Presumably the utilitarian reason for an equal or unequal distribution is precisely more aggregate happiness (or pleasantness). I don't know of any utilitarians who would propose to distribute it unevenly apart from that justification.
I’m a hedonist and a consequentialist, but I don’t think I’m a utilitarian, for the following reason:
An agent could face a choice between two actions, A and B; A will result in more total pain but the distribution will be fair, while B will minimise total pain but the distribution will be unfair. Which is the right choice? The traditional utilitarian would say B, but I think this is not always the case.
Suppose A results in 50 units of pain for both Fred and Bill, whereas B results in 95 units of pain for Fred and no pain at all for Bill. Which should the agent choose, A or B? A totalist utilitarian would say B, because 95 is less than twice 50; an averagist utilitarian would agree, since an average of 47.5 is less than an average of 50.
I think both are wrong, by which I mean that it is not clear, as they think, that B is the moral choice. By focusing only on total or average pain, utilitarians ignore the fact that 2n units of pain experienced by one person is not the same as n units of pain experienced by each of two people. They are not entitled to ignore this fact, because in so doing, they ignore the morally relevant fact that Fred and Bill count equally as moral ends. My current view is that the fact that there are sentient beings capable of experiencing un/pleasantness gives rise to two consequentialist moral principles, not one: the principle that we should aim to maximise pleasantness and minimise unpleasantness where possible, and the principle that we should aim to distribute pleasantness and unpleasantness equally. Conflicts can arise between these two principles, and at present I am not aware of any way of resolving the conflicts other than making a subjective choice.