Leontiskos wrote: ↑June 24th, 2022, 4:16 pm
Astro Cat wrote: ↑June 24th, 2022, 1:15 amI think regardless of the breadth (whether I believe "I ought not to eat meat" or "Nobody ought to eat meat"), the same sort of thing is going on, so I didn't think it was important to focus on that difference.
What I think is going on is that people have values from which they build oughts as hypothetical imperatives: if I value x, then I ought to do y. In the case of breadth, the "y" simply includes things like "I ought to tell that guy over there to stop doing that." I think we're only in charge of our own oughts, but the oughts we come up with can include feeling that we ought to bring others around to our point of view. (I can object to murder not because something about the universe says murder is wrong, but because I value life; and since I value life, I ought to try to preserve it, and that includes telling someone else to knock off that murdering stuff they're trying to do, or putting them in prison so they can't do it any more!)
So we might say that, "If I value X, and someone is harming X, then I should impede them." There are two different ways someone can be impeded: coercion or persuasion. You seem to have drawn the conclusion that the moralist is irrational and unpersuasive, and therefore must resort to some form of coercion. That is, since there are no sound arguments for "ought claims," persuasion is not possible. The only exception would be cases where the moralist fortuitously encounters someone who holds the same axiomatic values that he does; but the values themselves are not susceptible to scrutiny or argument (or truth and falsity).
Yes, I think that's the case; but with caveats. I think it is the case that a person needs to hold the same axiomatic values in order to affirm that they ought to comply. I don't think it's hopeless or impossible to persuade, though.
Despite thinking doxastic voluntarism is false, I think that we do go through belief revision in the face of new facts and perspectives (I just don't think we consciously control whether we are persuaded). So I think a person can be persuaded to agree with some moral statement they might not have agreed with before: perhaps because they had never considered it from the perspective your argument just gave them, or perhaps because you provided a new fact that, once it entered their value-hierarchy calculus, changed the outcome of their moral hypothetical imperatives*, etc.
(* -- I feel like this sentence wasn't clear, but I don't want to delete it. I mean that we have values, and those values are hierarchical, as in the example I gave where I value both life and property but value life more, so I might look the other way if a starving person steals bread. What I mean to say is that someone might be persuaded by learning a new fact that changes how their value calculus leads to a moral belief. For instance, a person might believe that an electric vehicle is such an overwhelming environmental good that they ought to have one; but suppose they learn a new fact that the production and maintenance of such vehicles damages the environment more: they undergo belief revision. In this case they may still have the same axiomatic values, but they do their moral calculations differently thanks to the new fact. By the way, I don't know whether that example fact is true; I'm just giving an example.)
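The "value calculus" picture above can be sketched as a toy model. This is my own illustration, not anything from the discussion: the value names, weights, and effect numbers are all invented for the sake of the example.

```python
# Toy model of a hierarchical value calculus (all names and numbers are
# invented for illustration). Values carry weights (the hierarchy), and an
# "ought" follows when an action's estimated net effect on one's values is
# positive. A new fact changes the effect estimates, flipping the
# conclusion without changing the axiomatic values themselves.
values = {"environment": 1.0, "convenience": 0.3}

def ought(effects):
    """True if the weighted net effect of an action on one's values is positive."""
    return sum(values[v] * effects.get(v, 0.0) for v in values) > 0

# Before the new fact: the EV is believed to be a net environmental good.
print(ought({"environment": +0.5, "convenience": -0.1}))  # True

# After learning the (hypothetical) fact that production does more harm,
# the same values yield a different conclusion:
print(ought({"environment": -0.4, "convenience": -0.1}))  # False
```

The point of the sketch is only that belief revision here happens in the `effects` estimates, not in the `values` dictionary.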
Likewise, since I think our moral beliefs are effectively arguments (we build them from our value axioms), it's possible for us to do that "wrong," inconsistently, inefficiently, etc., and we can learn new perspectives and facts that will persuade us.
I'm sorry if that was long-winded. I think people can be persuaded even if moral statements are non-cognitive; but whenever someone affirms a moral statement, it's because at that point in time they hold the requisite axiomatic value (whether they were recently converted to having that value, recently placed it with a different importance in their hierarchy, or already luckily [as you say] had it), and not because of some "external" ought.
I feel it necessary to point out that I believe we still can't answer the question, "ought they change their values in the face of new perspectives, evidence, etc.?" Well, that depends on whether they value doing that.
Leontiskos wrote:
Astro Cat wrote: ↑June 24th, 2022, 1:15 amI think with breadth (when the ought "feels" like it applies to others and not just us), that is where the illusion of moral realism comes from. People feel like the ought is "out there" in the universe, and that the other people are subject to this ought "out there." But really, when we feel moral outrage, we are feeling our own hypothetical imperative to stop them from harming or interfering with what we value. If I value altruism and someone disgustingly rich doesn't even lift a finger to help the less fortunate, I feel outrage because they're harming my value...
To be clear, I am a moral realist and I think much of your analysis of moral realists is mistaken, and I think it is mistaken in ways that are transparent to reason. That is, I think you are likely to eventually agree that some of your own analysis is mistaken. There are some rough areas in your theory. The first is the matter of intent, a second is the "axiomaticity" of value, and a third is this matter of moral outrage.
I'm willing to be wrong if I must (as in, to admit it and change; not to cling onto it). Sometimes it's fun. I don't think I'm there quite yet though. We will see. And I'll readily admit that my lack of formal philosophy training leads to some rough edges, though I have confidence in my ability to do some things right.
Leontiskos wrote:
With regard to the third area and the text from your quote which I bolded, it seems to me that moral outrage is altogether different than the defense of a value. If I value my house and termites invade then I will call Orkin, and if I value my wife and she is diagnosed with cancer, then I will consult an oncologist, but I do not express moral outrage at the termites or the cancerous cells. Moral outrage is rather a response to the culpably bad behavior of an agent who has free will (and is therefore responsible for their behavior).
Ok, I agree. This just means I wasn't careful enough when trying to pin down what I mean by the difference between a "moral preference" and something like a color preference. I'm reminded of the story about Diogenes responding to Plato's definition of a human as a "featherless biped" by producing a plucked chicken. It's one of those things where, yes, it's important to be as exhaustive as possible with definitions to avoid counterexamples and wrinkles, but sometimes the question becomes "does this wrinkle damage the idea being presented?"
In this case, I think all I have to do is acknowledge that moral outrage is a response to the culpable behavior of a free agent who could have done otherwise, but point out that this outrage still only exists if we hold a value about that behavior. The vegetarian example works nicely. Vegetarian A believes in not eating meat themselves, but doesn't mind when their friend orders a steak. Vegetarian B gets morally outraged if their friend orders the steak. I think this is explainable under the non-cognitivist picture as easily as under the realist picture: in the non-cognitivist picture, Vegetarian B has a value to enforce another value, whereas Vegetarian A does not.
If I think about it, some people have values about enforcing things like their color preferences, too. While not strictly about a color, I think the controversy over changing the name of the Washington Redskins team is the same sort of thing. A great many people wanted to enforce their preference, and interestingly, some of them wanted to enforce a change for moral reasons, while many on the opposing side seemed to want to keep the name for reasons I'd find hard to call moral.
Leontiskos wrote:
Let's take hypocrisy rather than altruism, because it is an easier case. Now if hypocrisy is a vice, then the corresponding virtue must be something like integrity. When someone rebukes a hypocrite we might provide an analysis which says that the one rebuking holds a value (integrity); the one being rebuked has harmed or interfered with her value (hypocrisy); and therefore in order to defend her value she must rebuke the hypocrite. That is an interesting analysis, but it feels a bit clumsy to me. It feels clumsy because it gives the impression that some object is valued which must then be defended from those who would harm or dishonor it. But is that really what is happening when we form moral judgments? It seems to me that what is valued is rather some norm of behavior and little else. The hypocrite has not transgressed a valued object; he has acted badly, in a way unbecoming of human beings. We might say that he is "Behaving like a brute," or that, "He should know better."
I do disagree that it's clumsy, because it seems to me that we really do check our values against others' behavior. For some on this planet, it's unimaginable to dance or sing in public, while many people don't have a value regarding that, so they don't bat an eye. I think it does get clouded by the fact that valuing a cultural (or subcultural!) norm is itself a value that a person either has or doesn't.
If this weren't something we checked against our internal values every time, then there would be some universal idea of "behaving like a brute." We have things that come close, like admonitions against murder, but isn't it interesting that the things that are the most nearly universal probably have evolutionary explanations? (I think the reason many of us agree on many values is a combination of evolutionary history/nature and culture/nurture!)
Leontiskos wrote:Whether or not you agree that the value-object approach is clumsy, all of this runs right into the second question of the axiomaticity of value. Presumably if you ask someone why they value X, they will tell you that they value X because it is (objectively) valuable. This moves us back to intent, for it leads us to the idea that the interpretation of the moral realist's locution is at variance with the intent of their locution. To say that the moral realist is merely talking about preferences or axiomatic values when they themselves clearly deny this interpretation is to "put words in their mouth" (or more precisely, "intentions in their minds"). ...And it may be that we are in agreement here, and the only question is where the supposed error of the moral realist lies.
I see! I didn't consider this objection. I suppose it is putting words in their mouth. So let me try to clarify.
I think if someone responds, "because value X is objectively valuable," they are uttering something non-cognitive. I think they might as well be saying "'Twas brillig, and the slithy toves did gyre and gimble in the wabe." So I shouldn't say that I'm denying they think something -- I won't put words in their mouth. This may be a different topic, but I think it's possible for people to think something non-cognitive has meaning, and to behave as if it does, when in fact it does not. This is where I think the realist makes a mistake. I don't think they form a real, cognizable picture of what they mean when they say "X is objectively valuable."
For instance, for a long time Frege (and naive set theory generally) was perfectly comfortable under the illusion that it's meaningful to talk about "the set of all sets which do not contain themselves." One could ostensibly have conversations using this phrase and feel like something cognizable was being expressed. There was an illusion of cognizability (that's a word now, I've decided), shattered by Russell. I think something like this is happening with moral realists. I won't put words in their mouth, but I think the words coming out of their mouth are non-cognitive, and I think that applies to them as well, just unbeknownst to them.
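As an aside, the Russell construction can even be made to "run." Here's a sketch of my own (not anything from the thread): if we model a "set" as a membership predicate, the contradiction shows up as self-reference that can never settle on an answer.

```python
# Model a "set" as a membership predicate: a function that, given a
# candidate set, answers whether the candidate is a member.
# Russell's set R contains exactly those sets that do not contain themselves:
R = lambda s: not s(s)

# Asking whether R contains itself demands that R(R) equal not R(R), so the
# evaluation can never settle; Python surfaces this as a RecursionError
# rather than an answer.
try:
    R(R)
except RecursionError:
    print("no consistent answer: the phrase fails to pick anything out")
```

The grammatical well-formedness of `R = lambda s: not s(s)` is exactly the "illusion of cognizability": the definition reads fine, but the question it invites has no answer.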
Leontiskos wrote:
You may have intuited by now that I am a moral realist who believes practical knowledge is propositional. I think S1a, the claim about the piece of art, and even S1 are propositional (although whether the speaker intended it to be propositional must be assessed on a case by case basis).
Admission of practical knowledge may come back to haunt you in the PoE thread ;P (but I tease; I have nothing specific in mind, it just feels like it could be a thing over there).
Leontiskos wrote:The key for S1 is understanding what is meant by "tasty" and the key for S1a is understanding what is meant by "good". The colloquial object of correspondence for S1 is whether string cheese satisfies, in general, human desires for taste. If I have never encountered string cheese, then when my friend offers me some and tells me it is tasty, I will know exactly what he is claiming. In curiosity I may well go on to ask myself whether his claim is true or false, and I will verify the claim by tasting the cheese. (I will leave it there for now because I feel like I've already written enough or too much. )
What if I made things less ambiguous? If I'm trying to form a genuinely subjective example, suppose I make a new S1:
S1: Gouda tastes better than string cheese.
What if I insist that by S1, I don't mean S2: Cat thinks gouda tastes better than string cheese.
Would S1 be non-propositional if we insist it's not a truncated S2?
Leontiskos wrote:Here is an excerpt from earlier in the thread that touches on a similar topic:
...
Again, I don't think these Humean inheritances are helpful or accurate, but courage is good qua military and structurally sound bridges are good qua the definition of bridge. These are not extrinsic considerations, they are built into the nature of a military or the nature of a bridge. And again I would say that good is an abstract concept insofar as one must designate the object of goodness before knowing the precise meaning of goodness in some particular utterance, but there is also a common meaning across objects.
(Emphasis added)
I think I agree. But I'm struck by something here: there seems to be resistance to allowing S1 to be completely subjective. Some suggest that it must be a truncated S2; some suggest that it becomes propositional once we know who is doing the defining, because it satisfies some property of that person; and so on. I am told, basically, that it makes little or no sense for S1 to be non-propositional. But isn't that my point? That it does make little sense, but that people do this anyway? This is more of what I was talking about above, where I said I think people talk about non-cognizable things all the time without realizing it.