GE Morton wrote: ↑August 19th, 2021, 7:45 pm
Leontiskos wrote: ↑August 18th, 2021, 8:56 pm
I think your moral proposal is plausible and coherent, and it strikes me as a form of utilitarianism.
As long as that term is understood loosely. Any rational moral theory must be consequentialist in the long run.
Do you take deontology to be irrational?
But I think the system is also utilitarian in the sense that it is based on maximizing quantified units of subjectively defined well-being. That is, the goals of each participant are collected, quantified, and analyzed until the system arrives at some maximal solution. Obviously the rules of that algorithm will have to be logically prior to the goals the algorithm analyzes, and the particular rules the algorithm uses will determine the form of utilitarianism.
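Just to make concrete what I mean by the rules being "logically prior," here is a minimal, purely hypothetical sketch in Python (my own illustration, not anything you have proposed). The participants, the options, and the sum-of-welfare rule are all invented for the example; the point is only that a rule like "sum the reported scores" must be fixed before anyone's goals are fed in, and swapping in a different rule (say, maximizing the worst-off participant's score) yields a different form of utilitarianism.

```python
# Hypothetical illustration only: the aggregation rule is fixed prior to
# the goals it analyzes. Names and numbers are invented for the example.
from typing import Dict

# Each participant reports a subjectively defined welfare score for each option.
reports: Dict[str, Dict[str, float]] = {
    "alice": {"policy_a": 0.9, "policy_b": 0.2},
    "bob":   {"policy_a": 0.1, "policy_b": 0.8},
    "carol": {"policy_a": 0.5, "policy_b": 0.6},
}

def maximal_solution(reports: Dict[str, Dict[str, float]]) -> str:
    """Return the option with the highest total reported welfare.

    The choice of 'total' (rather than, e.g., the minimum or the average)
    is the logically prior rule that determines which form of
    utilitarianism the system implements.
    """
    options = next(iter(reports.values())).keys()
    return max(options, key=lambda option: sum(r[option] for r in reports.values()))

print(maximal_solution(reports))  # policy_b wins: total 1.6 vs. 1.5 for policy_a
```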
GE Morton wrote: ↑August 19th, 2021, 7:45 pm
Leontiskos wrote: ↑August 18th, 2021, 8:56 pm
As I understand it you have constructed a system where each participant's subjectively-defined welfare is 'equally' advanced, and the system is available to anyone who wishes to opt in. It is quasi-objective in the sense that the desires of each participant are publicly known and are the very thing that constitutes the system itself.
Not (necessarily) equally advanced. The extent to which any individual's interests or welfare is advanced will depend upon many variable factors, particularly his own talents, strengths, diligence, and ambition, as well as "dumb luck." The only thing equal (per theory) is that the same rules apply to all agents.
Okay.
GE Morton wrote: ↑August 19th, 2021, 7:45 pm
Leontiskos wrote: ↑August 18th, 2021, 8:56 pm
GE Morton wrote: ↑January 25th, 2020, 7:03 pm
Though what each person counts as a good or evil is subjective, that they do consider various things as goods or evils is objective. So a goal to the effect, "Develop principles and rules of interaction which will allow all agents to maximize welfare as each defines it" is a morally neutral goal; it is universal, it assumes no values and begs no moral questions.
I do not agree that this systematic goal is morally neutral. By my definition something is moral if it presupposes a normative state of affairs for human thoughts, judgments, or behavior. But your principle implies that human thoughts, judgments, and behavior ought to be oriented towards this particular goal. It is a very democratic and morally thin goal, but it is moral all the same. If you want to maintain that the goal is morally neutral, could you give your definition of moral neutrality?
" . . . your principle implies that human thoughts, judgments, and behavior ought to be oriented towards this particular goal."
No, it does not. It does not recommend any particular goal. It does, however, assume that all moral systems have some goal, some raison d'etre (which means that moral "oughts" are instrumental "oughts," i.e., rules or practices which objectively advance or facilitate that goal). It then assumes that the overriding goal, purpose, of most moral theories over the centuries has been to advance human welfare, per some understanding of that term (even religious moralities, by assuring adherents of an "afterlife").
Isn't the goal then welfare, maximized in some particular way? Below you affirm that, "The theory does, to be sure, propose and assume a particular conception of human welfare."
To be clear, are you attempting to propose a novel "moral" system, or is this a rhetorical way of describing the state in which human society in fact finds itself?
GE Morton wrote: ↑August 19th, 2021, 7:45 pm
The theory does not, however, assert any obligation to adopt that goal, any more than the rules of baseball oblige anyone to play that game. As I've said on several occasions, someone who does not share that goal --- e.g., egoists, elitists, amoralists, et al. --- will have no use for my moral theory. Their rejection of it doesn't render it unsound, however.
Oh, I didn't mean to imply that you were foisting this goal upon people unawares. That is why I have been using the term "participant," which is meant to refer to those who decide to participate in your system.
Why do you believe the system is sound? Why do you think people should participate?
GE Morton wrote: ↑August 19th, 2021, 7:45 pm
Leontiskos wrote: ↑August 18th, 2021, 8:56 pm
GE Morton wrote: ↑January 25th, 2020, 7:03 pm
The goal of a theory, however, is not a personal goal; it does not assert any particular interest of any particular person. It is indifferent to personal goals. But it does require a consensus among everyone interested in a viable theory of the subject matter in question. There is, I think, a consensus that the aim of ethics is to secure and advance "the good," or "the good life," in some sense. If there is, and if we agree that what constitutes "the good" or "the good life" differs from person to person, then the goal stated above becomes "quasi-objective."
Along the same lines as my last point, there are some personal goals that are common to all participants, and these would be goals such as, "To have a working system," "To have equal representation," "To achieve a particular conception of maximized welfare," etc. Those are the only personal goals presupposed by your theory, which is beginning to look a lot like classical liberalism.
I'm not sure who you're counting as a "participant." Certainly everyone who has faced any moral dilemma or given any thought to moral theory will share the goal of devising a satisfying and workable morality. But that is obviously not true of everyone in any modern society. Nor, surely, is a desire to "achieve a particular conception of maximized welfare." The theory does, to be sure, propose and assume a particular conception of human welfare. Whether that conception is sound and rationally defensible is a separate, non-moral, question.
Can I ask why you define moral 'oughts' in instrumental terms? Is it just because you think categorical 'oughts' don't exist, and so every goal of human action must be subjective, leaving the means as the only possible "objectively moral" candidate? The goals seem moral in the common sense of the word, so it strikes me as odd to exclude them from being called moral. I don't find anything in the definitions or etymologies of 'moral' that would restrict it to an instrumental concept.
GE Morton wrote: ↑August 19th, 2021, 7:45 pm
Leontiskos wrote: ↑August 18th, 2021, 8:56 pm
I also don't see how the Equal Agency postulate could fail to be a moral principle. That everyone ought to be treated the same is a moral principle analogous to the Equal Protection clause of the 14th Amendment.
The theory defines "Moral Agent." The Equal Agency postulate follows from that definition, i.e., the rules apply in the same way to anyone who qualifies as a moral agent, there being no basis in the theory for applying them differently to different agents. The postulate has moral import, of course, but it is not per se a moral imperative; it is a logical one.
When you say it is logical rather than moral, what do you mean? Given your definition above it would seem to be moral insofar as it "objectively advances or facilitates" the overriding goal of your system. Or are you here using 'moral' in a more casual sense? Even so, I'm curious what makes it logical rather than moral, both because everyone imagines that their morality is logical, and because any moral norm could presumably be expressed in the language of logic. I come from an Aristotelian background where virtuous action is tied up with man's rational nature, and thus all moral acts are related to rationality.
GE Morton wrote: ↑August 19th, 2021, 7:45 pm
BTW, the Equal Agency postulate does not entail that "everyone ought to be treated the same." It only entails that the principles and rules of the theory apply equally to all. How people "treat" one another covers far more ground. It is (as you suggested) similar to the legal concept of "equal protection of the law" --- e.g., everyone, white, black, male, female, old, young, rich, poor, etc. --- who runs a red light pays the same fine. But it doesn't require you to invite all of your neighbors to your backyard BBQ.
Right, I understand that.
My initial criticism of your system is that it seems too vague to do any meaningful work. Human beings are liable to define 'welfare' in dramatically opposed ways--indeed, this is one of your basic premises. Only on the presupposition that their definitions strongly converge could meaningful principles and rules of interaction be implemented. In that case you would arrive at a democratically derived common good. A well-worn definition of law is something like, "An ordinance for the sake of the common good" (cf. Wikipedia: Common Good). Do you think a democratic legislature with no constitution or charter would produce laws reflecting your overriding goal?
-Leontiskos