Leontiskos wrote: ↑August 18th, 2021, 8:56 pm
I think your moral proposal is plausible and coherent, and it strikes me as a form of utilitarianism.
As long as that term is understood loosely. Any rational moral theory must be consequentialist in the long run.
As I understand it, you have constructed a system where each participant's subjectively-defined welfare is 'equally' advanced, and the system is available to anyone who wishes to opt in. It is quasi-objective in the sense that the desires of each participant are publicly known and are the very thing that constitutes the system itself.
Not (necessarily) equally advanced. The extent to which any individual's interests or welfare is advanced will depend upon many variable factors, particularly his own talents, strengths, diligence, and ambition, as well as "dumb luck." The only thing equal (per the theory) is that the same rules apply to all agents.
GE Morton wrote: ↑January 25th, 2020, 7:03 pmThough what each person counts as a good or evil is subjective, that they do consider various things as goods or evils is objective. So a goal to the effect, "Develop principles and rules of interaction which will allow all agents to maximize welfare as each defines it" is a morally neutral goal; it is universal, it assumes no values and begs no moral questions.
I do not agree that this systematic goal is morally neutral. By my definition something is moral if it presupposes a normative state of affairs for human thoughts, judgments, or behavior. But your principle implies that human thoughts, judgments, and behavior ought to be oriented towards this particular goal. It is a very democratic and morally thin goal, but it is moral all the same. If you want to maintain that the goal is morally neutral could you give your definition of moral neutrality?
" . . . your principle implies that human thoughts, judgments, and behavior ought to be oriented towards this particular goal."
No, it does not. It does not recommend any particular goal. It does, however, assume that all moral systems have some goal, some raison d'être (which means that moral "oughts" are instrumental "oughts," i.e., rules or practices which objectively advance or facilitate that goal). It then assumes that the overriding goal, or purpose, of most moral theories over the centuries has been to advance human welfare, per some understanding of that term (even religious moralities, by assuring adherents of an "afterlife").
The theory does not, however, assert any obligation to adopt that goal, any more than the rules of baseball oblige anyone to play that game. As I've said on several occasions, someone who does not share that goal --- e.g., egoists, elitists, amoralists, et al. --- will have no use for my moral theory. Their rejection of it doesn't render it unsound, however.
GE Morton wrote: ↑January 25th, 2020, 7:03 pmThe goal of a theory, however, is not a personal goal; it does not assert any particular interest of any particular person. It is indifferent to personal goals. But it does require a consensus among everyone interested in a viable theory of the subject matter in question. There is, I think, a consensus that the aim of ethics is to secure and advance "the good," or "the good life," in some sense. If there is, and if we agree that what constitutes "the good" or "the good life" differs from person to person, then the goal stated above becomes "quasi-objective."
Along the same lines as my last point, there are some personal goals that are common to all participants, and these would be goals such as, "To have a working system," "To have equal representation," "To achieve a particular conception of maximized welfare," etc. Those are the only personal goals presupposed by your theory, which is beginning to look a lot like classical liberalism.
I'm not sure who you're counting as a "participant." Certainly everyone who has faced any moral dilemma or given any thought to moral theory will share the goal of devising a satisfying and workable morality. But that is obviously not true of everyone in any modern society. Nor, surely, is a desire to "achieve a particular conception of maximized welfare." The theory does, to be sure, propose and assume a particular conception of human welfare. Whether that conception is sound and rationally defensible is a separate, non-moral, question.
I also don't see how the Equal Agency postulate could fail to be a moral principle. That everyone ought to be treated the same is a moral principle analogous to the Equal Protection clause of the 14th Amendment.
The theory defines "Moral Agent." The Equal Agency postulate follows from that definition, i.e., the rules apply in the same way to anyone who qualifies as a moral agent, there being no basis in the theory for applying them differently to different agents. The postulate has moral import, of course, but it is not per se a moral imperative; it is a logical one.
BTW, the Equal Agency postulate does not entail that "everyone ought to be treated the same." It only entails that the principles and rules of the theory apply equally to all. How people "treat" one another covers far more ground. It is (as you suggested) similar to the legal concept of "equal protection of the law" --- e.g., everyone, white, black, male, female, old, young, rich, poor, etc. --- who runs a red light pays the same fine. But it doesn't require you to invite all of your neighbors to your backyard BBQ.