Mohism and Maximizing

In thinking about Mohism lately, I’ve wondered to myself if the fact that maximizing benefit (li 利) is not explicitly a part of Mozi’s doctrines makes a difference. Though it is clear that benefit is supposed to be the primary consideration in deciding what to do or what policies to adopt, there isn’t some explicit accompanying doctrine of producing the overall best state of affairs construed in terms of benefit, as there is in classical utilitarianism, for example. This could divide into at least two questions: (A) Can it simply be assumed that if someone thinks beneficial consequences should be the primary motive for acting, or rationale for policy, that she also thinks maximizing such consequences should be the primary motive? (B) If maximizing consequences weren’t part of the Mohist view, would it still count as a form of consequentialism?

Obviously, in part I’m interested (in question B) because it makes a difference for how to categorize Mohism in the spectrum of types of ethical theory. But I’m also interested (via an answer to A) in whether there might plausibly be a systematic, primary role for consequence-reasoning that isn’t committed to some kind of maximizing rationality.

Any thoughts?

38 thoughts on “Mohism and Maximizing”

  1. This is a question that has been interesting me too. Once in a seminar T. M. Scanlon proposed an alternative simple formula (not thinking about the Mozi) that he found attractive as a basic decision-procedure: “Avoid known threats.” Offhand I think that fits the Mozi pretty well. One might put it this way: aim to secure a satisfactory state of affairs.

    One could defend such a rule on maximizing foundations, but it isn’t a maximizing rule. But I’m going to back off now for a while and let other people talk.

  2. I don’t recall that it’s in any of his publications, but I haven’t read everything. The remark was an informal one made to a small group. It stuck with me because I think it’s sensible, so far as it goes. Of course it doesn’t go very far by itself, because it doesn’t say what the threats to be avoided are threats to.

  3. Nobody yet? OK, well here’s a small start.

    “(A) Can it simply be assumed that if someone thinks beneficial consequences should be the primary motive for acting, or rationale for policy, that she also thinks maximizing such consequences should be the primary motive?”

    Toward giving a simple answer, I’ll simplify the question by replacing “primary” by “sole” or “sole fundamental,” and I’ll assume that for the moment we’re not worried about self-defeatingness or self-effacingness of motivations or rationales. (I think such non-worry is radically unrealistic though.)

    The simple answer is that if consequences are the sole motivator or justifier, in such a way that options are sometimes of unequal consequential value, then as between any two unequal options one should take the one with the better consequential value, and that’s maximization of consequential value.

    But that’s not the same as maximizing good consequences.
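
    To fix ideas about the first half of that claim, here is a toy sketch (my own, in Python just because it’s handy; the numbers are invented): choosing between options pairwise by consequential value just is selecting the option of maximal consequential value.

    # Toy illustration (invented numbers): pairwise choice by
    # consequential value amounts to picking the maximal option.
    options = {"A": 3, "B": 7, "C": 5}  # option -> consequential value

    def pairwise_choice(opts):
        # Keep whichever member of each successive pair has the better value.
        best = None
        for name in opts:
            if best is None or opts[name] > opts[best]:
                best = name
        return best

    assert pairwise_choice(options) == max(options, key=options.get)  # "B"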

    We can distinguish at least three dimensions or factors of consequential value: reasons why consequential value can admit of differences in degree. One is that outcomes themselves can differ in quantity (there can be more or less happiness). Another is that an action can make an outcome more or less objectively likely (can promote that consequence to a greater or lesser degree). A third is that the previous two sorts of thing can be more or less knowable to the agent in deliberation.

    One might try to define a consequentialist view such that it turns out that there is little difference of degree in consequential value among actions that are consequentially good. If there were just two sizes of fork, then they’d be the big forks and the little forks, and we’d have little use for the complex idea of “maximally big fork”. “Big fork” would hardly even be a “comparative term.” Similarly, if one can define a view that has the effect of sorting all our actions into two groups, such that the actions within each group have equal consequential value, then given that view, the idea of maximizing would be kind of pointless.

    One might think one could largely remove degrees from the first dimension by defining one’s consequentialism not in terms of a quantity like pleasure, but rather in terms of a state of affairs like justice or national health insurance that could simply be brought about and maintained. But “maintaining” it pertains to how long it lasts, and that’s a matter of degree. To remove degrees from that dimension one would have to be aiming at a state of affairs that’s completely self-sustaining (such as the non-existence of the universe, if that’s self-sustaining). And *insofar* as the state of affairs is self-sustaining (so that perpetuating it, as distinct from establishing it, is not something we need worry about in decisionmaking), that consequentialism would give us no practical advice after the state of affairs has in fact been established.

    One might remove degrees from the first dimension by defining one’s consequentialism in terms of a one-off kind of outcome: for example, one might hold that an action is good insofar as it promotes Obama’s re-election in 2012 (so that nothing afterwards can be good or bad). Whether he wins is not itself a matter of degree.

    But that still leaves the other two dimensions.

    One might eliminate the epistemic dimension by fiat, saying that one’s consequentialism isn’t meant to be a guide for decisionmaking, but only a theory about which actions are good things to happen. But question (A) is asking about decisionmaking.

    I think there are more interesting moves to make in the epistemic dimension, but they all involve the idea that articulately maximizing reasoning is in some degree self-defeating or -effacing.

  4. Hey Bill,

    I think you’re right about this: “…if consequences are the sole motivator or justifier, in such a way that options are sometimes of unequal consequential value, then as between any two unequal options one should take the one with the better consequential value, and that’s maximization of consequential value. But that’s not the same as maximizing good consequences.”

    I think I was getting hung up on exactly this point, which led me to wonder about (A).

    I don’t think the self-defeating or self-effacing issues matter for thinking about Mozi, since he doesn’t seem interested in maximizing consequences.

    So, I guess the question I’m far less clear about is (B)–could Mozi really be considered a consequentialist, in a serious way, if he isn’t promoting maximization of consequences? Instead, his view seems to be about what the sole fundamental decision principle ought to be in any particular moment of choice; and he thinks it is choosing the policy or course of action that brings about the better state of affairs–more benefit–than the other available option(s) would.

    (Rapidly losing waking consciousness here; I’ll stop and maybe continue this thought later…)

  5. Manyul, I don’t understand.

    One way to read your last paragraph is this: you’re saying Mozi doesn’t think (a) we should maximize consequences (or maximize consequential value), because instead he thinks (b) as between any two options we should choose the one that has more good consequences (or more consequential value).

    But at the beginning of your comment you say you agree with my claim that (a) and (b) are the same thing (though for each of them the version inside parentheses is different from the version outside).

    But maybe what your last paragraph is doing instead is saying that instead of deciding that X-type consequences are good and then looking to maximize X, Mozi simply looks to maximize the having of good consequences (i.e. “benefit”), leaving it open what kind of consequence is good or even whether there is any general answer to that question.

    A third reading of your last paragraph is that you’re saying he is not trying to offer a view that will maximize consequences; rather he is saying “Always try to maximize consequences.” But that would be a distinction without a difference according to you, since you “don’t think the self-defeating or self-effacing issues matter for thinking about Mozi.”

    Mozi sounds to me like Bentham sometimes, as in this passage from the middle essay against offensive war, here in Mei’s translation from Sturgeon’s site:

    “Now, about a country going to war. If it is in winter it will be too cold; if it is in summer it will be too hot. So it should be neither in winter nor in summer. If it is in spring it will take people away from sowing and planting; if it is in autumn it will take people away from reaping and harvesting. Should they be taken away in either of these seasons, innumerable people would die of hunger and cold. And, when the army sets out, the bamboo arrows, the feather flags, the house tents, the armour, the shields, the sword hilts — innumerable quantities of these will break and rot and never come back. The spears, the lances, the swords, the poniards, the chariots, the carts — innumerable quantities of these will break and rot and never come back. Then innumerable horses and oxen will start out fat and come back lean or will not return at all. And innumerable people will die because their food will be cut off and cannot be supplied on account of the great distances of the roads. And innumerable people will be sick and die of the constant danger and the irregularity of eating and drinking and the extremes of hunger and over-eating. Then, the army will be lost in large numbers or entirely; in either case the number will be innumerable. And this means the spirits will lose their worshippers, and the number of these will also be innumerable.”

  6. Hi Bill,

    Right; I’m not being very clear–partly because it’s not very clear in my thoughts, so I’m trying to think out loud. Let me try setting out a few things more clearly to see where I’m having trouble:

    M1) If Mozi does not hold that one ought to bring about the best consequences overall, then self-effacing and self-defeating issues don’t arise.

    If those issues arise in others’ views, it is because they take as a directly action-guiding principle that one ought to bring about the best possible consequences overall. Then, deliberation about those very consequences can itself, in a variety of ways, prevent the agent from bringing about the best consequences overall.

    M2) Mozi does not hold that one ought to bring about the best consequences overall. (So the antecedent of M1 is satisfied, and its consequent follows.)

    Instead,

    M3) Mozi holds that consideration of benefit should be the sole consideration in deciding what action or policy to adopt in any particular circumstance.

    Now, my hunch, and what I wanted to say above, was that M2 and M3 are consistent, so that Mozi’s view, though based on consideration of benefit as the sole action-guiding principle, is not the same as a view that takes maximization of good consequences as the sole action-guiding principle.

    My question at this point is what M3 implies about systematically maximizing benefit. And here, I’m not really sure.

    The Mozi passage above seems to say, “Wars only cause a variety of harms; such harms are not beneficial; consideration of benefit dictates that one ought not to go to war (as a policy? ever?).” But couldn’t one make this argument without recourse to considerations of maximizing benefit?

    1. Wars are harmful.
    2. What is harmful is not beneficial.
    3. I ought only to do what is beneficial.
    4. Therefore, I ought not to engage in wars.

    Maybe a maximizing principle lurks here somewhere, or throughout. I have to admit I’m slightly worried about that. What do you think?

  7. What I’m not seeing is any difference between these principles:

    PB: Choose the option that (as opposed to the others) benefits.
    PM: Choose the option that (as opposed to the others) maximizes good outcomes.

    Other points that seem less important:

    I think principles or ends other than act-utilitarianism can be self-defeating or self-effacing, so I deny M1.

    Also I think there’s no sharp line between an end’s being self-defeating and its being such as to be best pursued by a somewhat roundabout means. Thus (to put the point too simply) I think some level of self-defeatingness is really common for important ends. That’s a fact of life that shapes practical thought.

    (And I’m worried about the phrase “bring about the best consequences.” I think it suggests this picture: there is a state of affairs or total future that is “the best” in general – all the ducks being in a row – and an action is good if it sufficiently causes that one. That picture is radically false to standard consequentialism.)

  8. Bill,

    The principle of benefit (PB) is indifferent among beneficial options that are equally available to the agent, only favoring any one of them over harmful options. Whereas, I take it the principle of maximizing good outcomes (PM) would regard the choice among all available options to be determined on a continuum of least to greatest benefit, so it could only be indifferent among equally “best” options as determined on that scale. So, PB is not necessarily committed to ranking among beneficial outcomes. Wouldn’t that make for a reasonable difference between PB and PM?
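
    Maybe a crude sketch makes the structural difference plainer (my own toy model, with invented benefit numbers; positive counts as beneficial, negative as harmful):

    # PB treats every beneficial option as acceptable, and is indifferent
    # among them; PM accepts only options tied for the greatest benefit.
    options = {"w": -2, "x": 1, "y": 4, "z": 9}  # option -> benefit

    def pb_permissible(opts):
        return {name for name, b in opts.items() if b > 0}

    def pm_permissible(opts):
        top = max(opts.values())
        return {name for name, b in opts.items() if b == top}

    print(pb_permissible(options))  # {'x', 'y', 'z'} (in some order)
    print(pm_permissible(options))  # {'z'}

    Notice that PB needs only the benefit/harm line, not the full ranking.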

    The coherence of PB without PM would depend on a further story. That story could have to do with criticism of the scalability of benefits–some kind of defense of benefit-pluralism and an associated incommensurability claim. Or, it might simply be a view that has not considered scalability and maximization of benefits because, as is the case with the Mohists, that kind of talk hasn’t been broached by anyone yet. Instead, the conception assumes some kind of simpler benefit/harm bivalence.

    (Somewhat beside the point, but this reminds me a bit of Hume or Hutcheson’s derivation of at least a class of right action from considerations of what the virtue of beneficence aims at. It’s not clear that the motives involved in beneficence are maximization-directed, though they of course are directed at beneficial outcomes.)

  9. Manyul, that was crystal clear.

    You point out, and I agree, “The coherence of PB without PM would depend on a further story. That story could have to do with [A] criticism of the scalability of benefits–some kind of defense of benefit-pluralism and an associated incommensurability claim. Or, it might simply be [B] a view that has not considered scalability and maximization of benefits because, as is the case with the Mohists, that kind of talk hasn’t been broached by anyone yet. Instead, the conception assumes some kind of simpler benefit/harm bivalence.”

    Version A has real problems though.

    If the incommensurability between kinds of good is absolute, then no actions that have a positive effect on one kind and a negative effect on another can count as beneficial, no matter how small the negative effect and how large the positive. (Or they all do: so with every genocide, plant a flower.) If there are not 3 but 30 kinds of good, then options that lack such two-way consequences may be quite rare.

    Once we compromise on absolute incommensurability, bringing in approximate or fuzzy quantification, we’ve made room for max to rear his head.

    Version B seems more promising for purposes of interpreting Mozi, or at least less unpromising. I want to pore through the text more.
    ______________________

    Anyway there’s a further problem about the coherence of PB without PM.

    Suppose there is a circumstance in which the only impact someone can have is on just one of the kinds. And in that circumstance, suppose Smith has ten options, each of different consequential value, option 5 being “just stand there”; and in a largely parallel universe Jones is in the same circumstance as Smith but because of some pegs in his joints he has only the best two of the ten options (options 9 and 10). And they see all the consequences. According to PB, Smith may do 9. May Jones?

    I tend to assume that it doesn’t make sense to speak simpliciter of the consequences of action or event A; what has consequences is A-as-opposed-to-B. That is, if we simply ask “What will happen if I do A?” we aren’t asking a question that picks out some of what will happen as attributable to A and some of it as not attributable to A. (If we don’t draw such a distinction, then the question whether A is beneficial seems to be the same as the question whether the whole future history of the universe if I do A is better than nothing. And probably the future is going to be good or bad enough anyway so that either all my options pass that test or none do.)

    But we do often speak of the consequences of particular actions or events. I think in such cases we have in the back of our minds some Alternative to A — such as (a) sitting quietly, or (b) what we normally do, or (c) what we would otherwise do this afternoon, or (d) what the interlocutor has proposed — by which we measure what outcomes might be attributable to A. If PB measures the benefit of our actions using such alternatives as these, it would seem to make the truth of the statement “Option O is beneficial” depend on what tacit alternatives the conversants happen to have in mind rather than on a comparison with the range of actual alternatives to O.
    ______________________

    You don’t seem concerned about the second and third “dimensions” I named in #4, and I’m not sure why, unless it’s that you think the Mohists just weren’t having such thoughts. Which may also be an adequate answer to what I said just above the above line, for purposes of interpreting the Mozi.

    Or maybe the thought is that a focus on practical principles rather than actual results sweeps those dimensions away because it sweeps away any worries about failure or wrong guesses. But I think that thought wouldn’t be right. For if I see (as I might) that my options would yield only probabilities of future X, a principle that speaks only of what in my view “will” cause X isn’t speaking to my case.
    ______________________

    The point about beneficence is very interesting, but I don’t have a comment on that (at least not yet).

  10. In my previous comment, at the end of the paragraph beginning “I tend to assume,” I wrote: “And probably the future is going to be good or bad enough anyway so that either all my options pass that test or none do.”

    That’s not right, if there are incommensurable goods. If there are incommensurable goods, then almost certainly in almost all cases the future is neither good nor bad overall.

  11. I’m a formalism-loving MF, so maybe I’ll see if I can make some sense of what you’re asking.

    Your questions:

    (A) Q: Can it simply be assumed that if someone thinks beneficial consequences should be the primary motive for acting, or rationale for policy, that she also thinks maximizing such consequences should be the primary motive?

    (A) A: Let’s assess the meanings of “beneficial consequences should be the primary motive for acting” and “the maximization of beneficial consequences should be the primary motive.” The first only states that there should be a quantity of beneficial consequences, while the second says that the quantity of beneficial consequences should be the highest it can possibly be. That points me to answer no, the reason being that people can seek beneficial consequences as their primary motives, but not want them to be at the highest amounts.

    For a sort of anecdotal example, consider a gambler with a 99.99% victory rate, who could essentially choose whether he won or lost. A gambler of this sort never wants that kind of victory rate, since anyone with basic math sense would never continue gambling against such a person. A good gambler, then, would want some other people to win, just enough that he could play against them and win the money back later. In this sense, he doesn’t want to maximize benefit, not for himself nor for everybody else, since he would then either have no competitors or no money with which to continue betting, and in either case would be denied a chance to increase his own profit. Thus, he could want a high rate of victory for himself and some wins for others, but he wouldn’t want as many people as possible to be victorious, since that would thin out the number of games he had to play.

    (B) Q: If maximizing consequences weren’t part of the Mohist view, would it still count as a form of consequentialism?

    (B) A: I think my above example cites a consequentialism that is non-maximal, but I should give a more direct answer.

    If consequentialism is the belief that the value of an action is entirely determined by the value of its consequences (Oxford Dictionary of Philosophy), there is nothing in that which strictly requires that the value be highest in order for the action to be truly valuable.

    It may burst your Mohist bubble, but it was Yang Zhu, a bitter rival of Mozi’s thought, who was actually quite important in making just these matters distinct:

    “A grand house, fine clothes, good food, beautiful women — if you have these four, what more do you need from outside yourself? One who has them yet seeks more from outside himself has an insatiable nature. An insatiable nature is a grub eating away at one’s vital forces” (Graham).

    I think you’re right that B does make an interesting distinction, where people want moderately good things to happen, but don’t want things to be the best they can possibly be.

    I don’t want to be trite, but my favorite childhood and adult television show ever (“Duckman”) contains an episode that concludes something along these lines: “The fundamental paradox of modern society is that the perfect world is actually an imperfect world, since it is the imperfections in the world that give people the drive to make things better.”

  12. Joshua, I agree with most of what you’ve just said. You’ve brought us back to the original question. All of my ravings above are a little bit off the topic of Manyul’s question (A), since I was talking about the sole-motive case rather than the primary-motive case.

    If we’re talking about the sole-motive case, then it seems fair to say that as between any pair of options between which I should have any value preference, my preference should be based on consequential value.

    But if we’re talking about the primary motive case, then we can fairly say that my primary motive in shaping my life – i.e. my primary motive simpliciter – doesn’t have to be the primary motive for each of my value preferences as between pairs of options. (If it did, then it’s not clear that there would be a distinction between the sole motive view and the primary motive view.) Other considerations can come in at the margins.

    The good gambler you describe sounds like someone trying to maximize his long-run take, and who isn’t confusing that with maximizing his victory percentage (overall or in the short run).

    I think it’s unnatural, usually, to think in terms of overall pleasure or happiness rather than in terms of external goods such as houses. But I think when we do think in terms of houses, we tend to be thinking tacitly in terms of the happiness they generate. Still our tacit concern to maximize happiness can be reflected in non-maximizing concerns about external goods – by a remote analogy to the good gambler. How about that?

    (My complaint about the Oxford dictionary definition of consequentialism is that it is too quick with the notion “the consequences of an action”.)

  13. Joshua, I’ve decided that I agree with more of what you’ve said than I’ve so far said, and I want to try to formulate it clearly enough to distinguish it from what I don’t think.

    To keep things racy, I’ll speak in terms of hedons – units of pleasure (intensity multiplied by time). Two hedons plus one negative hedon equals one net hedon.

    Here are three versions of Benefitism:

    TOT: We ought to do everything we can to produce as much pleasure as possible in the future of the universe, up to a quadrillion net hedons — minus a hundred per year from the first issuance of this statement, of course, as the future shortens. Nothing above that level has any intrinsic value (though of course maximizing our chances of producing the quadrillion may involve producing more in fact).

    ANN: We ought to do everything we can to get the human species up to a net haul of at least a thousand hedons each year, and keep it there. Any higher haul in any given year has no additional intrinsic value (though of course maximizing our chances of achieving the net thousand level may involve achieving a higher average level in most years in fact).

    AVG: We ought to do everything we can to get the human species up to an average of one net hedon per year per person, and keep it there. Any higher average in any given year has no additional intrinsic value (though of course maximizing our chances of achieving the one-per-person average may involve achieving a higher average level in most years in fact).

    There’s nothing incoherent in any of these theories, so far as I can see at the moment. And in some sense they are indeed non-maximizing forms of Benefitism. Whether they’re forms of consequentialism I don’t know.

    I think you might have been making a point like that, though in language too plain for me to understand.

    I want to say, though, that there’s a thinnish sense in which these theories are maximizing theories, and an important sense in which two of them are.

    All three are maximizing in the thinnish sense that each says we should choose the option whose reasonably expectable chance of promoting the end is greatest. At least, none of them suggests indifference between two options when one option has a slightly greater chance of helping than the other option.

    ANN and AVG are maximizing in a thicker sense. Each of them defines the intrinsically good outcome X in a complicated way that saliently involves not maximizing something that *another* famous sort of theory regards as the intrinsically good stuff. But that doesn’t mean that ANN and AVG don’t ask us to maximize the kind of consequence they value. They ask that a certain state of affairs be approached as far as possible (perhaps completely) for as long as possible.
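
    In case a toy calculation helps (all numbers invented): under ANN, surplus within a year counts for nothing, but every year at the thousand-hedon level counts, so duration is still being maximized.

    # Toy scoring for ANN: a "future" is a list of net hedons per year.
    # Surplus above 1000 in a year adds nothing; each year that reaches
    # 1000 adds one point, so longer is still better.
    def ann_score(yearly_hedons):
        return sum(1 for h in yearly_hedons if h >= 1000)

    modest_but_long = [1000] * 50   # fifty years at the threshold
    lavish_but_short = [9999] * 10  # ten years of lavish surplus
    assert ann_score(modest_but_long) > ann_score(lavish_but_short)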

    Of course if the Mozi reflects a view like that, then it wouldn’t necessarily be misleading to say that it is a non-maximizing kind of view.

    Whether my three Benefitisms should be regarded as kinds of Maximism or Consequentialism from the point of view of Manyul’s inquiry depends, I suppose, on the reason for the inquiry. For example, the reason might be a wish to defend the Mozi against the kind of anti-consequentialist argument that says a maximizing view leaves us no moral vacation time or space; it asks us always to be on the lookout for the extra increment, and thus threatens our commitments to principles and to people. If that’s the point of the inquiry, then it seems to me even TOT is relevantly a maximizing theory.

  14. Bill and Joshua,

    Given a fixed end and a set of options for approaching that end, ranking those options and then choosing from among them in order to approach closest to (to “maximize”?) that end shouldn’t be called a “thin” form of maximizing. That seems to me to promote an unhelpful conflation of plain practical rationality (choose options that best promote my ends–a kind of “means” maximizing) with a traditional consequentialist principle of maximizing outcomes (“ends” maximizing). So, I would suggest calling the one “means-maximizing” and the other “ends-maximizing” because one doesn’t simply seem to be a thin form of the other. Maybe that’s a minor point.

    I guess what I meant to suggest in 9 was that a principle of beneficence could be indifferent among “end” options that were all beneficial, whereas, having chosen a beneficial end, I would agree that practical rationality encourages (requires?) choosing means that most effectively promote that end.

    Now among TOT, ANN, and AVG I take it the differences lie in the hedonic levels–the ends–at which to aim. It seems to me like the beginning phrase for each, “We ought to do everything we can to get the human species up to…” is unnecessarily introduced since it only makes explicit the point that practical rationality encourages/requires taking effective means toward the ends that we have chosen. But whether we should choose an end constituted by 1 hedon per person or 1,000 total hedons overall seems like a choice that isn’t forced by a principle of benefit. It’s only with the addition of some reductive principle that makes benefit into a summable commodity that such choices of ends are coherent. (Maybe that’s pushing things a bit far, but I feel like the current dialectic requires it.)

  15. Hi Manyul,

    I agree that the Mozi can argue mainly in terms of contributions to e.g. social order, prosperity and population without relying on a view that we ought to maximize those things. I don’t think the Mozi is interested in the idea of ultimate goods – things that are good not because of further goods. (I think a better model for the Mozi’s concerns is Rawls’ notion of primary goods. Without the relativization to individuals.)

    I think you and I have pretty deep disagreements about words (including ‘rationality’ and ‘end’). For two reasons I’m not sure we have significant other areas of disagreement. One reason is that I’m not sure what you mean in the last two unbracketed sentences of #15. The other reason is that I’m not sure what would make a disagreement significant here. What kind of disagreement would be significant, what kind of maximizing is relevant to your question, depends, I think, on the reason for the question – unless the reason is the noblest of all: that you are being a heroically hospitable bloghost, simply raising interesting questions.

    I think in #15 you are saying that your main questions about maximizing are questions about maximizing end-stuff, not about the shape of rationality. (That seems to be a departure from the view at the end of the original post.) Hence my accounts of the practical principles TOT, ANN, and AVG obscure the issue you want to focus on. I’ll concede that point purely for the sake of argument. So I ought to remove all implicit reference to “rationality” from the accounts of the practical principles TOT, ANN, and AVG, leaving mere accounts of ends. I’ll do that now by removing all implicit reference to probabilities (objective or epistemic), yielding not three practical principles but rather simply three ends. The three ends turn out to involve maximizing – especially do the ones beginning with A.

    Let me sneak up on it just a bit. What is the “end” of hedonist act utilitarianism? Here are two possible answers:

    (a) net pleasure (i.e. pleasure minus unpleasure)
    (b) that there be as much net pleasure as possible.

    I’m not sure which of these you find more congenial. I think (a) is way too vague. I think (b) is more accurate. Note that (b) makes no reference to probabilities (objective or epistemic).

    Suppose my end is “at least 100 grapes.” That account is ambiguous in lots of ways: for example, it doesn’t say whether that end is *ultimate* or not. I want to point to one ambiguity in particular: If the full count of 100 grapes is quite beyond reach, do I have any end-preference for 90 over 80 grapes? The formula doesn’t say, and nothing about practical rationality directly settles the matter. (For example, the 100 grapes might not be my ultimate end; I may want them only because they’re the ticket for admission to a show, and 99 grapes just won’t get me in.) Thus we can distinguish two Different Ends a person might have, as follows:

    (c) at least a hundred grapes
    (d) as many grapes as possible up to at least a hundred (thereafter never mind)

    Someone whose end is (c) but not (d) would be interested in “approaching” (c) in the sense of making (c) more and more likely, but not in the sense of preferring 90 grapes to 80 at the end of the day.
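
    One way to mark the difference (my own gloss, nothing more) is as two valuation functions over final grape-counts:

    # End (c): all that matters is whether the hundred is reached.
    def value_c(g):
        return 1 if g >= 100 else 0

    # End (d): more is better up to a hundred; thereafter never mind.
    def value_d(g):
        return min(g, 100)

    # (c) is indifferent between 80 and 90 grapes; (d) prefers 90.
    assert value_c(80) == value_c(90)
    assert value_d(90) > value_d(80)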

    Now here are the Ends of TOT, ANN, and AVG, purged of all implicit reference to rationality or probability:

    TOT-END: That there be as much pleasure as possible in the future of the universe, up to a quadrillion net hedons — minus a hundred per year from the first issuance of this statement, as the future shortens. Nothing above that level counts.

    ANN-END: That there be as many hedons per year as possible up to a net of at least a thousand hedons each year, and for as long as possible. Nothing above that level counts.

    AVG-END: That the human species on annual-average have as much net pleasure as possible up to at least an average of one net hedon per year per person, and for as long as possible. Nothing above that level counts.

    (Incidentally, only AVG-END is limited to humans—and that’s not an arbitrary difference.)

    The maximizing appears in ANN-END and AVG-END in the phrase “for as long as possible.”

    Unlike those two, TOT-END doesn’t in principle involve maximizing. But it does in practice, as would any end like TOT-END, no matter how low the number of hedons. For no matter how low we set the number, there will usually be near-total uncertainty about how many negative hedons in the distant future we have to outweigh in the shorter term in order to reach the Total.

    One way to remove the in-principle maximizing of ANN-END and AVG-END is to move to a kind of temporally relative end (TOT-END was slightly temporally relative already):

    TEMP-END: That there be a thousand hedons (or an average level of one hedon, or whatever) over the ten years immediately following the action under consideration.

    Each of these formulae is, of course, completely absurd because saliently arbitrary. What they do, I think, is help map the conceptual terrain lying between classical maximizing consequentialism and the kind of robustly non-maximizing theory you want to construct. And maybe you would count some of these theories as solid examples of what you’re calling a “non-maximizing” view, in which case maybe they count as demonstrating the theoretical possibility of such a view.

    Nice but non-maximal levels of external and extrinsic goods (such as houses, social order, prosperity, population, and victory percentages in poker) are not so arbitrary. Because we sense that they’re justified by the maximizing of reasonably expectable net pleasure!

    P.S. I don’t agree that practical rationality is to “choose options that best promote my ends.”

  16. I’m thankful that you brought up something quantifiable like hedons in 14, since it’s much easier for me to formalize my view through this mechanism.

    Let’s assume a finite scale for measuring hedons, such that it would be either pointless or otherwise impossible to increase the happiness one feels over some duration. I want to regard this domain more specifically as the highest and lowest possible number of hedons that can be produced by our actions/rules/policies.

    Now, we’ll judge any action/rule/policy imperative (Ix) if, and only if a produces some hedons (Pxn), there is another act that produces some hedons (Pym), and the number of hedons a produces is greater than those b produces. This is how I interpret “beneficial consequences should be the primary motive for acting.” This instantiates between two options (a vs. b), but this could be generalized to incorporate all acts/rules/laws if we so required.

    There is also an issue of defining separate domains of discourse within a proposition, but we’ll assume that hedons use integers as constants, while the actions/policies will remain constants.

    (∀x)(Ix ≡ (∃n)[(∃y)(∃m){(Pxn & Pym) & (n > m)} ])

    However, we want to know about the fairness of the implication that “maximizing such consequences” (those equivalent in hedons to n, and therefore, fittingly under the same scope of n) “should be the primary motive,” and while we are asking the same question over its claim that it is imperative, we must consider the necessary condition that it be biggest: that no o exists, produced by another action z, that is greater than n.

    …~(∃o)[(∃z)(Pzo) & (o > n)]

    Now, we’ll want to add this necessary condition to the original to get…

    (∀x)(Ix ≡ (∃n)[(∃y)(∃m){(Pxn & Pym) & (n > m)} ⊃ ~(∃o)[(∃z)(Pzo) & (o > n)]])

    From here, I’ll follow an ordered instantiation that I presented in my blog a while back. Remember, our domain of actions and such can be replaced with regular constants, while our domain of hedons is an ordered set of integers.

    Can it simply be assumed that if someone thinks beneficial consequences should be the primary motive for acting, or rationale for policy, that she also thinks maximizing such consequences should be the primary motive? Well, no, because the only way to assure that the antecedent guarantees the consequent is to refuse to accept a scale where o > n. We have no basis for such refusal.

    If we grant the refusal, then it’s fine to get valid answers, as a method I devised on my own blog shows (exposing an error in my own system when number relations are added, requiring an extra rule to keep such a presumption from messing with the premises).

    Valid option, falsely presumed equivalence (partially seen here as ‘~(3 > 3)’).

    (∀x)(Ix ≡ (∃n)[(∃y)(∃m){(Pxn & Pym) & (n > m)} ⊃ ~(∃o)[(∃z)(Pzo) & (o > n)]])
    (∀x)(Ix ≡ (∃n)[~(∃y)(∃m){(Pxn & Pym) & (n > m)} v ~(∃o)[(∃z)(Pzo) & (o > n)]])
    (∀x)(Ix ≡ (∃n)[(∀y)(∀m)~{(Pxn & Pym) & (n > m)} v ~(∃o)[(∃z)(Pzo) & (o > n)]])
    (∀x)(Ix ≡ (∃n)[(∀y)(∀m)~{(Pxn & Pym) & (n > m)} v (∀o)~[(∃z)(Pzo) & (o > n)]])
    (∀x)(Ix ≡ (∃n)[(∀y)(∀m)~{(Pxn & Pym) & (n > m)} v (∀o)[~(∃z)(Pzo) v ~(o > n)]])
    (∀x)(Ix ≡ (∃n)[(∀y)(∀m){(~Pxn v ~Pym) v ~(n > m)} v (∀o)[~(∃z)(Pzo) v ~(o > n)]])
    (∀x)(Ix ≡ (∃n)[(∀y)(∀m){(~Pxn v ~Pym) v ~(n > m)} v (∀o)[(∀z)(~Pzo) v ~(o > n)]])
    (Ia ≡ (∃n)[(∀y)(∀m){(~Pan v ~Pym) v ~(n > m)} v (∀o)[(∀z)(~Pzo) v ~(o > n)]])
    (Ia ≡ [(∀y)(∀m){(~Pa3 v ~Pym) v ~(3 > m)} v (∀o)[(∀z)(~Pzo) v ~(o > 3)]])
    (Ia ≡ [(∀m){(~Pa3 v ~Pam) v ~(3 > m)} v (∀o)[(∀z)(~Pzo) v ~(o > 3)]])
    (Ia ≡ [{(~Pa3 v ~Pa3) v ~(3 > 3)} v (∀o)[(∀z)(~Pzo) v ~(o > 3)]])
    (Ia ≡ [{(~Pa3 v ~Pa3) v ~(3 > 3)} v [(~Pa3) v ~(3 > 3)]])
    (Ia ≡ [{(~Pa3 v ~Pa3) v ~(F)} v [(~Pa3) v ~(F)]])
    (Ia ≡ [{(~Pa3 v ~Pa3) v T} v [(~Pa3) v T]])
    From here, it should be clear that no matter what the truth-value of the production of action a, it will guarantee the result because of the overly restrictive presumption made above, and so…
    (Ia ≡ T)

    When the presumption is correctly annulled (here for numerical constants), there are invalid results.

    (∀x)(Ix ≡ (∃n)[(∀y)(∀m){(~Pxn v ~Pym) v ~(n > m)} v (∀o)[(∀z)(~Pzo) v ~(o > n)]])
    (Ia ≡ (∃n)[(∀y)(∀m){(~Pan v ~Pym) v ~(n > m)} v (∀o)[(∀z)(~Pzo) v ~(o > n)]])
    (Ia ≡ [(∀y)(∀m){(~Pa3 v ~Pym) v ~(3 > m)} v (∀o)[(∀z)(~Pzo) v ~(o > 3)]])
    (Ia ≡ [(∀m){(~Pa3 v ~Pbm) v ~(3 > m)} v (∀o)[(∀z)(~Pzo) v ~(o > 3)]])
    (Ia ≡ [{(~Pa3 v ~Pb2) v ~(3 > 2)} v (∀o)[(∀z)(~Pzo) v ~(o > 3)]])
    (Ia ≡ [{(~Pa3 v ~Pb2) v ~(3 > 2)} v [(∀z)(~Pz4) v ~(4 > 3)]])
    (Ia ≡ [{(~Pa3 v ~Pb2) v ~(3 > 2)} v [(~Pc4) v ~(4 > 3)]])
    (Ia ≡ [{(~T v ~T) v ~(T)} v [(~T) v ~(T)]])
    (Ia ≡ [{(F v F) v F} v [(F) v F]])
    (Ia ≡ [{(F) v F} v [(F) v F]])
    (Ia ≡ [{F} v [F]])
    (Ia ≡ F)

    Well, I got my claim across, and at the same time got to repair two major flaws (the other dealing with the treatment of literals of mathematical relations) previously present in my own project, so that’s a plus.

    The above shows that the conditional statement is consistent, but as we all know, consistency is not the safest bet nor a solid guarantee.
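
    For the skeptical, the same point can be checked by brute force rather than by quantifier-pushing (my own sketch, with toy scales standing in for the hedon values of the available options):

    # "Beneficial": produces more hedons than some alternative does.
    # "Maximal": no alternative produces more hedons.
    def beneficial(a, scale):
        return any(a > b for b in scale)

    def maximal(a, scale):
        return not any(b > a for b in scale)

    # On a two-valued scale the two notions coincide...
    assert all(beneficial(a, [1, 2]) == maximal(a, [1, 2]) for a in [1, 2])

    # ...but on a three-valued scale like {2,3,4} they come apart:
    # 3 is beneficial (it beats 2) without being maximal (4 beats it).
    assert beneficial(3, [2, 3, 4]) and not maximal(3, [2, 3, 4])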

    This, I think, answers the “ends-maximization” side, but I’d need some more explanation of this “thin” or “means” maximization before I made a good assessment of it.

  17. Hi Joshua,

    Here’s an attempt at a paraphrase of your main argument. I’m not at all sure I’m on the right track.

    “Aiming at beneficial consequences doesn’t imply aiming at maximal available consequences, because sometimes the maximal available consequences are so bad as not to be beneficial at all. They’re below zero.”

    If that’s the gist, then here’s my reply.

    The argument relies on an assumption that it makes sense to assign absolute numbers to the consequences of actions, rather than relative numbers (i.e. what difference it would make that we do A rather than B). I argue against that assumption in the second part of my #10.

    Below the line here are the difficulties I had in understanding the precise formal argument.
    ________________________________

    You write: “Now, we’ll judge any action/rule/policy imperative (Ix) if, and only if a produces some hedons (Pxn), there is another act that produces some hedons (Pym), and the number of hedons a produces is greater than those b produces.”

    This implies that an action is imperative if there is one consequentially worse option, no matter how many better options there might be. A few lines later it turns out that’s not what you mean. We are to suppose either that you are talking only about the hypothetical case where there are in fact exactly two options, or that you are talking only about relative imperativeness (i.e. that one option is preferable to another, not that it is choiceworthy simpliciter). Fine, up to this point.

    In the second part of #10 I argued in effect against this kind of formalization—assigning numbers to actions, on the grounds that consequences are relative to alternatives. But I think that doesn’t matter as long as what matters is only the differences. So: fine.

    I’m not sure I understand this: “There is also an issue of defining separate domains of discourse within a proposition, but we’ll assume that hedons use integers as constants”. I think it means roughly: “There’s a worry about the fact that talk of the consequences of alternatives involves counterfactuals; but let’s just set that aside and suppose each option is associated with some integral number of hedons.” In which case, fine, though offhand real numbers would seem to do as well.

    A few lines later I don’t understand this: “maximiz[e] … consequences [that are] equivalent in hedons to n”.

    And then you introduce the possibility of a third option, z. So you are retracting your original claim (the one that depended on ruling out the possibility of a third option)? That leaves me feeling I don’t understand the terms of the formal formulae.

  18. It’s okay. I’m used to unpacking.

    First, you are right: there was an editing mistake, and I forgot to generalize all of my statements. “This instantiates between two options (a vs. b), but this could be generalized to incorporate all acts/rules/laws if we so required.” That sentence has no point anymore, since I clearly decided to generalize the statement with quantifiers, as evidenced in the formula itself. Thanks for clearing up this oversight. I made a major edit mid-write to avoid that presumption, and while the formal statement does have the correct generalizations, the explanation of it is misleading; it’s my fault for hitting the submit button prematurely.

    I simply did not proofread what I wrote above.

    That aside, I’m afraid you do not have my gist.

    I think you confused my letter “o” with the number 0, and if that’s the case, that should definitely clarify things. As you see, I instantiate a completely different number for o, so I don’t want to leave the impression that anyone with a sincere logical mind can simply replace zero with any other number and things will be hunky-dory.

    The scale in hedons can be altered to whatever you want. It can even include negative values if you want. Mine doesn’t do this, but it’s fine.

    My point, I think, is better drawn mathematically.

    In order to guarantee the consequent of the above statement, you have to make some assumptions about the scale by which you are calculating your hedons.

    If your hedons only have one or two integer values, then the relations can only be such that the highest number (out of the two) is greater than the lower. It really doesn’t matter what the values are, but the number of values in the ordered set with two unequal numbers ({-1,6} {1,2}, {0,1}, {15,25}). Now, mine introduces those ordered number sets with only one value (here {3}), but the point is that a relative statement with two or fewer values is actually an absolute statement.

    Actually, to your point, but against your disagreement with me, we have no reason to assume that the scale is this large or small, that the domain only encompasses these two values. What I show in the invalid proof is that if we give even a three-valued scale ({2,3,4}) and put o at the top, then the result is invalid.

    My statement, “There is also an issue of defining separate domains of discourse within a proposition, but we’ll assume that hedons use integers as constants,” is a technical one. In a formal logic, if we have two separate domains of discourse, we don’t want to be able to instantiate different domains together. So, for instance, if I have a predicate that talks about both time and animals, I don’t want to have vague instantiation rules that allow me to switch one for the other. It would allow formal sentences to translate quite funnily.

    Actually, I rule out there being a third option, z, that, if it truly existed, would generate hedons o. The point is that there is not some other option that produces a different level of hedons that is likewise better than the option that produces n.

    What the implication does, most simply, is state, “a is a good option only if a is the best option,” spoken in terms of accumulation of hedons, our granted scale for “accumulation of good consequences,” for which there will be a maximal number (as I stipulated that the scale would be finite).

    Sorry for throwing you off with that needless, earlier sentence. I hope this clarifies what fact the proposition expresses. I think it actually satisfies most of your concerns, with which I mostly agree, quite cleanly. (Notice how the natural language, not the formal language, caused the bungle? That makes me chuckle a bit.)

  19. Thanks Joshua! You’re exactly right about the source of my confusion!

    I have a new guess about your broad point. Maybe it’s this: “Maximism ranks our options by how much X they cause, and says, ‘Do the one that causes the most.’ Benefitism ranks our options by how much X they cause, and says ‘Do any of these that causes more than Y amount of that.’ Clearly Benefitism so understood does not imply Maximism so understood.”

    If that’s it, I have three remarks:

    First, I agree that Benefitism so understood does not imply Maximism so understood.

    Second, I think Manyul may be concerned mainly about whether it is possible to give a particular version of Benefitism-so-understood that isn’t too arbitrary to be plausible.

    Third, maybe we should be indifferent about whether we understand Maximism in one or the other of the following two ways:

    MA: It is morally imperative to do the option that causes the most X.

    MB: The more X an option causes, the morally better that option is.

    Of course neither of these is implied by Benefitism as defined above.

    ___________________________

    If my new gist-guess is wrong then I’ll want to work more on understanding your #18 and #20.

    After long and I think mostly successful interpretive labor over your #18 I had given up in the middle of the sentence that begins “However, we want to know”, and I didn’t get to the part of that sentence that introduces ‘o’! I did glance over the rest very quickly, hence my mistake about o/0 later on. It would be helpful to me if you’d explain the bit I quoted in the penultimate paragraph of my #19, so that I can proceed beyond that point in #18.

    I’ve worked through the whole of your #20 but don’t yet understand it:

    1. You write, “In order to guarantee the consequent of the above statement, you have to make some assumptions about the scale by which you are calculating your hedons.” I’m guessing the conditional proposition you have in mind is this: If beneficial consequences should be the primary motive for acting, or rationale for policy, then maximizing such consequences should be the primary motive.

    2. I’m not sure I understand this: “If your hedons only have one or two integer values”. My idea is that a hedon is a constant, like an inch or a bushel. I think you mean “If there are only one or two numbers of hedons that can possibly be the complete number of hedons in the consequences of any given action”.

    3. I don’t understand this sentence: “It really doesn’t matter what the values are, but the number of values in the ordered set with two unequal numbers ({-1,6} {1,2}, {0,1}, {15,25}).” The part before the ‘but’ seems fine *if* my paraphrase in 2 above was right; but the part after the ‘but’ just seems to be a noun phrase (referring to the number 2, or the number 4, or the number 7).

    4. I don’t understand this: “Now, mine introduces those ordered number sets with only one value (here {3})”. I assume “mine” means “my formal argument,” and I think probably “those” does not refer to the particular sets you had just listed. But I’m stumped about the meaning of the whole sentence.

    5. I’m not sure I understand this: “a relative statement with two or fewer values is actually an absolute statement.” My guess is that in our hedonistic context it amounts to this: “Both (a) if on a given occasion none of my options makes a difference, then they have an absolute value (viz. zero), and (b) if on a given occasion my options can make only one sort of hedonic difference, viz. determining which of two numbers of hedons there is going to be, then each of those two numbers may fairly be assigned as the absolute hedonic consequence of each of the options associated with it.”

    6. I don’t understand “that the domain only encompasses these two values.” What two values? Or should I just drop ‘these’?

  20. Manyul and Joshua,

    Near the beginning of #21 I described a kind of Benefitism and asked if it was what Joshua had in mind. The way I phrased it left something to be desired.

    The basic idea was that Benefitism divides our (innumerable) options at any given time in such a way that on the approved side of the line are not only the maximal option but also many non-maximal options.

    But actually I think this kind of Benefitism is not the sort of thing Manyul is talking about. For convenience I’ll call it Xbenefitism. Here are three examples of Xbenefitism:

    XB1: Rank your options by their relative causing of net pleasure, and do any that are better than zero.

    XB2: Rank your options by their relative causing of net pleasure, and do any of the top 50.

    XB3: Rank your options by their relative causing of net pleasure, and do any whose effects are better than halfway between the effects of the worst and the effects of the best.

    XB1 is incoherent, because it assumes that options have absolute rather than relative consequences. XB2 is implausible, because it makes morality depend too much on how we happen to individuate options. XB3 is implausible, because it means that an option that may be solidly forbidden to me suddenly becomes OK if I happen to acquire a vastly more horrible option.

    More importantly from Manyul’s point of view, if I understand him, all three of these theories are still thinking in terms of maxima when it comes to consequences. That’s why I think they don’t count as versions of the Benefitism Manyul is interested in ascribing to the Mohists.

    Thus, despite the talk about practical principles, I think maybe Manyul is more interested in ranking or evaluating end-states than in ranking options. It doesn’t make much sense to talk about ranking or evaluating options while abstracting from questions of probability.

  21. Hey guys,

    I’m about to embark on a 12 hour odyssey to return home from North Carolina, but I feel like I should say the following quick thing:

    Two claims here that I think I have in the fire:

    (1) The Mohists aren’t interested in the finer points of maximizing, or maybe even in the cruder points.

    (2) The Mohists generally treat benefit and harm as discretely bivalent rather than as relative (i.e., rather than as comparative extremes on a continuous scale).

    I think 1 is obvious on the surface of the text, though that doesn’t settle the case if we are fearlessly reconstructing the Mohist view as philosophers. 2 is probably also textually supported, unless I’ve missed a crucial passage in the Mozi, but that also might not settle the case. However, I think it might be possible to argue that if 2 is true, it indicates something deeper about li (利) and hai (害), namely that they really are “either-or” concepts in early China. That would be interesting, if true, and would–I think–settle the issue about whether maximizing benefit would even make sense for the Mohists (it wouldn’t, on this scenario). We’d have to see how the pair of concepts are treated more globally in early Chinese texts, of course.

    Maybe that helps for understanding why I’m interested in this web of questions. Anyway, let me know if I’ve skipped too quickly through my thinking here.

    I’ll check in again in about half a day!

  22. Hi Manyul!

    When the cat’s away …

    I agree about (1). Regarding (2), if I understand you, you’re thinking that maybe for the Mohists (implicitly) there are no degrees of benefit or harm and therefore there is no such thing as maximizing benefit or harm.

    Offhand that looks like too weird a view about li 利 and hai 害 to attribute to them without pretty direct textual evidence. It implies that there is no such thing as my benefiting someone further, or more than I did before, or more than I benefit someone else (whom I benefit), and benefiting many people isn’t more beneficial than benefiting a few.

    In #6 I quoted a long passage from the Mozi; and despite the salience of the term “innumerable” therein, I said it reminded me of Bentham. Numbers wouldn’t make sense in that context anyway, since the passage is generalizing about war. Still the difference between many and few seems to be very much on the author’s mind.

    One can believe in an absolute (i.e. nonrelative) distinction between benefit and harm without believing that there are no degrees of benefit and harm. All one has to do is be a little simplistic about the idea of doing nothing, so that one thinks there is such a thing as an option that has no significant consequences. That’s the zero. Within limits there’s nothing terribly wrong with that idea, though it’s a problem if one is trying to theorize about fundamental principles.

    These days it’s harder to be simplistic about that than it used to be. These days we feel embedded in trajectories for which we are partly responsible. Even though I’m just sitting there, I might be on a bicycle coasting south on a barge moving west. Our projects are embedded in economic cycles and other long term trajectories of technology and community and whatnot, in which there’s no such thing as doing nothing. “Silence = death” and refraining from changing your light bulbs could kill your grandchildren. When we lack a robust conception of doing nothing, we have to think of consequences as being essentially relative to alternatives.

    Jesus once said “Those who aren’t with me are against me,” and on another occasion “Those who aren’t against me are with me,” thus contradicting himself on how to reinterpret the neutral position. Anyway he was a trajectory person. He thought in terms of holy history and its imminent acceleration.

    Confucians sometimes theorized about trajectories such as systematic decline, and had unkind things to say about ordinary virtue. They didn’t like it because they saw it as worse than an urgent alternative, as one kind of red displaces another.

    But I wonder if the Mohists saw things in terms of trajectories that reinterpret the scene. The world they envisioned may have been simpler.

    In a world without grand trajectories for which we are partly responsible (or that we are responsible for counteracting), the idea of doing nothing can be fairly intelligible, so that the idea of an objective line between overall benefiting and harming (of the world at large or of particular people) might have made sense without conflicting with the idea that there are degrees.

    Furthermore, in a world without grand trajectories at all, it is easier to overlook the distinction between (a) doing something that has better consequences than doing nothing and (b) doing something that makes things better than they were. So in that simple world one can think this simple thought: “You can make things better, you can make things worse, or you can sit on your hands.”

  23. I said the non-degree view “implies that there is no such thing as benefiting someone further.” Of course that isn’t strictly true. One could certainly benefit someone twice, on the non-degree view. Would that be better than benefiting her once? The problem is that actions don’t come pre-packaged as units. Big ones (like conducting offensive warfare) divide into smaller ones.

  24. There is indeed a huge difference between addressing conceptual questions A and B about benefitism and addressing interpretive questions about the Mozi.

    Sometimes by ‘benefit’ above you have meant not what one does (to li 利), but a kind of outcome (li 利). I’m not sure which you have in mind when you talk about not admitting of degrees. I think the issues in the two cases are different.

    One thing that seems pretty clear is that at least some of what the Mozi conceives as Desirable Outcome saliently admits of more and less, with no obvious limit: population and prosperity. I don’t recall any attention in the Mozi to the idea that worthwhile prosperity has an upper limit (as Aristotle argues in Politics 1). Social order arguably has an upper limit, but the Mozi shows signs of thinking social order is good because of its further benefits. So offhand it looks as though the task of construing Mohism as benefitism rather than maximism should start by granting that good outcomes are conceived in terms of stuffs that admit of degrees and that the Mozi isn’t getting much mileage out of the idea that these stuffs have upper limits.

    As you have pointed out, maximizing might still not make sense, if there are incommensurable kinds of final value. Benefiting would not make much sense then either, but there is still room for the interpretive idea that the Mozi didn’t proceed beyond noticing a multiplicity of goods: didn’t proceed as far as questions of comparability, the idea of a unified ultimate cash value such as pleasure, or the idea of overall maximization. One can say all that without introducing the idea that benefit doesn’t admit of degrees.

  25. Hello, Manyul.

    If you stick to this assertion: “(2) The Mohists generally treat benefit and harm as discretely bivalent rather than as relative (i.e., rather than as comparative extremes on a continuous scale),” as you do in #24, then my first proof will show that “maximal benefit” and “benefit” are actually synonymous terms.

    My proof in #18 shows exactly this, and I argue that exactly such a presumption is needed to make the statement valid. However, I also show that more thorough utilitarians need not restrict themselves to a bivalent determination of benefit, and that once the scale of “hedons” (or however benefit is calculated in the consequentialist system) becomes multi-valent, the implication about which you inquire in (A) is not a guaranteed tautology.

  26. Joshua, I’m going to try to paraphrase, to make sure I understand:

    Necessarily every good outcome is the maximally good outcome
    –iff–
    necessarily there are no more than two levels or values of outcome (good and bad).

    Have I finally caught your gist?

    I think Manyul took the view that under the two-level condition the notion of maximal value makes no sense.

    I’m not sure either of you would regard that as a significant disagreement. For I think the main point you want to make, Joshua, is not about what we should say in the 2-level case, but rather that where possible outcomes have more than two levels of goodness, so that ‘best’ makes sense, being good is not automatically the same as being best.
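
    Here is one way to make the equivalence explicit (my own schematic gloss, not a quotation of anyone’s proof). Let $V$ be the linearly ordered set of possible outcome values, and say an outcome $o$ is good just in case $\mathrm{value}(o)$ falls in a designated upper region $G \subseteq V$:

    $V=\{b,g\},\; b<g,\; G=\{g\} \;\Longrightarrow\; \bigl(\mathrm{value}(o)\in G \iff \mathrm{value}(o)=\max V\bigr)$

    With only two levels, “good” and “maximally good” are coextensive. With three levels $v_1 < v_2 < v_3$ and $G=\{v_2, v_3\}$, an outcome at $v_2$ is good but not maximal, so the equivalence fails.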

  27. Hey Joshua,

    Bill’s got me right; I’ll just add that this is partly a “natural” language issue: “best” carries a conversational implicature of some sort of scale on which the number of values is greater than 2.

  28. Hi Manyul and everyone

    I’m not sure the Mohists don’t advocate maximizing benefit.

    Their slogan is 求興天下之利, 除天下之害 (seek to promote/further/raise the benefit of the world and eliminate harm to the world).

    The verbs 興 and 除 arguably include the idea of maximization.

    The “Greater Selection” (Book 44) discusses cases in which we determine what to do by weighing the greater of two benefits or the lesser of two harms. This too suggests that the pursuit of benefit and the avoidance of harm are implicitly understood in maximizing terms, at least by some later Mohists.
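
    Schematically (my gloss of the weighing idiom, assuming benefits $b_1, b_2$ and harms $h_1, h_2$ admit of comparison): faced with two benefits, take $\max(b_1, b_2)$; faced with two harms, take $\min(h_1, h_2)$. Either rule presupposes that benefit and harm come in degrees, which is just what a maximizing reading needs.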

  29. Bill, you get my gist. It’s just in a formal gown, is all.

    Manyul, if you agree with Bill’s assessment of what you believe, then my formalization (or a bit of deliberate tinkering with the section of proof I gave) proves that more directly.

    My only disagreement lies elsewhere: we don’t really have any clear reason to accept one scale over another in consequential measures of this sort, and while a 2-value 是非 scale really does make a sort of 黑白 distinction, anyone interested in more subtle rankings of actions, rules, or whatnot may opt for a wider range.

    Passing from “this is good on a 2-value measure” to “this is the best on a 2-value measure,” I think, is a rhetorical trick built on a sort of vacuous truth: with only two values, the two claims hold or fail together.

  30. Very interesting. I’ve long thought that the biggest problem with utilitarianism is its failure to distinguish utility from harm. If Mozi does make this distinction, that would be cool.

    To answer the questions: I think they both depend on an assumption that a moral system picks out one single best course of action from all the possible courses. If we discard this assumption, then the questions become easy.

    You don’t have to maximize benefit, because the system does not (try to) determine a single best action; and such a system absolutely is consequentialist, because the value of an action is still determined by its consequences. But the value is given not in numerical form but in qualitative form, i.e., are the consequences li?

    But that step might be too far: does Mozi ever do comparisons of different benefits (as opposed to contrasting benefits with harms)? If he does, then some kind of ranking system will be necessary (though not necessarily a numerical one).

  31. Hi Phil, I have two questions.

    1. I don’t understand why you think utilitarianism doesn’t distinguish utility from harm.

    If we’re using ‘harm’ in the broad sense of “bad consequences,” then I don’t see that utilitarianism needs to distinguish (negative) utility from harm. But if we’re using ‘harm’ in the ordinary sense of “damage,” then I don’t see that it’s a term in utilitarianism at all, so the distinction is not utilitarianism’s job. Or am I completely missing what you’re saying?

    2. At first glance, your second paragraph seems to me to be drawing the distinction between these two views.

    MA: It is morally imperative to do the option that causes the most X.
    MB: The more X an option causes, the morally better that option is.

    But your third paragraph seems to have in mind a distinction between a “qualitative” and a “numerical” view, with MA being the numerical one. Whereas I see MA as being marginally less numerical than MB. They both have to quantify X, but MA generates only a qualitative conclusion (right v. wrong), whereas MB generates quantitative comparisons: one option is more or less good than another.
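
    In symbols (my gloss, where $O$ is the set of available options and $X(o)$ is how much X option $o$ causes):

    $\text{MA: } \mathrm{Right}(o) \iff o \in \arg\max_{o' \in O} X(o')$
    $\text{MB: } o \succeq o' \iff X(o) \ge X(o')$

    MA uses the quantities only to locate the maximum and then issues a yes/no verdict, while MB uses them to rank every pair of options.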

  32. Hi, Bill.

    I’m not a proper philosopher, so my terminology is hazy here, and I could be just wrong, but my understanding of utilitarianism is this:

    A utilitarian ethics determines a value for any action dependent on the benefits (utility) that would accrue to (all) people as a result of the action.

    This is nice and simple, but it does not include any mention of harm. There are two possibilities: (1) harm is defined only as the inverse of benefit; (2) utilitarianism doesn’t deal with harm. If (2) is true then utilitarianism is a bit useless, because people do suffer harm, and it makes up an important part of our ethical world.

    You seem to accept (1) when you say “I don’t see that utilitarianism needs to distinguish (negative) utility from harm”. But I would argue precisely against that view. I’m not alone here: John Rawls put it this way: “Utilitarianism does not take seriously the distinction between persons,” and argued that negative consequences for person A cannot in general be compensated for by positive consequences for person B. Popper argued that, politically, governments should draw the distinction, and that they should aim not for some positive ideal, but to minimize the harm they do to citizens.

    Utilitarians have mounted some stout defenses against this point, and I haven’t read enough of them to really make my mind up one way or the other. But it seems to me that the original point is one of the most important questions that utilitarians should face.

    On the second point, I think I’m suggesting a kind of ethics simpler than either your MA or MB. Both MA and MB explicitly call for comparisons of the moral quality of different actions (moral quality as measured by consequences here). However, it is possible to conceive of an ethics which only gives a qualitative answer, so the only possible comparison would be action a is “li”, action b is “not li”. It would be impossible to calculate degrees of li. Such an ethics would give no guidance on choosing between two options which both seemed to be li.

    Perhaps such an ethics would be so absurdly simple as to be unworkable; and perhaps it’s not what Mozi was doing. If there exists a passage in which Mozi compares actions saying action X would be li, but action Y would be more li, then he’s definitely not using a qualitative system.

    A qualitative system is an attempt to answer Manyul’s questions: it is a system that is both consequentialist and non-maximizing.
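
    For what it’s worth, here is a minimal sketch of that contrast in Python (the is_li predicate, the numeric scores, and the option names are all invented placeholders, not anything from the Mozi):

        # A qualitative (li / not-li) evaluator versus a maximizing evaluator.
        # The threshold and the scores are illustrative assumptions only.

        def qualitative_verdicts(options, is_li):
            """Partition options into li and not-li; no finer ranking is possible."""
            return {name: ("li" if is_li(score) else "not li")
                    for name, score in options.items()}

        def maximizing_choice(options):
            """Pick the single option with the greatest benefit score."""
            return max(options, key=options.get)

        options = {"defensive war": 3, "elaborate funeral": -5, "frugal funeral": 7}
        is_li = lambda score: score > 0  # crude threshold: any net benefit counts as li

        print(qualitative_verdicts(options, is_li))
        # {'defensive war': 'li', 'elaborate funeral': 'not li', 'frugal funeral': 'li'}
        print(maximizing_choice(options))
        # 'frugal funeral'

    The qualitative evaluator endorses both “defensive war” and “frugal funeral” without choosing between them, while the maximizer must break the tie; that is exactly the guidance gap I have in mind.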

  33. The Mozi is about how to be excellent. Whatever the Mohists may have practiced, which would have been distorted by the demands of specific conditions, the core idea of the thinking put forward in the Mozi is meritocracy, not impartial concern. It’s about how to be morally superior (one element of which is the practice of impartial concern) and thus about how to deserve greater power. Much of it is also like a hall of mirrors: practicing meritocracy makes you more deserving, and you practice meritocracy by promoting those who are more deserving, partly by judging their practice of meritocracy… But another consistent characteristic is scalability. While the greatest concern is for how to use political power, especially how to advise a ruler to rule if working for one as an advising official, the same principles apply to smaller microcosms, such as the family. The theme of an almost fractal-like scalability suffuses the Mozi. The fact that scalable principles are mostly illustrated using examples of the largest possible scale tells us that the larger scale is more important.
    In fact, another central idea of the Mozi is just this proportionality. More important things are more important. The service of humanity is more important than the service of self. The service of the state is more important than the service of the village, the service of the highest human authority is more important than the service of the individual state, and the service of Heaven is more important than the service of even the highest human authority. This emphasis on the larger scale strongly implies that ideally the total results are one’s consideration in making decisions, but the fact that the Mozi is about how to be excellent assumes that some will be less able to achieve the highest excellence. In other words, the more excellent and thus powerful you are, under ideal conditions, the larger the scale you should be concerning yourself with. A sage king is concerned for the entire kingdom. The commander of a city’s defenses should be concerned for the defense of the city. A soldier is concerned for his assigned duties. But on the other hand, the assumption is that those who seek to excel will concern themselves above their level. An ambitious soldier will do his own duties but also be concerned for the entire city. An ambitious general will be concerned not only for the one city she is defending, but for the entire state. In summary, the maximizing of benefit is not a hard, black-and-white requirement, in which failing to concern oneself with the entire world in every decision is an immoral act, but rather a soft, analog, scaling requirement, in which more maximization is for extra credit.

  34. Hi Robert, thanks very much for this comment! I haven’t read the Mozi in recent years and never knew it well, but I have a question. You write:

    While the greatest concern is for how to use political power, especially how to advise a ruler to rule if working for one as an advising official, the same principles apply to smaller microcosms, such as the family. The theme of an almost fractal-like scalability suffuses the Mozi.

    My question is about how the Mozi views the family as analogous to the state. The one snippet of the Mozi I do keep in mind and wave around is this bit (text and Mei’s translation taken here from the Chinese Text Project, http://ctext.org):

    Simplicity in Funerals III:
    Mozi said: The magnanimous ruler takes care of the empire, in the same way as a filial son takes care of his parents. But how does the filial son take care of his parents? If the parents are poor he would enrich them; if the parents have few people (descendants) he would increase them; if the members (of the family) are in confusion he would put them in order. Of course, in doing this he might find his energy insufficient, his means limited, or his knowledge inadequate. But he dare not allow any energy, learning, or means unused to serve his parents. Such are the three interests of the filial son in taking care of his parents. …
    子墨子言曰:「仁者之為天下度也,辟之無以異乎孝子之為親度也。今孝子之為親度也,將柰何哉?曰:『親貧則從事乎富之,人民寡則從事乎眾之,眾亂則從事乎治之。』當其於此也,亦有力不足,財不贍,智不智,然後己矣。無敢舍餘力,隱謀遺利,而不為親為之者矣。若三務者,孝子之為親度也,既若此矣。 …

    The translator adds the term “ruler” near the beginning, not implausibly I guess. That suggests what might be thought of as an upside-down analogy between the state and the nuclear family. The good son is to his parents as the good emperor is to the world he rules.

    How would you square this passage with the idea that the Mozi sees the family and state as analogous especially in regard to the ruler-ruled relation and meritocracy? Indeed, what would meritocracy in the family look like (beyond the point that adult parents are presumptively superior to their non-adult children)? Would the good son as such deserve to rule in the family?
