Morality without Foundations [CF Article]


This is one of my favorite things I’ve read ever.

I agree with what I take to be the main idea here, which is that:

  1. A moral foundation based on (say) maximizing the number of squirrels would entail that we should want freedom, capitalism, science, and most other good things.
  2. The fact that it’s squirrel-maximization specifically at the foundation of this morality isn’t important, and maximizing almost anything else would lead to the same conclusions (as long as it’s hard to maximize).

However, I took issue with the statement in Morality without Foundations that

[…] the foundations of morality don’t matter much […] as long as there is a strong or hard goal.

and a similar statement, at the end of the morality dialog Curiosity – Morality, that

[…] what specifically we want is not critical: the way to get it will be […] valuing science, freedom, wealth, and roughly the same things we value today.

My problem is this: if “what we want” is altruism, then the argument Elliot is making doesn’t apply. Altruism passes the qualification of being a “strong or hard goal,” and it is an important counterexample to consider (i.e. it’s not a pathological/technical/silly one) because altruistic ideas are prevalent in the culture.

Why doesn’t Elliot’s argument apply to altruism? Because altruism (at least the way I’m defining it here) holds sacrifice as the moral ideal. That is, it’s a morality in which the most virtuous thing for a person to do is to exchange values for non-values, or at least to exchange greater values for lesser values. In every act of sacrifice, value is lost on net, and the more faithfully the morality of sacrifice is adhered to by a society, the more net value is lost. Therefore, an altruistic morality cannot entail e.g. capitalism and freedom, which create value without bound.

Although there’s a lot of fungibility in what we use as a moral foundation, it does matter to some degree: a moral foundation (like altruism) which holds the destruction of values as an ideal is still irredeemably bad.


I still think that the idea of morality without foundations is true if properly qualified. I would rephrase ET’s idea as:

A. Any morality that says you should pursue values will entail [insert almost any true moral fact here], and it doesn’t really matter which specific values are being pursued, as long as those values are “a strong or hard goal.”

But then I would add that

B. A morality (like altruism) which says that you should destroy values is self-defeating and impotent, so in some sense it can be ignored, just like the “trivial” moralities that say things like “create exactly 3 squirrels.”

[Indeed, an anti-values morality is impotent because it destroys value, and it is self-defeating because it’s subject to arguments like the following (restated symbolically below):

  1. (Premise) Anti-values morality X is the moral foundation, therefore
  2. The destruction of values is the moral ideal, therefore
  3. The destruction of values is a value, therefore
  4. Destroying the destruction of values is a value, therefore
  5. Destroying the morality X is a value

So the conclusion (5) almost contradicts the premise (1).]
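To restate that chain compactly (a sketch in shorthand of my own – $V(x)$ for “$x$ is a value under X” and $D(x)$ for “the destruction of $x$”; neither symbol comes from the article): the premise supplies the rule

$$V(x) \;\Rightarrow\; V(D(x)),$$

plus the instance $V(D(\text{values}))$ from step 3. Applying the rule to that very item yields $V(D(D(\text{values})))$ (step 4), and since X just is the program of destroying values, that amounts to $V(D(X))$ (step 5) – an awkward result for a morality whose premise is that X is the foundation.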


Thoughts?

I’m not convinced by this, because you can sacrifice more for others if you e.g.:

  • invent immortality
  • colonize the galaxy with trillions of people (and perhaps meet some intelligent aliens that you can sacrifice for)

You’d also want to learn philosophy super well so that you could best convince other people to be altruists.

The expectation value of your total sacrifices is lower if you start sacrificing early and accept a greater risk of dying, humanity going extinct, etc. Or put another way, you can sacrifice more if you keep the sacrifices within your budget instead of sacrificing more than you can afford. And if you take that seriously and think long term – and not only about your own personal sacrifices but those of people who live after you (in case immortality is not invented during your lifetime) – then you’ll want to do things like invest all your budget into increasing your budget before you start consuming wealth on sacrifices.
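To put the expectation-value point in rough numbers (an illustrative toy model with symbols of my own – $B$, $r$, $p$, $t$ are not from the post): suppose a sacrifice budget $B$ grows by a factor of $1+r$ per year while invested, and each year carries an independent probability $p$ that the whole project is lost (you die, humanity goes extinct, etc.). Spending it all after waiting $t$ years then gives an expected total sacrifice of

$$B\,(1+r)^t\,(1-p)^t,$$

which keeps growing with $t$ whenever $(1+r)(1-p) > 1$, i.e. whenever the growth rate outpaces the yearly risk. On those (made-up) assumptions, the consistent altruist maximizes sacrifice by powering up first and sacrificing later.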

Altruism strikes me as a strong, hard and bad goal which would be most effectively implemented towards the end, after we had a million years to power up and change our mind.

Since all the stuff we’d do along the way would be very contrary to altruism, I imagine either the altruists would change their mind or would disagree with the more short term plan of capitalism, science, individualism, etc., in order to get wealthy, immortal, more people, etc. In other words, I think altruists might claim that we need more sacrifices and socialism in order to become wealthy and immortal, and that capitalism won’t get us there, so we could end up in a debate over the means to the ends.

Squirrel fans could also disagree about what means will lead to wealth, technology, etc., but there’s nothing anti-capitalist or anti-science directly built into that particular worldview (which is just a toy example that doesn’t actually exist).

If altruists would agree to focus debates on the question “What means are most effective for generating wealth?” or anything similar, I think that could help (though if they were willing to have a reasonable conversation, maybe discussing the motivation for their moral ideal directly, and alternatives to it, would be more effective). It’d be easier to debate with them if they took seriously the concept of working towards goals like immortality and colonizing more galaxies and would discuss what means are most effective for achieving them. Then they’d have to set aside some of their bad ideas about justice and fairness in the short term or connect them to effectiveness. In practice, I fear they (along with almost everyone) have fairly weak, squishy, second-handed and contradictory goals that they don’t really try to pursue in a strong maximization way. So while I like the thought experiment as a way to consider philosophy concepts, I don’t think it’s very suited for changing people’s minds. They have a lot of other more pressing problems.


I think maybe altruism needs to be better defined.

Altruism could be pretty easy (along one definition, at least). Like I value having my iPhone more than I value a bum having it, but as an act I could easily give my iPhone to a bum - that’s not hard. And if I had different ideas about morality maybe I would. So in that sense I think altruism is kind of easy (it’s hard to do it consistently as a lifestyle but doing some altruistic acts isn’t very hard). If altruism is defined in such a way as “just give up whatever values you have on hand for lesser ones, right now gogo” then that’s not a very hard goal.

OTOH I can’t commit a very large act of altruism - like giving a billion dollars to something as an altruistic act - because I don’t have a billion dollars to give. So if you defined altruism in such a way that it could take account of the magnitude of the altruistic act when making judgments about how virtuous the altruistic act was, then I think the squirrel morality stuff still applies (if the goal is to sacrifice values, we want to maximize the values we have on hand to sacrifice, which will involve learning a bunch of stuff that’s not directly about our goal.)

So it seems like either on the one hand you define altruism in a way where it’s not a hard goal and so it’s kinda trivial to achieve, or on the other hand you define it in such a way where it is a hard goal and so you have to learn a bunch of stuff in order to achieve it.

BTW, I did consider value systems that want to destroy things, which I think is different from altruism. Altruism wouldn’t be satisfied if we were all dead. But if you have a goal like all intelligent beings are dead (or squirrel minimization), you better get control over the whole universe and make sure there are no intelligences in it before you mass-suicide. And you better research wormholes to other dimensions in the multiverse or whatever too, in case there are intelligences there that you could do something about. And you better not just mass-suicide but make sure to destroy all the planets and stars in a way that evolution won’t create new intelligent life. And you better make sure no one hides from the mass suicide, which takes a lot of power and wealth to do super super effectively/reliably. So I think even destructive systems like that could lead to a powering up phase (as a logical matter, though I don’t think their advocates would listen – EDIT not that ideas like that really even have consistent advocates).


Sure. I’ll do this first, because I think it will dispel Elliot’s criticism. I wasn’t just making up the definition I used above; I was trying to use the term in the same way as Rand, e.g.:

The irreducible primary of altruism, the basic absolute, is self-sacrifice [.]

(Source —Ayn Rand Lexicon)

“Sacrifice” is the surrender of a greater value for the sake of a lesser one or of a nonvalue.

(Source —Ayn Rand Lexicon)

Importantly, this definition of altruism differs significantly from the way that e.g. the “effective altruism” (EA) movement uses the term, which is also the way that Elliot seems to be using the term. For EA, your duty is to maximize the utility function, and it doesn’t matter whether or not you need to make a sacrifice in order to do so. In fact, if you do need to make a sacrifice, that’s minus points for the utility function: all things being equal, EA would want you not to make the sacrifice.

[By the way, Rand defined it the way she did for a good reason. The concept Rand calls altruism is much more prevalent in the culture than the concept EA calls altruism. Indeed, ask yourself this: Who do most people consider to be more of a moral idol, Bill Gates or Mother Teresa? Obviously the latter, even though the EA answer would clearly be Bill Gates. This is because Mother Teresa is more altruistic in Rand’s sense of the word—and therefore a better person, according to the prevailing morality.]

Anyhow, I agree that the argument of Curiosity – Morality does work for EA’s concept of altruism, because the EAs are pursuing a value. It doesn’t work for Rand’s concept of altruism, though.


[EDITED] I basically agree. Ironically, in the way I’m thinking about it, I would treat the goal of intelligent being minimization as a hard-to-achieve, “value”-pursuing morality rather than a value-destroying morality. Its ultimate goal is to destroy the things that I think of as values, but it does encourage the pursuit of something that it thinks of as values.


Yes, I agree that Randian altruism is trivial to achieve if it is identified explicitly and taken to its logical conclusion. Or rather, it would require a lot of willpower, but it is trivial to achieve in the sense that if you’ve decided to be an altruist then you don’t need to create any new knowledge in order to do it.

Even though most people implicitly hold altruism as a moral ideal, no one ever takes the idea all the way to its conclusion. A part of them wants to pursue values, and so they do something selfish (e.g. they skip going to see grandma this weekend). Then their implicitly-held altruism tells them that they should feel guilty for not sacrificing. Then they try to feel better by evading thinking about their “misdeed,” and/or by making a sacrifice in the future (e.g. going to see grandma next weekend).

What you present above from Rand isn’t actually a good definition of altruism. It’s identifying the importance of self-sacrifice to altruism and then going into detail on what self-sacrifice is.

In your earlier post you said:

Because altruism (at least the way I’m defining it here) holds sacrifice as the moral ideal. That is, it’s a morality in which the most virtuous thing for a person to do is to exchange values for non-values, or at least to exchange greater values for lesser values. In every act of sacrifice, value is lost on net, and the more faithfully the morality of sacrifice is adhered to by a society, the more net value is lost. Therefore, an altruistic morality cannot entail e.g. capitalism and freedom, which create value without bound.

On the Ayn Rand Lexicon page for altruism, the first quote begins:

What is the moral code of altruism? The basic principle of altruism is that man has no right to exist for his own sake, that service to others is the only justification of his existence, and that self-sacrifice is his highest moral duty, virtue and value.

(bold added)

So I think the idea of serving others is an important thing here that you’re not giving much attention to. But I think it’s part of what could anchor altruism to reality somewhat.

If I value having my iPhone more than I value whatever catharsis I might get from throwing it at a wall and smashing it, then throwing it at a wall and smashing it would be a sacrifice, but it would not be altruism, because nobody was helped by it (note the contrast with the example I gave earlier about giving my iPhone to the bum). So destroying the iPhone would be sacrifice but not an altruistic one - it’s just destruction.

So anyways my point is, if you start thinking about helping others in your sacrifice, then that can bring up the issue of, okay, how do we do that effectively. And so that brings up the squirrel morality stuff more.

FWIW, re the Rand bit I quote above, I think her wording is very strong and might turn people off just from that (“no right to exist”, “only justification”) but that plenty of EA people would agree with what they might take as some of the sentiment - that service to others is supremely important and the highest moral calling/good.

Yeah, good point, thanks. Not just any sacrifice is morally virtuous for mainstream altruism. Indeed, I think it would say that destroying your iPhone is morally bad.


I’ve realized the nature of my error. There are two distinct ideas in mainstream morality, and I was getting them confused. They are:

  1. The idea that it is your moral duty to live your life for the sake of others (altruism)
  2. The idea that a moral action is of moral import only if it is a sacrifice

Point (2) needs some explanation. Here’s Kant articulating idea (2):

[I]t is a duty to preserve one’s life, and moreover everyone has a direct inclination to do so. But for that reason the often anxious care which most men take of it has no intrinsic worth, and the maxim of doing so has no moral import. They preserve their lives according to duty, but not from duty. But if adversities and hopeless sorrow completely take away the relish for life, if an unfortunate man, strong in soul, is indignant rather than despondent or dejected over his fate and wishes for death, and yet preserves his life without loving it and from neither inclination nor fear but from duty—then his maxim has a moral import.

(Taken from some translation of Kant’s Groundwork of the Metaphysics of Morals, but I found the quote in The Ominous Parallels).

Note that if you do your duty and your duty conflicts with what’s in your self-interest, then (and only then) you have made a sacrifice. In other words, your duty is of moral import iff you have made a sacrifice, hence what I said in (2).

Note also that (2) is very different from the (wrong) idea I was stating in my original post that “sacrifice [is] the moral ideal.” It just says that if an action you take is something you want to do, then you deserve no moral praise for taking that action.


I think that part of the reason I was confused is that (1) and (2) are intertwined. The concept of (2) is natural given (1), but (2) is unnecessary in Objectivism because an ideal Objectivist’s moral duty (his self-interest) lines up perfectly with his desires.

Another reason I think I was confused is that:

The world is in a better state → there’s more value being created → there’s less sacrifice happening → there are fewer actions being taken that are of “moral” import.

So the better the goal of EA is being achieved, the less morally praiseworthy action is happening. Or equivalently, the more “morally praiseworthy” action is happening, the worse the state of the world becomes. This situation is not literally contradictory, but it’s quite paradoxical (which I’d guess is a big part of why mainstream morality looks so different from EA).

It seems to me like there are some “closed” deontological moral systems which look like the following:

  1. Our moral duties are fulfilled completely if we just follow the rules.
  2. We already infallibly know the rules because they came from divine revelation (or Platonic intuition or whatever).

I don’t know what argument could possibly convince a follower of such a morality to adopt science, so it seems like Elliot would have to classify such moral systems as “trivial moralities.”

I guess I can see how such systems are trivial: The moral goals are always “easy” to achieve, because they aren’t bound to knowledge creation in any way.

Right, cuz on the altruist premises, I COULD have given it away instead of smashing it, so it was a pointless, selfish act. It’s like worse than wasting food when there’s hungry people or whatever.
(And you know, even on MY premises, which are not altruistic, pointlessly smashing an iPhone for no purpose or cuz I’m angry is worse than giving it away to some random bum, so the altruists are not totally wrong here…)


I’m not sure I quite follow the Kant quote.

What is the moral significance of being “indignant rather than despondent or dejected over his fate”?

Are you thinking that for the person where “adversities and hopeless sorrow completely take away the relish for life”, and who “is indignant rather than despondent or dejected over his fate and wishes for death”, it would be in his interest to kill himself, but he doesn’t out of duty, so he’s making a sacrifice?

If so, I think a clarification may be in order. I think that there is actual self-interest and perceived self-interest, and sometimes they line up but sometimes they don’t. A simple example is that a person may genuinely perceive investing in some sketchy cryptocurrency or whatever to be in their financial self-interest, and be totally wrong about that. They’re motivated by self-interested goals but are mistaken about the means of achieving those goals.

I think that in many typical cases, someone who is despondent or whatever wants to be happy, but has had some tragic circumstances or problems that have caused them to give up. So they may see suicide as the best available option. But if they learned stoicism or whatever, they might learn to think about their circumstances in a different light and be happy with their life. So I think I disagree with the example.

I guess I was interpreting this as describing a situation where it’s in the actual self-interest of a man to kill himself, but he doesn’t.

Yeah, it’s a weird example because it’s quite rare that it’s truly in the self-interest of a man to kill himself. The man would have to be in a concentration camp or something.

Mhm, self-interest =/= momentary desire. If you act against your desires it’s not necessarily a sacrifice, but if you knowingly act against your self-interest it is by definition a sacrifice.

E.g. it’s not a sacrifice to act against your desire to play video games by studying for tomorrow’s exam instead, but it is a sacrifice to give up your dream job and go take care of your sick mother. The former is of no “moral import,” but the latter is.

By the way, this distinction is what I was talking about when I said that the concept of “moral import”