Whether morality is primarily about social/interpersonal stuff or about dealing with reality effectively

Excluding the parenthetical, I agree. I don’t think the point of morality is to avoid harming ppl, tho (sometimes the point is to avoid harm to yourself – e.g. self-defense). I do think that all moral topics have something to do with harm. IDK if there’s a good example of a moral topic that has nothing to do with harm.

Can we pause for a moment and take stock? I’ve just updated the convo tree.

OK

One thing that occurs to me is that the tree is v bottom heavy (or mb lopsided on the side that wasn’t intended to be the focus). Should we keep focusing on what we’ve been focusing on?

I think these are worth talking about:

(convo tree node fragment: “J - q: what’s the role of morality in Galt quote?”)

(convo tree node fragment: “J - confused”)

I also have a saved reply to: (convo tree node fragment: “J - M’s next clause”)

What do you think is best to focus on?

They all seem related to me, so mb there’s a way to cover them all at once – but that might be too much to chew in one bite.

Mb the finding food node is a good way to start that, tho, since it’s more focused than trying to describe a big general thing like what topics morality relates to and the intersection I mentioned.

I think this seems like an important point.

Based on more recent stuff you said, I think you have the idea that morality can always be connected to harm in some way? Like, morality comes into play when driving a car because if you act irresponsibly and immorally (drive drunk, distracted, whatever) you might harm yourself and others - is that an example of your view?

This view doesn’t actually match what you said in your blog post:

(Sometimes those projects/decisions only impact a single person, in which case they’re probably amoral – that’s okay. Morality is about interpersonal harm, and people are free to live their own lives (wrt themselves) how they wish. People are also under no obligation to maximize morality – just to not act immorally – so provided there’s no malice, most goals are okay.)

How would it even be possible for a project/decision to only impact a single person if you are defining your future self and your current self as different people?

Also, in the first sentence of that quote, it seems like you are saying that decisions that only impact a single person are probably amoral. I took that to mean that you think that morality doesn’t apply to decisions that only impact a single person.


I think it’s possible when it concerns a short time period (e.g. the next two hours before bed) and more trivial things (like should I watch an episode of South Park). Lots of stuff is like that. Lots of stuff isn’t, though, like deciding what to focus on for your career. Important decisions mostly involve one’s future self, and trivial decisions mostly involve one’s current self.

(Note: After writing this, I’m not as sure about it – like it seems more like even trivial stuff involves your future self more than I originally considered, e.g., watching South Park vs Ellen or something.)

[META]

My plan is to focus on replying to this node.

I might make other replies (like this one to ingracke) if they seem short/easy and I can address a branch early.[1]


  1. note: saying this, I’m not so sure about it – like conversation branches aren’t like real tree branches: they don’t grow on their own. what’s the urgency? ↩︎

If you think your future self and your current self are different people, and you define “current self” as you in the current moment, then any decision could only ever affect a future self (and possibly others), and could never affect your current self (which I think was the point of @ingracke’s question).

Based on your reply, it seems like you have some idea of what counts as “future self” that’s different than “yourself at any point in the future (as opposed to yourself in the current moment)”, but I’m not sure how you’re thinking about it exactly.

I think the harm stuff is deeply/foundationally connected to morality; it converges in some sense/way.

One of the reasons for that is thinking about questions like: when is someone able to make moral decisions, and when aren’t they? My next thought there is: if you can’t do harm (e.g., because your freedom/autonomy has been taken away), how can any of those decisions involve morality? If someone’s freedom has been taken away, there could still be some moral decisions they could make, and mostly it seems like those decisions would involve morality b/c they are about that person’s own future (e.g., getting out of that situation, mb with a lot of preparation and things). Also, if a decision can result in harm (to you or someone else), how can morality not be involved? It seems like these are exactly the sorts of choices that do involve morality.

Yes.
Note: That’s not to say that all dangerous driving is immoral (EMS drive dangerously in time-critical circumstances, tho precautions are taken, too). But in general, because driving can be done dangerously (thus potentially causing harm), the choice of how to drive is a moral decision.


I think I’m convinced now that the choice to use the word “interpersonal” was a bad one. I haven’t really changed my mind about anything else, though.

[meta]

This isn’t actually quoting anything, right? (Searching the topic didn’t show anything)

oh yeah, NOT A QUOTE

I think decisions matter to different degrees, and have different scopes of effect. Particularly, some decisions (like what to have for lunch) have a local effect – there’s no reason to think they will have a substantial effect on your future. That’s qualitatively different from decisions that we know will have a substantial effect on your future. Those sorts of local decisions are the ones that only really involve your current self. Put another way: since there’s no real impact on your future self (salad vs pasta won’t change your life much), those decisions don’t involve your future self. Caveat: habits around these decisions might affect your future self. Like if you choose KFC for lunch every day, that could be a problem.

So the definition of current/future self in this case depends on whatever breakpoint we choose regarding the magnitude and scope of the impact of choices. WRT decision making, that’s the qualitative difference between current/future self.

If future versions of you are different people, then this isn’t a coherent view. Multiple different versions of me wrote this message. There are multiple future versions of myself that will exist in the next two hours. There are multiple future versions that will exist in the next second.

And even looking at the next two hours before bed, there are lots of decisions I make that are going to affect a non-trivial version of future me. Like, should I do the last things on my daily todo list now, before I watch a movie, or should I wait until after I watch the movie and leave them for my more tired future self?

I don’t think you guys are focused well re goals and constraints.

I’d suggest making a discussion tree with only important nodes which people explicitly ask to add to the tree because they see a clear purpose to the node. Putting ~all messages in a tree (in some cases, i think there are even multiple nodes per message) adds a lot of clutter.

It’s also important to consider broader context. Why does this discussion matter a lot? Why talk about it over something else? What value are you trying to get from it? That can help guide actions within the discussion and avoid local optima.

I don’t think you are actually making a serious effort with this issue. You aren’t actually trying to define your terms in a reasonable way or take your own words seriously.

It reads like you are making ad hoc arguments to try to defend your original post.

You originally said “Morality is about interpersonal harm”. So to try to defend that point of view, you decided to define your future self as a different person so that you can define decisions that only harm the decision-maker as being about “interpersonal harm” because now there is more than one person involved.

But taking that view seriously leads to not having any decisions that only affect a single person. So then you try to put arbitrary limits on when or how often your future selves can actually be considered different people, so that you aren’t contradicting the other things you said.

This doesn’t read like you had a serious, considered point of view that you are trying to explain. It reads like you contradicted yourself, and now you are making up ad hoc defences.

And the original blog post (comment? nested self-reply) itself appeared to consist of ad hoc rationalizations re failure at a goal. They seemed to be focused on local optima like saving face, without considering global impacts like contradicting CF or Rand.

Now, again, I’m seeing statements that appear to be convenient for an immediate/narrow/local purpose, but which have unintended, problematic implications about the bigger picture.

self-reply – you can see the full microblog node + tree structure w/ posts here: https://xk.io/n/2380/as:topic. (it’s chronological so it’s near the bottom)

I disagree about the serious effort. I think that’s a fair reading, though.

I do do that sort of thing sometimes, but I think there are qualitative differences between those times and this one. I’m not saying that my definitions are good, or adequate, or appropriately serious, but I have considered future-people and morality a lot. It’s how I think about things like abortion vs murder-feticide. It’s also how I think about big things like the future of civilization. Regardless of whether those are good topics for me to think about, or not, my ideas about future-people can’t be entirely ad hoc.

I’ve also been conscious of this sort of thing in earlier posts – like I thought about whether I was retconning my argument or not. (Maybe that’s a really bad thing and I should have mentioned it – like maybe it was skipping over a problem that was a blocker for this conversation.) The conclusion I came to is that the best thing to focus on is the thing that is going to lead to a successful discussion – and that’s bigger than my blog post, it’s some foundational ideas about morality.


Yes.
Though I decided that future-me is a different person long before I wrote the blog post – which is why I’m still talking about future people after saying I’m convinced that ‘interpersonal’ was a bad word to use:


I don’t think the limits I said are arbitrary.

[meta]

I feel like I’m in a bit of a catch-22 atm.
on the one hand, my posts seem ad hoc (note: I did bring up future-people in my first post, deliberately, b/c I knew that might be a topic where Justin and I weren’t aligned)
and on the other hand, I could have written a lot more about my ideas on morality up front, but I deliberately have been trying not to preemptively write a massive post. If I had written a bunch upfront, wouldn’t that be worthy of criticism too?

but – what’s the goal of this conversation? Up till recently, I thought it was about whether my ideas about morality are wrong/problematic or not. Talking about this stuff feels like it’s in conflict w/ that goal. We could stop talking about morality and talk about ad-hocness, but I don’t see how avoiding the appearance of ad-hoc responses would help me figure out the actual problems with my actual ideas about morality. Like, I still don’t see a decisive conflict between my ideas and Rand’s.