Don’t Suppress Your Intuition [CF Article]

GOAL: React to article. Relate article to my personal thinking.

I found this article a fun and exciting read. I like the topic and find it relatable to my problem situation. I think I live primarily by intuition and tradition, but when I’m in learning mode I normally try to focus on explicit ideas. I have respect for intuitions because they often come from traditions (I think I learned this from ET’s emotions essay, Fallible Ideas – Emotions). I have been hesitant to share intuitions for a variety of reasons that I had not really considered until now.

One reason is that I have worried that just saying I have an intuitive disagreement with something would sound like an unreasonable, low-effort, or unimportant thing to share. So I guess I was thinking it would be a bit of a non-starter in a discussion and not a very helpful thing to share or talk about. I think my biggest hesitation is throwing out comments on things that I haven’t thought much about: I have been concerned that sharing statements of intuitive disagreement without some explicit detail is premature. My guess is that I can try resolving this problem by expressing my confusions and intuitive disagreements in whatever limited ways I can come up with. I’m happy to try changing my ideas about this and try sharing more about intuitions.

Another reason I have not shared intuitions much is that it’s hard for me to understand what my intuitions are without introspective work. More specifically, I think I tend to mask my intuitions with superficial rationalizations. I get confused and think that I really believe the rationalization, but that rationalization will often fall apart with minimal introspection. I think this means my intuitions aren’t very stable, which is another reason I haven’t shared them much.

Article quote:

If people hide their intuitions and focus only on making explicit arguments that aren’t their real reasons (because they think explicit arguments are better and more rational, so they’re trying to be a better, more rational person than they actually are), then they’re being dishonest and sabotaging discussion.

The above quote is what made me think about how I could be rationalizing my intuitions and hiding my reasoning from myself.

Article quote:

The rational way to debate, when you can’t articulate all your ideas, is to say things like “I have an intuition which conflicts with that idea. I intuitively don’t want to believe or do that idea.” You should communicate the problem even if you can’t provide details.

I haven’t really considered how much to try to communicate thoughts like this, but it sounds kinda liberating and definitely worth putting out there more.

Half-baked ideas are problematic, especially when unlabelled.

There’s not much point trying to respond to or correct an idea that the person won’t believe next week anyway, even if you don’t reply.

Half-baked ideas are more commonly explicit ideas, not intuitions. Intuitions tend to be more stable, though sometimes you can mis-identify what is triggering the intuition.

E.g. you intuitively dislike a situation. You connect this to X, but it’s really that you dislike Y. Later, X changes and you still dislike it. Your intuition could look unstable but it isn’t.

Unstable short-term intuitions tend to be created by stuff like biases, but being biased against something is generally a long-term stable intuition. There is a stable intuition involved, but some of the details are less stable, and if you try to figure out the nature of the intuition you might get it wrong. In other words, your explicit analysis about what’s going on with the intuition may be a new, unstable idea. But if you keep your comments very simple, like “I intuitively dislike something about this whole situation,” then that will generally be stable, not something that would change next week. The fact that you react differently to some other situation wouldn’t contradict it. It could just mean that you haven’t yet figured out the pattern.

This is a good article. I think if people who think bad things about Elliot took the time to read his articles like this one with an open mind, they’d reconsider. It’s hard to miss his kindness here.

Gathering Data Points from Intuitions

Often, you can learn a lot about intuitions by asking questions (you and your debate partner can both contribute to this). If the situation was X, what side would your intuition take? What if the situation was Y? What about Z? Keep checking a bunch of different scenarios and giving your intuitive response to them, and then you’ll be able to figure out in much more detail what the intuition says and wants. With trial and error, you can figure out which issues make a key difference that determines what side the intuition takes and which issues don’t matter much. This can help narrow down what your intuitive position is and enable discussing it and talking about upsides and downsides of it, problems it has, etc. And it can help you figure out the reason for the intuition.

This is an interesting idea; I don’t think it’s something I’ve tried like that before.

Mental Models of Intuitions

You can think of an intuition like a mini person inside you with some ideas and values, but who only communicates in limited ways, so you’re trying to figure out what they want and think. You can also think of an intuition like an idea, just like any other, which has some knowledge and reasoning – which isn’t in words but could be in words, and is the same knowledge whether it’s in words or not. So you can view an intuition as wanting, valuing, saying or thinking something, just as you would talk about an explicit idea. That’s an approximation, because it’s people who value, want, say or think things. But it’s understandable, and we don’t have well-known and clearly better mental models to use. Our mental models tend to mix up people and people’s ideas some, which makes sense because ideas are the most important part of people.

This is something I have done: I try to ask the intuition questions or offer solutions and see how I feel in response.

I often find it useful to have the conversation out loud. I don’t fully understand why, but I think hearing my words out loud helps me connect to the part of me that I’m struggling to communicate with.

Seeing the words typed or written is also helpful, but in my experience less so.

I would guess that this would vary from person to person depending on how they learned to think and how they formed the intuitions in the first place.

Mean People

A lot of people who are pretty good at explicit arguments are pretty mean. Let’s call them “rationalists”. They react negatively to intuitive disagreement. They believe intuitions are bad or irrational. They treat themselves that way and also treat others that way. Because this is widespread, most people are pretty reluctant to share their intuitions in debates, especially when they don’t have prior friendship or at least rapport with the person they’re talking with (and it’s even worse in group settings when there are many different people who could potentially say something mean).

I have definitely treated myself and others like this in the past. It’s something I have tried to stop doing. I don’t remember the last time I did.

Unless you’re extraordinarily good at explicit, conscious analysis, then you should talk about intuitions frequently in debates and discussions. Trying to put things into words more, or talk about things you can’t explain well, are common, normal parts of debates. Any social pressure against that is irrational and is suppressing progress.

I think coming here I expected the opposite of this (I think I expect some social pressure against speaking about intuitions as a baseline, and even more so when it’s associated with serious discussion). I’m glad I read this and corrected my misconception.

Yeah I like this series (5 articles) too!

I didn’t always know this stuff. I don’t know when I learned it. I’m sure I mentioned pieces of it in discussions before, but I only wrote it down in good articles recently. At the time I started writing the articles, I already knew this stuff; I didn’t figure it out by writing the articles or right before.

Yeah “rational” forums tend to be pretty anti-intuition.

You may run into issues with that here too but the knowledge in the articles (and the ability to quote the articles) may help you stand up to it and get people to come around (who wouldn’t have on other forums with no articles). Or if someone doesn’t listen and continues with normal attitudes, it might not matter so much. If you are confident you have the moral high ground, then it can help you resist the social pressure, not feel bad, and not suppress your intuitions.

It’s also common to self-pressure over these things, sometimes based on unintended interpretations of what other people said or are thinking without saying. So hopefully the articles will help you and others avoid doing that. People have a lot of this stuff (like a conception of rationality that is hostile to intuition) internalized.

Like someone can say something on autopilot that hints at anti-intuition pressure, and then you can take the hint due to your own autopilot, and that can pretty easily happen even if both of you would say pro-intuition things if explicitly asked. Preventing that takes a bunch of practice to change your subconscious (another topic I have articles about on the CF site).

From Elliot’s article: [EDIT: badly presented quote]

If the situation was X, what side would your intuition take?

I tried this idea today.

Context: I exercise most mornings (I use the Home Gym app and I’m currently doing a 28-day full-body plan). I started this two and a half weeks ago and have done the exercises almost every day.

Today after posting a few times I wanted to start my exercise. I had some sort of conflict with the idea but couldn’t tell why.

My mind was racing a bit from a long post I wrote, and I guessed that was the reason. So I thought about being able to pause the exercise whenever I wanted, in case I wanted to think about my post some more. This didn’t solve my conflict.

Then I tried Elliot’s idea and considered an alternative situation where I just do the first exercise (which is always jumping jacks) and then decide whether to continue or not. This felt okay, so I made a start (and continued through the exercises; lots of push-ups today!).

I guess my conflict was something like:

  • I wasn’t feeling confident about finishing and part of me would feel bad about not finishing it all
  • Part of me maybe felt coerced, like I’d consciously decided to do the whole thing but I’d sprung that on my subconscious all at once, kind of like if you’re doing an activity with someone else and they suddenly decide to change everything and start doing something else without talking about it

I think ideas like this are really important to learn/create and are really powerful.

I think an idea like “if you’re reluctant to do an activity, just do the first bit and make a start” is very, very conventional. It’s very similar to what I ended up doing.

But that’s a pretty parochial idea and doesn’t include any understanding of the reluctance itself. I didn’t just use that; I was thinking in terms of communicating with my subconscious, actually dealing with the root cause of the problem.

Elliot’s idea is much, MUCH better than conventional ideas here, because it’s a universal approach that may help with any sort of subconscious resistance, and it focuses on the underlying subconscious problem solving rather than only considering the practical side.

Here’s maybe a more fundamental way to look at it, which my approach to intuition partly comes from:

  1. finding decisive solutions, strictly better solutions, or win/win solutions (not compromises or win/lose solutions)
  2. addressing all ideas, disagreements, arguments, sides, people, criticism, etc., instead of ignoring some

Related to those, two of CF’s main and original ideas are:

  1. decisive arguments, not weighted factors (which is very important for enabling you to actually resolve disagreements, reach win/win solutions, etc. – whereas weighted factors lend themselves to compromises; see the sketch below)
  2. Paths Forward, which is about addressing all criticism instead of ignoring some (which risks staying wrong when better knowledge already exists, which is a much more avoidable error than staying wrong about something where better knowledge doesn’t currently exist)
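
To make the decisive-vs-weighted-factors contrast in point 1 concrete, here is a minimal sketch in Python. It’s my own illustration under assumptions, not code from CF or its articles; the Idea class, the example plans, and all the numbers are invented for the example:

```python
# Hypothetical illustration of decisive evaluation vs. weighted factors.
# Nothing here is from CF's materials; names and numbers are made up.
from dataclasses import dataclass, field

@dataclass
class Idea:
    name: str
    # Criticisms that, if correct, mean the idea fails at the goal.
    decisive_criticisms: list = field(default_factory=list)
    # Pros/cons with weights, as a weighted-factors approach would score them.
    weighted_factors: dict = field(default_factory=dict)

def weighted_score(idea):
    # Weighted-factors approach: sum the weights and pick the highest scorer.
    # A known refutation can be outvoted by enough positive factors.
    return sum(idea.weighted_factors.values())

def passes(idea):
    # Decisive approach: an idea either has an unanswered decisive criticism
    # (so it fails at the goal) or it doesn't. There are no "small" errors
    # to trade off against positives.
    return not idea.decisive_criticisms

plan_a = Idea("plan A",
              decisive_criticisms=["relies on a budget we don't have"],
              weighted_factors={"speed": 0.9, "cost": -0.2})
plan_b = Idea("plan B",
              weighted_factors={"speed": 0.4, "cost": 0.1})

# Weighted factors pick plan A despite the standing refutation:
print(max([plan_a, plan_b], key=weighted_score).name)   # plan A
# Decisive evaluation keeps only non-refuted ideas:
print([i.name for i in (plan_a, plan_b) if passes(i)])  # ['plan B']
```

The point of the sketch: weighted factors let a known, unanswered refutation be outvoted by positives, which pushes toward compromises, while decisive evaluation treats any unanswered decisive criticism as failure at the goal, which pushes you to keep looking for an option with no standing refutations.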

Ideas about win/win solutions instead of compromises appear in earlier sources including Eli Goldratt, Ayn Rand and Taking Children Seriously. Probably elsewhere too, but I don’t know the history (I’d be interested if anyone knows or researches it).

Also, my view of the subconscious (and practice, automatization, mastery, etc.) is partly based on Objectivism. And some parts of it have existed in our culture for a while, like Four stages of competence - Wikipedia (late 1960s, but I’d guess similar ideas existed before that; certainly the idea of practice is much older).

I’ve seen mention of your win/win decisive solutions approach a few times (such as in the podcast in How Better Debates Would Improve the World).

I think I have an intuitive agreement with win/win decisive solutions; I’m fairly sure I’ve come across something similar from Ayn Rand. I can’t remember exactly which of her books I’d have found it in, and I think it’s something I’ve internalised in the years since. I previously had a lot of reservations about using statistics to decide what’s true anyway (and generally think people are super bad at interpreting the significance of statistics).

Because my prior understanding is intuitive, it’s hard to say how much of your idea is new to me here (it kinda just fits in with my intuition pretty effortlessly). I think the debate tables helped clarify the process of doing it better (i.e. breaking a goal into absolute requirements and checking whether an idea meets those requirements). I think it’s something I need to use in practice to understand better.

Anyway, I thought I’d mention that, as it might be weird not to say something about it since it’s come up multiple times.

I can’t say the same for Paths Forward. It seems good to me (I’ve gone through the curi.us articles), but I have some sort of intuitive resistance to it. I read it and consciously think it’s good, and I want to be rational, so I think I should follow it, but my subconscious doesn’t want to; I guess I feel some sort of pressure/imposed obligation from it.

I think maybe it’s because it opens up a lot of uncertainty about how much time I’d be committing by trying to follow it. I don’t know how to estimate how much time it would take up, so my subconscious is creating some sort of disaster scenario where I’d have to spend all my time on it and end up in an overwhelm situation again.

I think the root thing I don’t know is: what are the criteria for ideas that I should have public Paths Forward for? I don’t think it can be everything; everyone has tons of ideas about all sorts of stuff (from subconscious ideas about the qualia they experience to long, complex economic ideas). I don’t know which ideas I have that I should build Paths Forward for.

From Curiosity – Paths Forward Summary

the way to deal with ALL ideas that disagree with you is you either 1) write a refutation or, most of the time, 2) refer to a refutation already written by someone else or you. (you must take responsibility for it. if it’s wrong you don’t just blame the author, if you used it and it’s wrong, then you were wrong).

I guess this is one kind of idea that should be included in a person’s Paths Forward, but I don’t know about other kinds. Even this, I think, needs some discretion with scale: some disagreements are very small/low-impact, and it could be a big burden to try to write up answers to all of them.

So I have some guesses:

  • Ideas that disagree with popular or high-impact ideas
  • Ideas that have significant impact (they could help a significant number of people)
  • Ideas about something new, some new way something could be done (maybe without a clear benefit, but it could be a good new way to look at some problems)

Are you thinking of “win/win” and “decisive” solutions as two separate things or as one concept?

In CF, there is no such thing as low-impact criticism or errors.

An error is decisive – causes failure at a goal – or it’s nothing. If it doesn’t cause failure at a goal, it’s not a small error, it’s a non-error.

Every criticism that is decisive regarding a goal you’re pursuing should be of interest to you. Decisive means that if the criticism is true, then you will fail. Simply ignoring criticism like that is always a substantial risk which will have high impact if you’re wrong.

That really helps! If it’s goal-related, that clarifies the sort of thing it’s important to apply it to.

Take my project as an example (Project: Part 0: Considering major life choices). That’s a major goal-related thing (or at least a working-out-goals thing), so that seems like the sort of thing it’s important to build Paths Forward for. I think when I’ve completed that project it would be a good idea to write up my conclusions in a series of articles (or, where relevant, reference accepted answers).

So I’d guess that in principle it would still be good for someone to answer all criticisms of all their ideas, but if those ideas aren’t related to their goals then there’s no urgent need to do so.

All ideas are part of infinitely many true IGCs (idea-goal-context combinations) and infinitely many false IGCs. Even when limiting to the current context, goals can specify anything for success or failure criteria.

So besides considering criticisms related to your goals, it would make sense to consider answering criticism re common goals, or re goals other people actually have in good faith, but not all goals.

If you meant that some of your ideas aren’t related to your goals, so you don’t need to worry about criticism of them (though it’d be nice to), I disagree. In short, all your ideas have some sort of purpose (goal). You don’t have ideas for no reason.

You may have an idea that is in your head but which you’re not actively using right now, in which case you might deprioritize criticism of it. Some of your ideas are in much more active use than others. If you’re willing to respond to criticism of an idea with “yeah, you might be right; I don’t know; I don’t think the idea is relevant to what I’m doing with my life right now, but I should consider that first if I were ever to use this idea again” that is OK. Sometimes an idea may be currently relevant in ways you don’t recognize, so it can be important to debate whether it’s relevant if the critic thinks it is relevant. But if it’s just some idea you heard and believed 20 years ago and didn’t forget, but never practiced or automatized, and it has nothing to do with your current field, then maybe fixing errors in it doesn’t really matter now.

Another way to approach it is to say you want to focus on bottlenecks in the way of your main goals, not on optimizing anything. If some issue would not be a priority even if the critic is right, that’s fine. He can criticize your priorities, explain why actually it affects your current priorities in a way you weren’t aware of, or drop it.

I may have been lumping them together; that doesn’t seem right now that you’ve asked.

I have no reservations about agreeing with the “decisive” part. I don’t think I have the same certainty with win/win (not necessarily disagreement, maybe some lack of clarity).

So I found this:

From article:

Suppose you have conflicting ideas X and Y. Then you can decide: “this would take too long to sort out whether X or Y is better. so I will just do Z right away b/c it’s not worth optimizing”. Z can be a win/win.

I’m using this example to test myself on the win/win concept.
So: A (the pro-X faction) and B (the pro-Y faction) want different things. A wants X (and, I’m inferring, not-Y) and B wants Y (and, inferring again, not-X). It may be that X and Y are mutually exclusive in this example (e.g. the situation is “what to do for the next hour?”, where doing both things, each for the full hour, is impossible).
So doing X (or Y) would involve suppressing either B (or A) respectively. Either A or B would be unhappy in this situation.
So coming up with Z, which both A and B would be happy with, is a win/win for A and B. X and Y are both win/lose options.

I have some conflict with pursuing win/win. I’m going to come up with some answers to it.
Q: What if A is exclusively focused on X and doesn’t want anything else?
A: B doesn’t want to do X. Does A want to coerce B to do X? Does A hate B? Why is it important to do X with B if it will make B unhappy? It’s unreasonable to want to do something that requires B to take part when B doesn’t want to do it.

Q: What if A (or B) doesn’t really like Z much, they just dislike it less than Y (or X)? Then both A and B are doing something that they both dislike.
A: Then they can try to come up with different ideas, or work out how they can both like Z (or even X or Y) together.

Q: A and B don’t want to (or don’t have time to) come up with lots of ideas; they want or need to start doing something right away.
A: If Z is something they both dislike less, rather than something they both like, isn’t that still a better option than either of them doing something they really dislike?

Q: It is literally impossible to do anything but X or Y, and there’s no chance to come up with other ideas; either B or A must be unhappy.
A: I don’t think win/win is about making decisions in emergency situations.

I guess the broad answer to the reservations is: A and B value each other’s well-being; they’re doing stuff together, after all. If they hate each other and don’t want each other to have a good time, they shouldn’t be doing stuff together in the first place.

Whether there are inherent conflicts of interest between different people – who don’t especially value the well-being of the other – is an old political philosophy question.

Rand wrote “The ‘Conflicts’ of Men’s Interests” arguing there aren’t inherent conflicts that prevent social harmony.

Mises wrote about it too. Bastiat wrote about economic harmonies. I have a “Harmony of Interests” section in Liberalism: Reason, Peace and Property · Elliot Temple

Another way to view it is there’s no inherent conflict between society and the individual (at least that’s the classical liberal view that I espouse). We don’t have to choose whether the group or individual wins – they can both win. (Or two individuals, or two groups, can both win.)

Or put another way, there’s no inherent need for war. Peace and social harmony are possible, rather than basically prohibited by incentives and by a supposed logical requirement that, for some to prosper, there must be an exploited underclass which will want to rebel when it can.

This is all pretty opposite to the Marxist class warfare view about society as special interest groups/classes that push for a larger slice of the pie for themselves and the best we can do is find tolerable compromises between who gets how much.


If it works between strangers, it can also work between parts of one person, since strangers are basically a harder case and lots of the same reasoning can be borrowed from that case.

I think the idea of finding mutual benefit in more individual-oriented problem solving (e.g. between family members, friends, or parts of one person) may have come after the political philosophy idea, but I don’t really know.

There are solutions for emergency situations but I don’t think you should worry about those cases until after having a good understanding of regular cases.

It’s been a while since I read The Virtue of Selfishness. I guess that’s where my existing intuitions about decisive win/win grew from.

Page 47:

Only an irrationalist (or mystic or subjectivist—in which category I place all those who regard faith, feelings or desires as man’s standard of value) exists in a perpetual conflict of “interests.” Not only do his alleged interests clash with those of other men, but they clash also with one another.

I think this is a very important part, one I had been trying to find the words for before. I think most if not all disagreements with win/win that I’ve seen come in some form of feelings or desires.

Most people don’t seem very interested in introspection about those feelings or desires. Very often, if they did introspect, they’d find that their desire places a demand on others which they mostly wouldn’t admit to or would try to evade responsibility for. I think there are a lot of anti-rational excuses about feelings and desires (such as that they’re “innate” or “instinctive”) that can keep people from that introspection and from changing their mind even if they realise their feelings make unreasonable demands of others.