Applying Yes or No Philosophy

Topic Summary:
Applying Yes or No Philosophy.

Goal:
I want to understand Yes or No Philosophy better so that I can apply it to my problems.

Why are you posting this in Unbounded?
I was considering putting it in the Elliot Temple section, but I decided to put it here because I have more than one confusion (more than just what I describe in the post below). I will probably post more stuff here later.

Do you want unbounded criticism? (A criticism is a reason that an idea decisively fails at a goal. Criticism can be about anything relevant to goal success, including methods, meta, context or tangents. If you think a line of discussion isn’t worth focusing attention on, that is a disagreement with the person who posted it, which can be discussed.)
Yes.

Decisively refuting solipsism with Yes/No Philosophy

Here, I will define solipsism to be the theory that is just like the “common sense” theory of reality, except that it tacks on an unexplained claim that everything is really a part of my dream. In particular, solipsism claims that there are things external to my conscious mind that “kick back,” that these things obey the laws of physics, and that some of these things act exactly like human beings.

There are easy criticisms of solipsism that could be made, like “what does it even mean that everything is a part of your dream?” or “why do you say that everything is a part of your dream?”, to which a solipsist has no good answer. In the context of Yes or No Philosophy, it should certainly be the case that criticisms of that sort are considered to be decisive.

However, ET defines decisive criticism in the following manner:

A decisive criticism says why an idea doesn’t work (for a purpose, in a context). In other words, it points out that an idea is an error and therefore will result in failure (unless the criticism is mistaken) at a particular goal or purpose. A decisive criticism is incompatible with the idea it criticizes: you can’t (rationally or reasonably) accept the criticism and also accept the idea.

What I’m confused about is this: What is the “goal or purpose” that solipsism decisively fails to achieve? My issue is that for any practical goal I might want to achieve, solipsism doesn’t clash with meeting it, because solipsism makes the exact same predictions about physics as the “common sense” theory. For example, solipsism is just as good as the common sense theory when it comes to meeting goals like driving to the grocery store, or sending a rover to Mars.

It should be clear that in the above, there’s nothing special about solipsism in particular. Indeed, many of the “bad explanations” that David Deutsch discusses at length in his books (like the Greek myth of weather, the Copenhagen Interpretation, or the Inquisition’s theory of astronomy) would be similarly difficult for me to decisively refute.


My non-solution to this problem:

The criticisms of solipsism are decisive with respect to the goal of “having a theory of metaphysics with a minimal amount of unexplained baggage,” or something like that.

This isn’t a satisfying answer at all, because it’s not clear why it should be my goal, and it’s extremely vague.

Good question.

The goal of solipsism is something like: provide a model of reality which helps us understand the world we live in and accounts for our observation data.

That, notably, is also the goal of realism. Solipsism is a rival theory to realism – an alternative approach to the same problem/goal.

Solipsism says life is a bit like playing Grand Theft Auto but the cops are programmed with better AI, and the physics make injuries matter more, so it’s harder to get away with crimes.

This viewpoint is impactful, conceptually, when thinking about things like moral philosophy. (Murdering the “people” in GTA basically doesn’t matter and is normal, because they are not actually people.)

We also have ongoing background goals in our life and thinking. We can take those and explicitly add them into some other goal when they’re relevant, or just remember to consider them separately when they come up.

An example of a background goal is “don’t get seriously injured”. I could explicitly add this into my “go to the park” goal, but generally I won’t. I’ll just notice if a proposed travel plan would seriously injure me and bring up the background goal at that point.

Another background goal people have is something like “elegant theories”. A more specific version of this is “no unnecessary parts in solutions – parts that don’t actually help with the goal – because then you have purposeless stuff that can’t even be evaluated for success or failure, since its purpose is not specified”. In other words, all our ideas should be purposeful. For example, if a solution has two parts, and part A is a full solution, and part B is fully unnecessary, then you should wonder what the purpose of B is. Why do you want to include it instead of doing just A? What is it for? If we figure out what goal it’s for, then we could evaluate it; without that, we shouldn’t accept/use/include it, because there is no reason or purpose for doing that.

For example, say I want to walk to a park that is 1 mile north.

Plan 1: Walk 1 mile north.
Plan 2: Walk 0.5 miles north, walk 1 mile west, walk 1 mile east, walk 0.5 miles north.

Both plans will get me to the park, but one has a fully unnecessary detour. I should reject plan 2 unless I have some purpose/goal for the detour part. The detour part does not help with the goal of getting to the park 1 mile to the north (you could invent some special context where it does, but let’s assume it doesn’t).
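
To make the detour criticism concrete, here’s a minimal sketch in code (the leg lists and direction table are made up for illustration). It computes each plan’s net displacement – the thing the goal actually needs – and the total distance walked, which is the cost:

```python
# Each plan is a list of (miles, direction) legs. Directions map to unit
# vectors so we can compare net displacement against total distance walked.
DIRS = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def evaluate(plan):
    net_x = sum(miles * DIRS[d][0] for miles, d in plan)
    net_y = sum(miles * DIRS[d][1] for miles, d in plan)
    total = sum(miles for miles, _ in plan)
    return (net_x, net_y), total

plan1 = [(1, "north")]
plan2 = [(0.5, "north"), (1, "west"), (1, "east"), (0.5, "north")]

print(evaluate(plan1))  # ((0, 1), 1)       -> at the park, 1 mile walked
print(evaluate(plan2))  # ((0.0, 1.0), 3.0) -> same destination, 3 miles walked
```

The detour legs cancel out: plan 2 reaches the same place while walking 3 miles, which is what it means for those parts to contribute nothing to the goal.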

Solipsism is (in the viewpoint from Deutsch’s The Fabric of Reality) like the detour. It takes a viable solution and then tries to piggyback on that by adding stuff that doesn’t actually help with the original goal.

I think that’s about right. That’s a good goal because we want ideas to be both purposeful (aimed at a goal/purpose – aka for something) and effective at their purpose. Otherwise what’s the point of them? Part of this involves looking at multi-part ideas and checking if all the parts have some purpose or some could be removed because they don’t actually contribute to the purpose. Extra baggage is usually harmful because of having wrong consequences or practical downsides, but if it’s very carefully designed to barely matter or make zero difference, it’s still harmful because it wastes mental resources: it’s extra stuff to remember and think about that clutters up our worldview.


Okay, I think I get it. I’ll try to phrase the solution in my own words below.


Given a solipsism-like theory T, either

  1. T comes with some practical recommendations for how to meet our goals, or
  2. T doesn’t come with any such practical recommendations.

In case 1, we can refute T if it gives recommendations that decisively fail at meeting our goals.

In case 2, since (by assumption) common sense realism has the same practical content as T, we can say that T decisively fails to meet the meta-goal of not wasting any more mental effort than is actually required to meet our goals, and this is obviously a good meta-goal to have.

A fun side remark re. decisive criticism:

I was reminded of this funny scene in Atlas Shrugged where Dagny and Rearden are looking for the 20th Century Motor company, and they ask a man in Starnesville for instructions.

“Listen,” said Rearden, “can you tell us whether there’s a road to the factory?”

“There’s plenty of roads.”

“Is there one that a car can take?”

“I guess so.”

“Which one?”

The man weighed the problem earnestly for some moments. “Well, now, if you turn to the left by the schoolhouse,” he said, “and go on til you come to the crooked oak, there’s a road up there that’s fine when it don’t rain for a couple of weeks.”

“When did it rain last?”

“Yesterday.”

“Is there another road?”

“Well, you could go through Hanson’s pasture and across the woods and then there’s a good, solid road there, all the way down to the creek.”

“Is there a bridge across the creek?”

“No.”

“What are the other roads?”

“Well, if it’s a car road that you want, there’s one the other side of Miller’s patch, it’s paved, it’s the best road for a car, you just turn to the right by the schoolhouse and—”

“But that road doesn’t go to the factory, does it?”

“No, not to the factory.”

“All right,” said Rearden. “Guess we’ll find our own way.”

The Starnesville man gives Rearden three theories that are mostly fine, but Rearden decisively refutes them because they fail at his goal of actually getting to the factory.


Yeah, that’s a nice scene. Check out these too :)

https://www.yesornophilosophy.com/rand-quotes


I agree that decisive arguments are the ideal, but making every argument decisive seems extremely difficult. I have been trying to put some of my every-day dilemmas into this Yes/No framework, but I have been failing. I’ll demonstrate my confusion with an example (the simplest one I have).

There’s a trivial problem that I’m always indecisive about, which is: Should I go to the office today or should I do my work at home? (I’m a PhD student, so going to my office is completely optional for me).

My real situation is a bit more complicated than this, but for simplicity let’s suppose that my indecisiveness can be boiled down to a situation where I only want to meet three goals:

G1: Work in a place with a comfortable desk.
G2: Work in a room with windows.
G3: Get work done today.

and I can only think of two courses of action:

A1: Go to my office and work.
A2: Stay home and work.

The problem is that going to the office satisfies G1 but not G2, and working at home satisfies G2 but not G1. Thus A1 and A2 fail to solve the problem:

P1: Find an action that would satisfy G1 and G2 and G3.

I actually know of more actions than just A1 and A2 – e.g. maybe I could work in the library instead – but let’s suppose for the sake of argument that I hadn’t thought of that yet, and am nonetheless in a situation where I need to solve P1 right now. I have decisive criticisms of A1 and A2, so I should not perform either action, and (we’re assuming) I know of no other solution. What, then, is the rational thing to do?
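
To make the deadlock concrete, here’s a minimal sketch of the situation in code. The names (`satisfies`, `decisive_criticisms`) are hypothetical – just a way to display the yes/no structure, where each action either passes or fails each goal and P1 requires passing all three:

```python
# Binary (yes/no) evaluation: an action either satisfies a goal or it doesn't.
# No weights, credences, or degrees of support.
satisfies = {
    "A1 (office)": {"G1": True,  "G2": False, "G3": True},
    "A2 (home)":   {"G1": False, "G2": True,  "G3": True},
}

def decisive_criticisms(action, required_goals):
    """A decisive criticism of an (action, goal) pair: a required goal the action fails."""
    return [g for g in required_goals if not satisfies[action][g]]

P1 = ["G1", "G2", "G3"]
for action in satisfies:
    print(action, "fails at:", decisive_criticisms(action, P1))
# A1 (office) fails at: ['G2']
# A2 (home) fails at: ['G1']
```

Both known actions are refuted relative to P1, which is exactly the deadlock.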

Non-solution 1:

My usual thinking on matters like these is incompatible with Yes/No Philosophy, and it looks like this: G3 is a much more important goal than G1 or G2, which barely matter at all. Therefore I’m not going to waste any time thinking about a solution, and I’m just going to let my whims make the decision for me about whether or not to go to the office.

Non-solution 2:

The Yes/No Philosophy solution, from what I can piece together, would be to employ an “algorithm” like the one in Fallible Ideas – Avoiding Coercion. I should replace P1 with another problem:

P2: Given that I don’t know of any actions to take that would satisfy G1 and G2 and G3, what should I do?

I don’t think I really understand what a solution to P2 would look like, since the question is kind of vague. I’m interpreting it as meaning something like the following:

P3: Given that I don’t know of any actions that would satisfy G1 and G2 and G3, and given that I think G3 is more important than G1 and G2, are there any possible actions which satisfy (G1 and G3) or (G2 and G3)?

The answer to P3 is trivially yes. Some solutions are:

A1’: Go to the office, try to ignore the fact that it sucks not having a window, start working immediately.
A2’: Stay home, try to ignore the fact that my desk at home is a bit uncomfortable, start working immediately.

But it doesn’t seem like this has actually solved anything, because I have no more certainty about what to do than I had before: how could I possibly decide between A1’ and A2’? And aren’t A1’ and A2’ only superficially different from A1 and A2?

Possibly you’re wanting the impossible, which is an error. But we shouldn’t jump to that conclusion. You may have other options like going to a cafe. You may be able to modify a plan, e.g. by getting a new desk for your home. But if those things don’t work, or we just exclude such possibilities for simplicity, then you can’t get all 3 goals and must choose what you want.

Not all goals are possible, e.g. if you wanted to travel to Mars in 1 second that’d be impossible, both in our current context and also, as best we know, it’d still be impossible with any future technology. A goal like “work somewhere with windows and nice desk, without buying a desk” can also be impossible in the current context, though it’s certainly not impossible in the future with new resources, technologies, etc.

If you know no way to satisfy that goal, choose a different goal. The simplest way to do this is by simply removing one or more sub-goals from a multi-part goal.

YesNo helps you lay out your options clearly, as in the sketch below. Working at home decisively solves one 2-part goal, and working at the office decisively solves a different 2-part goal. Both decisively fail at the 3-part goal. From there, you’ll have to decide what you want (which will be one of the 2-part goals or some creative alternative).
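
Here’s a continuation of the earlier hypothetical sketch that shows this layout: drop one or more sub-goals from the 3-part goal and re-check which reduced goals some known action decisively solves (`solvers` is a made-up helper name):

```python
from itertools import combinations

# Same hypothetical yes/no table as in the earlier sketch.
satisfies = {
    "A1 (office)": {"G1": True,  "G2": False, "G3": True},
    "A2 (home)":   {"G1": False, "G2": True,  "G3": True},
}

def solvers(goal_set):
    """Actions with no decisive criticism relative to this goal-set."""
    return [a for a, sat in satisfies.items() if all(sat[g] for g in goal_set)]

# The full 3-part goal, then every 2-part retreat.
for goals in [("G1", "G2", "G3")] + list(combinations(("G1", "G2", "G3"), 2)):
    print(goals, "->", solvers(goals) or "no known solution")
# ('G1', 'G2', 'G3') -> no known solution
# ('G1', 'G2') -> no known solution
# ('G1', 'G3') -> ['A1 (office)']
# ('G2', 'G3') -> ['A2 (home)']
```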

What you should definitely not do is keep the 3-part goal and then take an action which you expect to fail at that goal. In other words, you should not work at home for the purpose of achieving all 3 sub-goals, since you know that won’t work.

I don’t see how credences, medium-strength arguments, weighted factors, or other alternative epistemologies would help with this, given my criticisms of those things.

That’s fine if you’re fairly indifferent between the goals (G1+G3) or (G2+G3). If you don’t think the difference is important, then you can pick either without a good reason. You don’t have to optimize everything. I imagine it would be possible to come up with some reason to pick one over the other on a particular day, but I can also easily imagine that you’re better off using that effort for something else.

EDIT: Btw, I was discussing how to pick on one day. Decision policies over time can be a bit different. My guess is it’d be advantageous to work at both places some of the time, rather than only using one every time. E.g. if you always picked the same one, then you’d never run into people IRL during certain times of day and days of the week at the other place. Whereas if you do some of both, you can get some IRL contact with people at both places at those days/times.

Great.


I don’t know how to decide what new goals to adopt.

I know abstractly that goals are knowledge, and that acquiring better goals should happen through a process of conjecture and criticism. I have some idea of what that process looks like in the case of a scientific theory, or of evolution, or of language interpretation or whatever. But when it comes to goals/morality, a lot of questions seem really arbitrary and unanswerable (even my dilemma of whether or not to go to the office tbh).

E.g. Should I want vanilla ice cream, or should I want chocolate ice cream? Maybe I just haven’t fully purged the is/ought barrier stuff from my system yet, but I haven’t the slightest clue how I could think critically about the goal of eating chocolate ice cream as opposed to vanilla. Contrast this with science, where almost any well-posed question has a definite answer.

Agreed.

Never mind. I think I was being a rationalist here. I was seeking justification without reference to contingent facts. In reality there’s always going to be something that differentiates chocolate and vanilla ice cream. E.g. one is cheaper or one tastes better to me, or I had vanilla last week and am sick of it.

OK. A few relevant things.

Epistemologies aren’t like cooking recipes. There are no step-by-step instructions to get the answer. They provide some guidance/structure/organization/tips to help you think better, more effectively, more objectively, more rationally. They also explain conceptually how thinking works. But you still have to think and use creativity, and you could think well or poorly – you could be clever or you could miss lots of stuff.

Re “arbitrary and unanswerable”, a major issue is complexity. Some areas of life are really complicated so it’s hard to do exact or comprehensive thinking. That makes it hard to get “right” answers because you ignored some factors, made some estimates, used some approximations, etc. But lack of perfection, completeness or comprehensiveness doesn’t mean your answers can’t be objective and meaningful. You can do some analysis, do it well, and do a good job of focusing your attention on a good sample of the important issues. (Or actually, for systems with many dependencies and 1-5 constraints, you can focus significant attention on every key part. But I won’t get into that now.)

Maybe you just need examples of rational analysis of complex issues where it’s hard to get anything resembling a complete right answer.

With where to work, there are many factors that matter. You can make a decision that isn’t wrong according to any of those factors. That may not constrain you to a single choice on a particular day. There isn’t just one right answer (unless you do ~unlimited thinking to work out every detail perfectly), but there are many wrong answers you can avoid. The more you think, the more things you can rule out as wrong.

Have you been to the office recently? If not, you should go today or soon. Why? Because never showing up might be noticed by someone. And because some information may be posted on campus. There could be a big sign on the door to your building. There could be fliers everywhere about some political issue. It’s good to have some idea of what’s going on at a school that’s relevant to your life. Often you’ll go and there’s nothing notable, but it’s still worth going occasionally so you aren’t just fully out of touch.

Also, one of your colleagues, peers or advisors might want to speak to you and make a plan to catch you in your office. Or he might vaguely intend to speak to you and remember if you bump into him IRL. You might do the same – you might think of things to say to someone if you see them in person. And even if you don’t prefer IRL conversations, you might get better results saying certain things to certain people in person than by email. And even if you don’t care about face-to-face socializing, other people do, and their opinions of you matter to things like how friendly they are at your PhD thesis defense (which is generally not even close to an objective or fair process – people who don’t like you can make it way harder and make you do extra work and change things). Working at the office also enables having lunch with someone there.

I don’t know who you live with, or their schedule, but they might like to see you during the workday sometimes. You could have lunch with them or chat with them during a break. Even if you live alone, you might bump into neighbors, see staff at a local store or restaurant, invite a friend for lunch who lives closer to your home than office, etc. Little things like doing laundry during the day (if there’s a shared laundry room) or checking mail during the day (if a shared mail area) can result in seeing people that you wouldn’t otherwise. You can evaluate that as good or bad. It’s not especially important. There’s a lot of flexibility. It helps argue against almost always working in one location, but it doesn’t constrain you to a specific pattern.

Have you gotten much light or fresh air recently? Been outside much? Gotten any exercise? Traveling to the office might be good for you. Even just walking to and from your car is a little bit of air and moving your body. Working with an open window letting in light and air can be good too. You should do that sometimes, not never. It doesn’t need to be all the time consistently though. You can make judgments about whether you’ve been too deprived of something recently and therefore should pick a working location, today or soon, to help with that.

Do you have issues with neck or back pain, or posture? Or is there a risk you’ll develop those issues from long, physically repetitive work hours? If so, don’t work in the worse physical circumstances too often. Getting up for a couple of minutes every hour and walking around a little (and maybe stretching) is a good idea, but not sitting in the same chair or the same position all the time matters too. So don’t always make the physically worse choice, and also get some variety.

Spending much time thinking about this stuff can be bad, so the right thing to do may be to have a policy, e.g. assign particular days of the week to particular working locations and stop thinking about it. Or maybe being flexible about it is worthwhile when someone wants to see you, but the rest of the time you can stick to your defaults. Or maybe you have moods when you wake up in the morning and it’s worth considering what mood you’re in and which working location will work better for today’s mood, and then choosing by that and considering an adjustment only if you haven’t used one work location for two weeks in a row.
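
As a toy illustration of the policy option – the defaults here are made up, not a recommendation:

```python
# A made-up default policy: fixed weekday assignments, plus an override
# for when a concrete reason (a meeting, a mood) beats the default.
DEFAULTS = {"Mon": "office", "Tue": "home", "Wed": "office",
            "Thu": "home", "Fri": "office", "Sat": "home", "Sun": "home"}

def location_for(day, override=None):
    """Use the default unless a specific reason says otherwise."""
    return override or DEFAULTS[day]

print(location_for("Tue"))                     # home
print(location_for("Tue", override="office"))  # office, e.g. a colleague wants lunch
```

The point of a policy like this is to stop re-deciding: you only think about it when an override candidate actually comes up.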

One could try to be more detailed, but I don’t know how, and there are many more important things to optimize. For example, you could consider the color temperature of the lights at each location and their exact effects on your melatonin production schedule and sleep (maybe not relevant if you consistently stop working early enough in the day), the heating and cooling systems and their exact effects on your physical body, the chance of catching a disease from other people there, and your ability to change any of these things (put in new lights, set up an air purifier machine, wear warmer or cooler clothes, etc.), as well as the costs of all these interventions. In general, most of them aren’t worth the cost, so there is an objectively correct answer: don’t do it. A few cheap alterations to work environments are usually good, and more costly ones should be considered if you notice a significant problem, but otherwise they aren’t worth it.

What’s on the walls and how does seeing it affect you? There’s some objective answer to this. It’s complicated and subtle, and I don’t know how to determine it in detail. And I think it’s a small effect and we shouldn’t try to learn how to deal with it better. But there are objective truths about it. Issues like the type of art, or the color and texture of the wall, make some difference to you.

If you like one and not the other, get the one you like.

If you like both, don’t exclusively choose one over time.

If you had future technology, you could use nanobots to analyze your digestive system and taste buds and figure out the detailed effects of different ice creams (not just flavors but ingredients, temperatures, etc.). There are answers to how those things work that are relevant to making your decisions about what to eat, though they often don’t make a decisive difference to any goal you care about. If you lived forever, and kept making progress, you’d gradually be more successful at achieving goals and take an interest in progressively more marginal goals, and you could eventually worry about details like these. But in the meantime you have much bigger problems that you should allocate attention to instead.

Yes, those are all reasonable examples. Similar examples apply to where to work today. Though often the factors are minor enough that the cost of analyzing them to determine which option is better is much higher than the gain from choosing the better one.


I think I’m familiar with analyses of this sort. I do them all the time. But maybe it would be a useful exercise for me to go through what you wrote and try to explicitly cast your examples into the form of decisive criticisms.

Suppose that I’m in one of these situations where I know about more than one right answer. Or in other words, suppose that I’m in a situation where I have several different un-refuted theories about what to do. If I have to act on one of them right now, which one of them should I act on? Should I choose randomly?

I knew this by the way. That’s why I put “algorithm” in quotes (below Non-solution 2).

Given your current knowledge and your time constraint, what should you do? Be roughly indifferent between your known solutions, and don’t work on a more ambitious goal, since you don’t have time to. So you can choose randomly, by intuition, whatever – as long as you have no criticism of that method. (For instance, if you have a strong intuition, you might want to choose that option instead of choosing randomly, or you might avoid it instead, for fear the intuition is a bias. You could have either opinion, or think the intuition doesn’t matter, depending on context, such as your ideas about your past experiences with your intuition.)

This is similar to when you have plenty of time but you don’t think optimizing this issue is a good use of time, so the time allocated to the matter is near zero. Which is a very common case. There are tons of things where you find a good enough solution and move on instead of trying to come up with some additional, harder goals that some solutions meet and others don’t.
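
As a trivial sketch of that tie-break – appropriate only when every remaining option is non-refuted for the goal you’ve settled on and you have no criticism of choosing randomly:

```python
import random

# Options that survived criticism for the goal actually being pursued.
non_refuted = ["work at the office (G1+G3)", "work at home (G2+G3)"]

# With ~zero time budgeted for this decision, a cheap selection method
# is fine; don't keep optimizing.
print(random.choice(non_refuted))
```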


What, btw, do you do if you have near zero time left (either you’re in a big rush, or you just don’t want to budget time for this issue) and no non-refuted solutions? You rapidly switch to less ambitious goals. (Generally you should give up early enough to have a little time left to make this retreat process orderly, and in case this process itself runs into unexpected difficulties.)

Switching to less ambitious goal(s) can be fine or an emergency measure. Fine can be like “I just don’t care about this much, and don’t want to spend time on it, so when I realized my goal was hard I just switched it to an easy one.” Emergency measure can be like giving up something you wanted because you see that you don’t have time to figure out how to get it. Later you might post-mortem what went wrong, though some people use other approaches like trying to forget about it because it’s painful.

Why change your goals? Because all action should be purposeful, and you should always aim an action at a purpose that you think it will succeed at. If you think an action will not achieve a particular purpose/goal, then rationality demands you either don’t take the action or change your goal(s). That action/goal pair does not work (to the best of your knowledge), so it shouldn’t be acted on (unless you learn something new and change your mind).

What if you think it won’t work, but you want to do it anyway in hopes that your reasoning for why it will fail is mistaken, since you don’t have a better option (given resources like how much more time, energy and thought you’re willing to spend on this, as well as your current knowledge)? That’s fine. But that is a goal change. Your new plan is not “Do action X to achieve goal Y” but “Do X, without expecting Y, but hoping to get Y anyway.” You don’t expect that second plan to fail. The goal of trying something in hopes of maybe getting a particular outcome, but probably not getting it, is easy to succeed at. When the goal is just to give it a shot, you can succeed at the goal whether or not it works at the original goal. You just have to make a reasonable try.

Is giving it a shot reasonable? It can be. Trying it may be cheap, or the cost of the opportunity may already be spent. And you had some reasons you expected your plan to work, before you found a flaw. Those reasons may have more robustness or resilience than you realized. Such things happen. Also, you can sometimes make adjustments on the fly, in the middle – like if the activity takes an hour, you might find a fix for the flaw partway through. But even with no mid-course fixes, you might just not fully trust your analysis and predictions about a complex thing, so there’s a chance your original plan will work (since it has various components designed to get an outcome, it might get it) and the criticism was wrong for some reason you don’t know. That happens sometimes. You also might learn something by seeing the failure happen – sometimes trying it out helps you get more details of how the failure works, which can inspire solutions. (When failure is cheap, e.g. in video games with fast retries, it’s common to do stuff you think will fail just to make sure it works how you think it does, and to see what happens instead of only imagining it.)

Continuing my earlier thoughts (not the preceding paragraph): People sometimes object that changing goals as a way to solve problems is a kind of cheating that trivializes life. Can’t we just make all our goals super easy and succeed all the time? That sounds kinda like Buddhism and giving up on earthly desires. My answer is that you can’t/shouldn’t just arbitrarily change goals, and switching to a less ambitious goal can be a bad outcome in the big picture. But if you can’t achieve a particular thing right now, then don’t just fail at it – make some adjustment that takes into account your knowledge of why you’ll fail. That’s better than nothing. More broadly, it’s never rational to act on a plan you think will fail (or in other words, never use an action+goal pair while having a decisive criticism of it; or in other words, never live by known errors (known to you)).


Okay, this completely makes sense. Brilliant. Thanks.

That was one of the things I was wondering in thinking about goals. If someone asks me how to succeed at X, it looks like it’s always going to be easy for me to tell him that he can do it by changing his goal to Y instead.

I now realize that this is not necessarily easy at all, because there’s always a background goal Z lurking around, and the problem of finding a Y that satisfies Z might be really hard.

16 posts were split to a new topic: TCS and Coercion

I was reviewing this thread to try to understand why no followups and no similar threads later. I think there’s a pattern across many people where they will do some things early on and then change their posting in some way. In short, I read this thread as trying to learn/study CF in some detail and other later threads as largely not doing that.

I didn’t find a clear answer.

One thing I was looking for is something unwanted about it that could lead to avoiding similar threads. Like a bad outcome or a sign of a negative experience in what was said. I noticed high formatting effort in one post (with headings and tons of bolds) which is something I could understand someone not wanting to repeat since it takes work, but I think that’s just a side issue (you could write similar post content with low formatting effort).

Another thought is that people act less familiar at first and more familiar later like they’re my casual buddy, and that causes both some upsides and downsides.

Another thought is that after spending more time on the forum people may copy the behavior of other people here more. I think that should be split into two cases: copying me or others. The main problem with copying me is that people misunderstand me and do it wrong (and don’t ask many clarifying questions) – cargo culting. For copying others, besides inaccurate copying, there are also major issues of copying their flaws or copying things that work better for their context (of having been here 5+ years, having had many past conversations, having read tons of stuff in the past, etc.). Sometimes people copy stuff from people who are kinda stuck and it helps them get stuck too… Maybe this would be clearer to people with a weaker word for “copy” like “learn” or “adopt” – it’s just partial copying done primarily subconsciously.

Another thought is that people don’t want to spend very long in a learning phase. There’s a time limit before they want to be non-newbies, regulars, people already familiar with CF, etc.

Another idea is that they were trying to be a critical peer, but it ended up in a learning interaction which is not what they wanted, so then they avoid repeats later.

I looked for red flags in this thread, in case I missed some the first time, but didn’t find them. Sometimes when I know someone better I can find red flags in their older writing. I did notice an instrumentalist/anti-conceptual/anti-explanatory/pro-predictions attitude that was relevant to later discussion:

I also noticed a type of error that was repeated later (bold added):

But then 18 minutes later, before anyone replied:

There’s a mismatch here between confidence level and idea quality. The really strong wording was followed quickly by a retraction. That’s poor “calibration”, as Less Wrong calls it (or it could instead be due to some kind of bias, emotion or heated attitude). It’s important to have pretty good meta-knowledge of how good your knowledge is – to know how much you know about something, when to hedge or emphasize, and when stuff does or doesn’t need more thought.

Another kinda similar example from later:

You’re right. I don’t know how I was able to blank out the fact that primes are essential for all these basic algorithms of arithmetic.

And IIRC there were 3+ times he wrote something then soon edited to delete it and rewrite it to say something else. A couple times he edited it to say “derp” before editing again. Those indicate, roughly, that he thought he’d reached a conclusion that he was confident about and could send as a final message, but that evaluation was way off, and his own current knowledge was already capable of seeing that the content and evaluation were wrong.

The errors that came up again later are just minor though. I mean they’re important in general but they aren’t important to what I’m reviewing. They don’t explain fewer later posts trying to study CF.

I read this thread as sharing questions/confusions being rewarded with high quality, helpful answers. It looks positive so then why wasn’t there much attempt to repeat the experience later? And I think I’ve seen similar from lots of people.

Some brainstorming on why—from my perspective—I didn’t post follow-ups:

  • Various RL issues (multiple exams) caused me to be abnormally busy from a few weeks after this post, all the way through until this morning. I still technically had time to post on CF, but whenever I was on here in the past couple months it felt like procrastination, so I mostly tried to avoid doing hard stuff.

  • Some of my confusions with YesNo are really confusions with CR, and I don’t want to waste your time with those without having made any effort to find the answers on my own. I have been planning to read Popper for a while, but have been putting it off for the above reason.

  • Some of my confusions about YesNo are/were really confusions about goals, and I think there was one point where I was going to make a follow-up post but decided against it because it was too hard to make it relevant to my question without including RL details that I didn’t want to include. I think I probably wouldn’t be averse to doing something like that now, but I can’t say for sure because I don’t remember what the specific question was.

  • I think the copying the behavior of other people on CF thing that you mention was relevant to my situation. Basically no one on CF makes threads like this, so I think a part of me was worried it would be “weird” or something. However, now that I’ve explicitly identified that, I don’t think the social dynamics crap will affect my decisions anymore.

  • I also just forgot about this thread at some point and stopped thinking about it.