Lmf's quick questions about CF

Topic Summary: I will use this thread to ask questions about CF that are not long enough or complicated enough to deserve their own thread.

Goal: Better understand what CF says, on its own terms. In thinking about some of the stuff in another thread, I realized that it would behoove me to better distinguish those of my criticisms of CF which are substantive from those which are merely semantic. To do that, it would help to better understand what CF is actually saying.

CF relevance: Obvious.

Do you want unbounded criticism? (A criticism is a reason that an idea decisively fails at a goal. Criticism can be about anything relevant to goal success, including methods, meta, context or tangents. If you think a line of discussion isn’t worth focusing attention on, that is a disagreement with the person who posted it, which can be discussed.) Yes.


Q1) What does CF say that an idea is?

I want a definition. In this article, ET says

You can define “idea” in different ways.

but I don’t think he proceeds to define it.

CF says that an idea is some thing which can fail or not fail at achieving a goal, in a given context. But I don’t understand what this “thing” is.

My definition is something like the following: an idea is an attempted identification of a fact of reality. My concept of “idea” is very close to my concept of “proposition.”

CF’s concept of “idea,” whatever it is, seems to disagree with mine, for I say that a fact of reality is either true or it is not true, regardless of what one’s goal is. Relatedly, I think that an idea is not the type of thing about which we can predicate that it works or doesn’t work. E.g. there is no sense in which an idea like “that’s a chair” works or doesn’t work. All I can predicate about “that’s a chair” is that it is true, it is false, it is probably true, etc.

An idea is like a thought but involves a little more structure or organization. A thought or idea is a mental unit (which is often a grouping of other mental units, not a simple, basic thing). The specific details of the data structure aren’t known.

So, an idea is a goal-oriented thought. It succeeds or fails at goals. It can be evaluated in terms of goals. It’s not arbitrary, pointless or purposeless.

This is a broader concept than how you’ve taken it:

there is no sense in which an idea like “that’s a chair” works or doesn’t work.

Propositions succeed or fail (work or don’t work) at the goal of being true. CF’s concept of ideas is a superset of propositions.

And propositional thoughts can also be evaluated for other goals too. There are many reasons one might think or say something is a chair besides just to try to make a true statement. E.g. one’s idea that something is a chair might work, or not, for the purpose of choosing something good to sit on.

My concept of “idea” is very close to my concept of “proposition.”

I think propositions are too narrow of a category. “Getting a burger” is an idea but not a proposition. It’s a plan of action which may succeed at a goal like satisfying hunger. It could be a component of a proposition like “Getting a burger will satisfy my hunger” but I think the plan itself – just getting the burger – is an idea on its own. You could think about it independently from your hunger and could use or evaluate it for other goals.


Can an idea be arbitrarily complex?

Like, besides considering some propositions and noun phrases to be ideas, am I correct in assuming that CF would also consider Atlas Shrugged or the blueprints for an airplane to be ideas?

Calling an entire book one idea would be unusual terminology and I mostly try to use words in ways that are reasonably close to standard English terminology. And it’s really hard to avoid refutation with that many targets for criticism. And I haven’t found any need to call something like Atlas Shrugged one idea because it covers many topics, but the blueprints for designing one single type of airplane do sound like something I might refer to as one idea. I don’t think CF philosophy relies on any length or complexity limit for the term “idea”.

The reason why I asked is that I am trying to understand what CF means when it talks about ideas that contain counterarguments to attempted refutations.

In order for an idea—an idea by itself, with no outside help—to address every attempted refutation, in order for it to answer an entire library of criticism, it would sometimes have to be very long and complex. In particular, this must surely mean that many ideas end up being so long and complex that we can’t store the entire idea in our crow consciousness at once.

Do you deny this?

Because it’s not clear to me that calling something an “idea” when it doesn’t fit in consciousness agrees with standard English usage. Such a thing would also seemingly contradict your statement that an idea is a mental unit.

Ideas often contain what could be called footnotes to other ideas. The footnoted ideas are in some sense part of the first idea and also in some sense separate.

I don’t understand what you mean. Could you give an example of two ideas, one of which contains footnotes to the other?

edit: never mind. I think I maybe get it now. Thinking.

In an old thread, ET emphasized a philosophical principle to the effect that it is not safe to ignore an error whose cause you don’t understand.

Q2) What is meant by “safe”?

My normal way of thinking about the safety of ideas is that it is a relative term. Because of fallibility, no conceptual identification is ever perfectly safe, so technically, we can only say that some are more safe or less safe than others. But that contradicts CF, because such a comparison is an epistemological degree judgement. As a result, I don’t have a crystal clear idea of what is meant. (Does it mean that the idea that you should ignore the error is refuted? If so, why? What refuted it? And relative to what goal?)

Here’s an example that may further clarify my confusion:

ET, like everyone else, sometimes makes typos. A typo is an error, because when someone types a word, he has the implicit goal of spelling it correctly (with some exceptions, e.g. if the goal is to quote somebody who made a typo).

It is hard for me to imagine that ET actually understands the underlying cause of why he makes typos. Knowing that would require a knowledge of the inner workings of the human brain that I don’t think anyone possesses at present. He probably knows some things about his typos, like I bet he has some knowledge about situations where they are more likely to occur, but that isn’t the same as understanding a cause.

So in what sense is it unsafe for him to ignore typos? He seems to think (rightly) that his typos are not a big deal.

It’s common for terms to be usable both in a degree way and in a binary way. E.g. “it’s not hot out” is a standard thing for someone to say who would also say “it’s very hot” or “it’s hotter than yesterday”.

The use of a term in a binary way does not imply a disagreement with using it in a degree way too. Nor does it imply perfection. E.g. saying it’s not hot out does not mean it’s perfectly cold, or that there is zero heat outside.

Degrees of safety are not epistemological degree judgments (although there are many ways to define them and you could probably come up with a version that is). They are not degrees of goodness of ideas, degrees of truth, degree of belief, degrees of strength of arguments, or that kind of thing. I don’t see safety as playing a key role in epistemology.

I do know about the underlying causes of typos. I don’t have perfect knowledge down to absolute foundations, but I have substantial knowledge going down several layers.

I’ll try to explain what I was talking about.

I have repeatedly observed people make mistakes, have no significant understanding of the cause, and not think that’s important. I think they’re wrong and that behavior is unwise and unsafe.

E.g. they will say something illogical that doesn’t make sense. Then they won’t want to investigate the issue. Without using these words, they basically put it in the category “My mistake was I said stuff that didn’t make sense”, and then are satisfied with that (because they do it routinely and think that’s unavoidable?) rather than wanting deeper knowledge of what error they made, why, how they made it, how to prevent it from reoccurring, how to keep it within boundaries so it doesn’t cause arbitrary problems, etc.

Typos are something I have within reasonable boundaries; they aren’t causing unbounded problems for me. I see people with errors that they can’t contain to any clear boundaries at all, because they barely know anything about what happened, and they still resist investigating the error. They often dismiss the error as small/unimportant, without being able to give any explicit reasoning for how/why they know it’s small/unimportant, and in cases where I doubt it’s small or unimportant.

Yes, I know this.

But the fact that the concept of safety measures something that lies on a continuum means that it has to be used as a relative term. If you want to say as a binary judgement that something is or isn’t “safe,” you have to have some context in the back of your mind; you have to be implicitly comparing its safety to the safety of something else.

Part of my problem is that I still don’t know what context you have in mind for your judgement about safety. See also below.

To clarify, I took safety here to be used in the metaphorical way, where an idea is “safe” insofar as it is not in “danger” of being refuted. For example, compared to most of my other ideas, my idea about what I had for lunch today is extremely safe, but my idea about what I’m going to have for dinner today is not safe at all (i.e. I have some idea, but I could easily change my mind). I have seen you use “safe” in this exact manner before, e.g. you once said something about Objectivist forums on which all of Ayn Rand’s ideas are safe.

It seems almost obvious to me that this notion of safety plays a key role in epistemology. And it is closely related to what I mean when I talk about degrees of confidence in an idea: I think I should have a high degree of confidence in a safe idea, and a low degree of confidence in an unsafe idea.

But it sounds like here, maybe you meant “safe” more literally, as in you think a person is incurring danger, injury, or risk?

Okay, then what are the underlying causes of typos, in the case of a person like you who has an enormous amount of experience writing and reading? What is the “significant understanding of the cause” that you possess?

I can imagine understanding the underlying cause of some typos. For example, maybe some of a person’s typos are because he has his spellchecker disabled, or maybe some of a person’s typos are because he needs glasses and can’t see the words on his screen clearly, or maybe some of a person’s typos are because he never ended up figuring out how individual letters are pronounced in the English language. I could also imagine someone writing “it’s” instead of “its” because he doesn’t understand the difference. But generally, I think explanations like that only work for a very limited subset of typos, and that the best explanation for the rest of them (if we had it) would look more like “neuron 1665918C fired incorrectly.”

All of that is just a long-winded way of saying that I would like a specific explanation of what causes your typos, rather than a general explanation of what causes the typos of a village idiot. (Or, if you don’t want to get personal, you could give specifics for a hypothetical writer who is at approximately your level.)

If it’s hard to explain… don’t worry about it I guess. I am curious, but this was kind of a digression anyway. I already learned the information which is important with respect to my original question, which is that you think you do understand the causes.

This is hard to reply to because I think it has parochial errors which would require clarifications and significant effort to discuss while not leading to important benefits.


Here is an example that is much simpler than my one about typos, but is—I think—of a similar nature:

There are two rooms that I commonly enter. These rooms are both locked with keypads, which have different passcodes. The passcode to one of them is something like 1-2-34; the passcode to the other is something like 4-1-23.

Commonly, when I enter the first room, I absent-mindedly key in the passcode of the second room, and vice versa. This is an error. What would you speculate is the cause of an error like this? (Edit: and if you don’t know, what sort of action would you take in my situation? Would you ignore the error like I am doing?)

In case it’s not obvious, I am not asking this because I’m trying to fix a problem having to do with doors. (I don’t think the error I described matters. I don’t consider it a real problem, and I think it is safe to ignore an error like this in any realistic context.) I am asking because I am trying to understand something about CF.

Some potentially relevant information:

  • The two rooms are easily distinguishable: I never get one room confused for the other.
  • The two keypads are identical.
  • The numbers on the keypads are legible: I don’t ever have trouble telling which button is number 1, which is number 2, etc. (I almost never read the text anyway, I know which button is which by its position.)
  • I don’t have problems with mechanically keying in the code once I decide to key it in. Like I never “fat-finger” press 5 instead of pressing 4.
  • I never forget the passcodes, and I assume that I would never make this sort of error if I were in full focus.
  • I immediately key in the correct passcode after getting it wrong the first time.

(I assume this won’t be as hard to reply to, but please let me know if I’m wrong.)

The error sounds fixable but not high priority. And it might take more time and effort to fix it than to keep making the error until you stop using these keypads.

The error causes failure at goals like “get through the door on the first try” but not at goals that are very important to you. You’re still getting the door open pretty quickly and reliably, so your current method succeeds at goals like “get through the door in a reasonable amount of time” (that goal is important for some of the goals you care about in your life).

It sounds like the passcode entering habit was created in connection with the keypads not the rooms. No step was included in the habit to check which room you’re in and then choose the right passcode. This could be fixed with intentional practice sessions or by using the keypads in a more slow, deliberate, conscious way until your subconscious learns the new steps to follow.

I believe that people have free will, and that it’s impossible to force someone to see the truth if they are committed to evading reality.

I think this belief clashes with CF’s ideas about debate, because I think it entails that it is impractical to win a debate against a person who chooses to be dishonest.

I will explain what I mean by quoting a story from Surely You’re Joking Mr. Feynman, recalling a time when Feynman was lodging in a rabbinical seminary:

And then one day – I guess it was a Saturday – I want to go up in the elevator, and there’s a guy standing near the elevator. The elevator comes, I go in, and he goes in with me. I say, “Which floor?” and my hand’s ready to push one of the buttons.

“No, no!” he says, “I’m supposed to push the buttons for you.”

“What?”

“Yes! The boys here can’t push the buttons on Saturday, so I have to do it for them. You see, I’m not Jewish, so it’s all right for me to push the buttons. I stand near the elevator, and they tell me what floor, and I push the button for them.”

Well, this really bothered me, so I decided to trap the students in a logical discussion. I had been brought up in a Jewish home, so I knew the kind of nitpicking logic to use, and I thought, “Here’s fun!”

My plan went like this: I’d start off by asking, “Is the Jewish viewpoint a viewpoint that any man can have? Because if it is not, then it’s certainly not something that is truly valuable for humanity. . . yak, yak, yak.” And then they would have to say, “Yes, the Jewish viewpoint is good for any man.”

Then I would steer them around a little more by asking, “Is it ethical for a man to hire another man to do something which is unethical for him to do? Would you hire a man to rob for you, for instance?” And I keep working them into the channel, very slowly, and very carefully, until I’ve got them – trapped!

And do you know what happened? They’re rabbinical students, right? They were ten times better than I was! As soon as they saw I could put them in a hole, they went twist, turn, twist – I can’t remember how – and they were free! I thought I had come up with an original idea – phooey! It had been discussed in the Talmud for ages! So they cleaned me up just as easy as pie – they got right out.

They won the debate against Feynman, not because their ridiculous position is true, but because of the creativity they (including centuries of other Talmudic scholars) put into winning the argument.

Feynman wouldn’t want to spend much time debating them, because he knows that he has very little to gain from winning the debate. He’s right, and he knows it. He has better things to do.

On the other hand, the rabbinical students have everything to gain from winning debates like this. It’s their whole career. And more importantly, they feel an intense need to rationalize their views; they need to feel that their views enjoy the sanction of reason. And they can put tons of effort and creativity into coming up with a clever argument. It doesn’t even need to be that clever, it just needs to be a pain to refute. And if someone does waste lots of his time refuting them, well they can just keep trying and waste even more of his time.

[I think CF would say something like: if you can prove that this is what the rabbinical students are in fact doing, then you have won the debate. But the problem is that it’s extremely difficult to prove someone is being dishonest. E.g., from introspection, I think that dishonesty is extremely widespread, and there are tons of people I *suspect* of being dishonest, but I have not once been able to *prove* that anyone is being dishonest (in an intellectual context, that is).]

Now, for my question. Suppose that these rabbinical students came onto CF and wanted to “prove,” via debate, the rationality of not working on the Sabbath or whatever other nonsense position. For the same fundamental reason that they won the debate against Feynman, i.e. that they are dishonest and Feynman is not, is it not plausible that they could win the debate tree on this specific issue?

Do you want to play devil’s advocate and try to win a debate from their side?

Also: Brandolini’s Law

Probably not: Seeing your suggestion prompted me to spend a lot of time thinking about what sort of arguments I could make and how it would go, and I am now much less sure that it is easy for the rabbinical students to win.

The observations which led me to write my post were the countless hours of arguments that I have had in the past with people like the rabbinical students. However, I’m realizing that these arguments (on both sides) were deeply incompatible with CF’s framework of decisive arguments.

I remain intuitively unconvinced, maybe because of my ignorance about CF.

I think the main reason Feynman lost is he engaged inside their area of expertise, which is fine to do if you want to but unnecessary. Engaging with them becomes important when they make claims about my field. E.g. if they say the Sabbath has a fundamental role in the (objective, general-purpose) theory of epistemology/rationality, that claim would be relevant to me, but I’d be able to engage with it largely by speaking about my own area of expertise, not about the details of their religion. (Similar to how I have to learn some details of theories of induction to engage with rivals, but only a limited amount, and I can still win debates even if they spent a lot more time on induction than I did.) The stuff they’ve spent many hours studying won’t help them much with answering standard questions in my field like how do you evaluate competing knowledge claims and why is that way good and what is wrong with alternative ways.

And, yes, if I think their methodology is bad I can challenge that rather than engage within it. The relevant types of methodology are part of my field.

That’s understandable. It’s hard to demonstrate how great CF is (or find out it’s not) when no one will debate. I have the secondary claim that I’m open to debate and they aren’t, rather than the primary claim that I won the debate, and that secondary claim is less satisfying (and is commonly claimed by liars).

I also have the (not terribly satisfying) claim that their writing is poorly written/organized and/or doesn’t address a bunch of key questions to the point that it’s very hard for me to make a productive, deep debate tree without their participation. Or put another way, most rival ideas assume (without arguments, or with only limited arguments that don’t address my reasoning) premises that I disagree with, so almost all their literature is irrelevant to my debate with them. In my experience, most people don’t know enough about their own premises, and don’t want to deal with that stuff, and that’s actually one of the significant factors causing people to not want to debate me.

I don’t know how to fix the situation (without e.g. 20 skilled helpers or a very rich or very popular helper) and my rivals don’t offer Paths Forward.

Yes. I think I came to approximately the same conclusion after writing my post.

I was thinking of it as: by making his argument at all, Feynman implicitly accepted their premises. This put him deep inside their territory (their “area of expertise”), so he had basically lost the debate before it even started. He’s debating on their terms. Objectivism actually talks about that a lot as something which you shouldn’t do.

The way to actually win the debate against the rabbinical students (“win” in the informal sense at least) is to go after their more basic premises, by asking them things like: How do you know that it’s bad to press elevator buttons on the Sabbath?


Let’s take an idea like P := “It is plausible that aliens exist.”

I think CF says that if you take P completely literally and standing on its own (not as something which includes e.g. footnotes), then it is refuted, because it doesn’t address the criticism “why?”, the criticism “source?”, or things like that. It doesn’t explain the reasoning.^‡

Am I wrong? If not, how do you get around the following issue?

In standard logic, we say that a proposition is refuted if its negation is proved. So to refute P is to prove

¬P := “It is not plausible that aliens exist.”

Clearly “why?” is not a proof of ¬P.

Also, the claim that “why?” refutes P is incompatible with another thing CF says, which is that a refutation of an IGC triple is a reason why the idea fails at achieving the goal in the context. Formally speaking, P is an IGC triple where the idea is the thing in quotes, the goal is describing reality, and the context is the obvious one (e.g. I mean existence in real life, not existence in a novel or something). So to refute P is to give a reason why the thing in quotes fails at the goal of being true. In other words, to refute P is to prove that the thing in quotes is not true. In other words, CF also says that to refute P is to prove ¬P.
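
To lay the objection out symbolically (just a sketch in my own notation, which isn’t CF’s; Ref(x, P) abbreviates “x refutes P”):

```latex
% Sketch of the objection above. Ref(x, P) abbreviates "x refutes P";
% the notation is illustrative, not anything CF itself uses.
\begin{align*}
&\text{(1)}\quad \mathrm{Ref}(x, P) \iff x \text{ is a proof of } \neg P
    && \text{(refutation in the standard-logic sense)} \\
&\text{(2)}\quad \text{``why?'' is not a proof of } \neg P
    && \text{(a bare question proves nothing about aliens)} \\
&\text{(3)}\quad \therefore\ \lnot\,\mathrm{Ref}(\text{``why?''}, P)
    && \text{(from (1) and (2))}
\end{align*}
% So if CF counts "why?" as refuting P, it is not using "refute" in the
% standard-logic sense of proving the negation.
```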



‡ I understand that, of course, P being refuted by “why?” doesn’t mean very much in the end, because I can easily make a new idea which has a footnote addressing the criticism. Like P' := “It is plausible that aliens exist. This is because Earth has life, life must have sprung up from non-life somehow, the universe is very big, and statistically speaking there must be a lot of other planets with similar conditions to Earth.”
