LMD Async Tutoring

I think I’m not finding the Embodied Inc/Moxie topic in particular that interesting any more.

I’m still finding the Friedman/Maximising Profits stuff interesting.

I’ve been finding your ideas about resolving conflicting ideas interesting and have been reading through that article and its links and stuff recently. It’s something I’d like to understand and know how to practise and get good at. Is there something we could do that relates to that somehow? What do you think?

ok well skipping ahead to a few of the answers then (allegedly, in my opinion, according to some online sources… this is just a hypothetical discussion for training purposes that is loosely based on a real story, not an accusation against any real company):

If they did anything to benefit investors or creditors, at the expense of new customers they were just taking on, that is problematic.

If you accept a loan from one person to pay off an existing loan from someone else, shortly before going bankrupt and being unwilling/unable to pay off your loans, that’s doubly unfair. You shouldn’t bring on new creditors in those circumstances (unless they are fully informed about the situation) and you’re also distributing money unequally between your creditors.

If you are taking on new customers in order to benefit investors (including founders), it’s even worse.

Customers were (allegedly) lied to about what was going on with the company, for months, with various red flags (like phone support going offline) being downplayed and excuses being made. One of the effects here was to prevent them from asking for returns within 30 days since they didn’t figure out they were being lied to until later.

In general, when you sell something with a return policy or warranty, you should be setting aside enough money to cover that, unless you clearly have the cash flow or pre-existing assets such that you don’t need to (in which case you should still have accountants keeping track of this stuff, etc.). The money isn’t just immediately available to buy anything; some of it gets allocated to specific categories. If you aren’t doing that, there are fraud issues, because customers would reasonably think that you were doing it: that you had a system in place to allow you to keep your promises. If you have a reasonable system but stop using it because you’re behind on other expenses, and then you keep selling to new customers knowing you’re on the verge of failure, that’s really bad. Well, with a normal product, it’s not that bad. Normal products keep working by themselves, aren’t super likely to break, and might be able to be repaired by a third party if they do break. Whereas they knew this product would fully stop working immediately if the company failed, even if the hardware didn’t break, with no way for customers to repair it.


sure. to begin, why don’t you brainstorm some options or freewrite about what you’d like to do.

Reminder to do some writing on most days and share something at least every 2 weeks.


Sure. I did a freewrite.

I’d like to discuss the ideas, ask some questions, and find some ways to practise pieces of the skill. Maybe there are small, hypothetical scenarios I could find that would be plentiful, that I could practise coming up with solutions for? Maybe there are more basic actual problems in my life I could try a more explicit method on? Maybe TV shows could be good places to look? A problem could be running into emotional bias with your own issues, so maybe starting with something that isn’t about my own life at first would be a good idea? What are some ways to break the skill down?

  • being neutral/objective/avoiding prejudice about which side of a conflict is right
  • not mistreating your intuitions
  • brainstorming solutions
  • criticising solutions
  • articulating problems
  • articulating intuitions
  • testing intuitions by offering hypothetical scenarios

Goldratt’s evaporating cloud diagrams look interesting and seem relevant.

I guess also I’d like to know how to approach an article/topic like this. The first thing that strikes me is, look for things you have trouble understanding. Check your understanding.

Maybe reading through and checking my understanding first would be good? Learning to discuss what I’m reading to check my understanding sounds fun and interesting.

Curiosity – Evaporating Clouds
Curiosity – Evaporating Clouds Trees

sure.

besides some unstructured notes or writing, i’d recommend writing down something structured about what you read like a tree or outline. it can be pretty short, focus on the major big picture ideas, and leave out most of the details. one way to organize it is to focus on (important non-detail) problems and solutions. the solutions can lead to new problems so there are multiple layers.

So you’re aware, I have been doing daily writing most days. I haven’t done anything very complete or non-personal recently that I’m happy sharing. I’ll try to find more things to do like that.


Summary of Resolving Conflicting Ideas:

An important problem in philosophy is how to resolve conflicting ideas. Conflicting ideas come up in a lot of places: when making decisions, taking action, between minds, and within minds. CF says to resolve these conflicting ideas by creating new win/win solutions that address each side of the conflict, not by picking winners and losers of the conflict. We can do this by mentally modelling the ideas as people in a discussion, with us being the discussion’s neutral arbiter.


Notes:

Elliot thinks the problem of how to rationally resolve conflicts between ideas is one of the most important issues in philosophy.

problem: what should we do about the fact that we have lots of ideas that conflict/disagree with each other? implied answer: we should resolve the conflicts, and figure out which ideas to accept and reject.

why? presumably because we rely on our ideas and knowledge for everything we do, so we want knowledge of good quality that won’t lead us into error. Errors and conflicts in our knowledge will thwart our intentions and make our actions ineffective or counter-productive. If we’re rational we want to pursue the truth. True ideas don’t conflict with each other, so resolving conflicts is truth-seeking (if we have conflicting ideas, we know that at least one of them is false).

problem: how do we evaluate the ideas and figure out what to accept?

Rather than trying to pick winners and losers among ideas, Critical Fallibilism says we should find win/win solutions which address all the good points raised by all the conflicting ideas.

So according to CF, picking which ideas in the conflict win or lose is wrong. Simply evaluating the existing, conflicting ideas and choosing one to accept is the wrong approach. It advocates creating new ideas which resolve the conflict, not deciding between the existing, conflicting ideas. It sees the conflicting ideas, as they are, as inadequate in some way?

(Is this because a problem that each of the conflicting ideas have is that they are inadequate to address the conflict between them? (because otherwise there would be no conflict?) Like, maybe the fact that they conflict is a criticism of each of the ideas? I’m not sure about that idea. I don’t yet understand why just picking between the two ideas is wrong.)

So we can model the conflicting ideas within a mind or between multiple minds as people discussing, for whom we are the neutral arbiter. We try and find a solution that each party to the discussion is happy with: a new idea that both prefer to their original idea. We don’t side with one party over the other.

Elliot then links to a bunch of other articles that explain more about the ideas in the article.


Here is a tree that just focuses on problems and solutions brought up in the part of the article before the link section:

overall, looks ok. carry on.

Suppose

  1. Ideas X and Y contradict each other.
  2. Idea X has no refutation of Y.
  3. You have no other refutation of Y (that doesn’t contradict X).

And let’s set aside the case where you have another idea that refutes both X and Y.

This is a criticism of X. You have no rational way to conclude X when alternative Y is unrefuted. You should conclude that you don’t know: you have no way to decide between X and Y, so you can conclude “X or Y or something else” but shouldn’t conclude more specifically just X or just Y because you have no way to narrow it down to just X or just Y. (You can, under time pressure or other resource limits, decide to act on just X or just Y. You should know that’s a risk and that you prefer that risk to inaction, not start actually believing the one you act on is the correct conclusion.)

Sometimes it’s symmetric: Y also doesn’t refute X. Then neither X nor Y is currently good enough to reach as your conclusion.

Sometimes it’s asymmetric: X has no refutation of Y, but Y does have a refutation of X. In that case, one of the options is to conclude Y.

(I’ve omitted some details like about meta ideas and variant ideas.)
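
As a concrete picture of this decision logic, here is a hypothetical Python sketch (the function name and the boolean flags are invented for illustration; they aren’t CF terminology):

```python
# A made-up sketch of the decision logic above. The flags say which
# refutations are currently available to you.

def what_can_we_conclude(x_refutes_y: bool,
                         other_refutation_of_y: bool,
                         y_refutes_x: bool,
                         other_refutation_of_x: bool) -> str:
    y_refuted = x_refutes_y or other_refutation_of_y
    x_refuted = y_refutes_x or other_refutation_of_x
    if x_refuted and y_refuted:
        # The case set aside above: something refutes both X and Y.
        return "neither X nor Y; look for something else"
    if y_refuted:
        return "X (the only unrefuted option here)"
    if x_refuted:
        return "Y (the asymmetric case)"
    # The symmetric case: no way to narrow it down.
    return "X or Y or something else; don't conclude just X or just Y"

# The scenario discussed: X has no refutation of Y, and we have no other
# refutation of Y (and the same holds in the other direction for X).
print(what_can_we_conclude(x_refutes_y=False, other_refutation_of_y=False,
                           y_refutes_x=False, other_refutation_of_x=False))
```

In this sketch, gaining a refutation of Y (setting either of the first two flags) is what would let the conclusion narrow down to X.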

Cool, so, to check my understanding, is the criticism of X in your situation that it gives us no reason to prefer it over Y when we don’t already have a refutation of Y?

X would be improved by including in it a refutation of Y. That would give us a better idea: an idea that includes a refutation of a rival.

This criticism isn’t a refutation of X though, is it? In your scenario, it won’t allow us to conclude Y, right?

Is this because it’s a criticism in terms of a different goal, and not a goal common to X and Y? Or some other reason?


Summary thing:

In the situation you describe, we have a pair of contradicting ideas: X and Y. X doesn’t contain a refutation of Y. We also have no other refutation of Y. So we have nothing to decide against Y with. That’s bad for X: we can’t rationally conclude X unless we can decide against Y somehow. X could be improved by containing a refutation of Y to help us decide against Y and thus in favour of X. X not having this potential improvement is a flaw in X. But this criticism is not a refutation of X, so we still can’t reach a conclusion. We need a new idea to help us reach a conclusion.

Say X and Y are trying to solve P. Are they refuted for the goal of solving P? No. We don’t know that X won’t work for P.

Is X refuted for the goal of being able to confidently conclude “X solves P” given our current understanding? Yes.

We can’t rule X out yet, but we also can’t conclude X yet. Basically, the criticism here is about concluding X, not X itself. (X itself and concluding X frequently aren’t separated, but we can separate them if we want to.)
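
One hypothetical way to picture that separation (an invented table, not CF’s notation) is to attach evaluations to (idea, goal) pairs, so the same idea can get different verdicts for different goals:

```python
# A made-up illustration: evaluations belong to (idea, goal) pairs,
# so the same idea can pass for one goal and fail for another.

evaluations = {
    ("X", "solve P"): "not refuted",  # we don't know X won't work for P
    ("X", "conclude 'X solves P' right now"): "refuted",  # Y is an unrefuted rival
    ("Y", "solve P"): "not refuted",
    ("Y", "conclude 'Y solves P' right now"): "refuted",  # X is an unrefuted rival
}

for (idea, goal), verdict in evaluations.items():
    print(f"{idea}, judged against the goal {goal!r}: {verdict}")
```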

But how to view this stuff also depends on your mental model of ideas. There are two different options. Both are usable models for good reasoning, but they word and view things differently enough that it’s hard to answer your question precisely in a generic way that makes sense in both models. Do you have my Yes or No Philosophy digital product? The article “The Stability of Judgments” talks about this.

Some writing I did today while reading.


From Popper’s Objective Knowledge Chapter 1:

L1 Can the claim that an explanatory universal theory is true be justified by ‘empirical reasons’; that is, by assuming the truth of certain test statements or observation statements (which, it may be said, are ‘based on experience’)?

We have seen that our negative reply to L1 means that all our theories remain guesses, conjectures, hypotheses.

Why does a negative reply to L1 mean that our theories remain conjectures?

They remain conjectures. They can’t be justified, i.e. their truth can’t be demonstrated. If we can’t demonstrate that a theory is true, it remains a conjecture.

Why are they necessarily conjectures in the first place? It is their truth that we conjecture. L1 asks if the claim of the truth of a theory can be justified by empirical reasons. Popper answers no. This answer logically means then that the claim of the truth of a theory remains conjectural.

A theory only becomes a conjecture when we conjecture it. It is the act of claiming a theory is true (conjecturing the theory) that makes it a conjecture.


I’ll respond more fully in another post but I wanted to respond in the meantime and say no, I don’t have the Yes/No product.

If you want to go through it (and/or the CF course) and take notes, ask questions, share thoughts, etc., that would be a good tutoring option. Something to consider.

Let me analyse this.

We can’t rule out X (for P):

Why? We can’t rule it out because we don’t have a reason to think it’ll fail as a solution to P. In other words, we don’t have a refutation of X for P. In other words, we don’t know that X won’t work for P.

We also can’t conclude X (for P):

Why? We can’t conclude X for P because we don’t have a refutation of Y. There is an outstanding, unrefuted rival to X for P.

This makes sense to me.

I’m having difficulty understanding this.

It’s a criticism of X about the goal ‘conclude X (for P)’, but it’s not a criticism of X for the goal P?

Maybe I’ll need to learn more about what you mentioned with the Yes/No article to get it?


They look cool and I’d definitely like to go through them at some point. Combining it with tutoring sounds like a good way to make the most of them too. I’ll consider what I can afford in the near future.

Some daily writing:

A position that one could take on generalisations is that they can only arise by a process of induction. In this view, you can only form a theory like ‘all men are mortal’ by this process. The (alleged) process is observing repeated instances of men being mortal and then inducing the generalisation from these instances. In this view you can only have access to such a generalisation if you have observed these instances and performed this process.

But we need no such process to form such a theory or generalisation. We (or a mechanical device like a computer) can simply write down any sentence of the form ‘all X are Y’. We could then consider what it means and if it’s perhaps true. Even if induction were a real process, generalisations or universal concepts wouldn’t be evidence of it. You can just make them up.
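
For example, a trivial program can churn out candidate generalisations without observing a single instance (a hypothetical sketch; the word lists are arbitrary):

```python
# A made-up sketch: mechanically generating sentences of the form
# "all X are Y" with no observation or induction involved.
import itertools

subjects = ["men", "ravens", "swans"]
predicates = ["mortal", "black", "white"]

for x, y in itertools.product(subjects, predicates):
    print(f"All {x} are {y}.")  # a candidate generalisation we could now consider
```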


You can conclude “at least one of X or Y is false” (it can also be a background assumption that’s wrong – you could be incorrect that they contradict).

You cannot conclude “X definitely won’t work”.

You should not conclude “I know X will work”. It’s an error to decide that you know X.

X is still under consideration, not refuted nor ready to be accepted. Reaching X as your conclusion, right now, is an error.

Does this make sense?

Whether X is true or false is one issue. Whether you should believe, accept, conclude or use X, now, is a different matter which can have a different evaluation.

Yes I understand. The truth has no contradictions, so we can conclude that something somewhere is false (whether X or Y or another assumption).

Right, because we have no refutation of X. No explanation of why X fails to solve P.

Right, because we have no refutation of Y. X has unrefuted rivals so it’s irrational to conclude X.

Yes this makes sense. We haven’t refuted X, nor decided against Y. So as far as we know X may solve P but we can’t conclude that it does.

I think I’ve got it now. There are two issues:

  1. Do X and Y in fact solve P? If you have no refutations of either, they both appear to solve P and you have no reason to think they won’t.
  2. Can we conclude (decide between) X or Y? Only if you possess a refutation of one of them.

So our criticism of X, that it doesn’t help us decide against Y, is a refutation for the goal (2), but not for the goal (1). As far as we know X still solves P. But Y may also still solve P. So we can’t conclude X.

ok good i think we made some progress. carry on


Okay I am going to try to connect this to the problem I started with.

CF says that instead of picking winners and losers in the conflict, we should come up with a new, win/win solution that addresses all the points raised by the conflicting ideas.

Why come up with new ideas, instead of picking winners/losers among existing ideas? Picking an idea means concluding it. To rationally conclude an idea, we need a refutation of its rivals that doesn’t refute it. If an idea doesn’t contain a refutation of its rivals already, then it alone can’t help us conclude it. So without new ideas, we can’t make rational conclusions about who wins/loses the conflict. In other words, we can’t resolve the conflict without creating the new ideas that resolve it. If we came up with some new ideas, e.g. refutations, and modified the old ideas with them, then we could have a solution that resolves the conflict.

So it seems that CF’s recommendation (that we shouldn’t pick winners and losers) is due to the idea that you can’t actually resolve conflicts by picking winners/losers, only by creating new solutions? Is that right?
