Uncertainty and Binary Epistemology [CF Article]

The same problem does not occur when applying uncertainties or probabilities to something other than ideas. Suppose you think job applicants should be assigned probabilities saying how confident you are that they’ll perform well at the job. (I think a critical discussion using decisive arguments would be a better method.) So you decide Joe is 60% likely to be a good hire, or in other words you have 60% certainty that you should hire Joe. There’s no regress here. The idea of applying certainties to job candidates doesn’t imply that you should also apply certainties to your evaluations of job candidates. It’s only when your policy is to apply certainties to all ideas that each new judgment requires a new judgment which requires a new judgment and so on – because certainties are themselves ideas.

I’m kind of confused about why there wouldn’t be a regress here.

My manager asks how certain I am that Joe will be a good hire. I have the idea that there is a 60% chance that Joe will be a good hire. But I also know that the method I used to come up with this idea isn’t perfect. I think that the method I used will only be reliable 90% of the time.

So I should tell my manager there is a 54% (0.6 * 0.9) chance that Joe will be a good hire. Even that wouldn’t be accurate though. If my method is reliable for this particular case then there will be a 60% chance that Joe is a good hire. But if my method isn’t reliable for this case, there will be an unknown percent chance (x) that Joe is a good hire. So I could tell my manager that there is a (0.6 * 0.9 + x * 0.1) chance that Joe will be a good hire.
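To make that arithmetic concrete, here’s a rough sketch of what I mean (my own illustration, with the unreliable case written as a function of the unknown x):

```python
# Sketch of the arithmetic above (illustration only).
p_good = 0.6       # chance Joe is a good hire, assuming my method works for this case
p_reliable = 0.9   # how often I think my method works

print(round(p_good * p_reliable, 2))  # 0.54, the simple answer

# The refined version leaves the unreliable cases as an unknown chance x in [0, 1]:
def chance_good_hire(x):
    return p_good * p_reliable + x * (1 - p_reliable)

print(round(chance_good_hire(0.5), 2))  # e.g. 0.59, if x happened to be 0.5
```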

Furthermore, my manager could ask me how confident I am that my method is reliable 90% of the time. And I might be 95% certain that my method is reliable 90% of the time. If I want to provide an accurate confidence level to my boss that Joe will be a good hire, how can I do it without taking these additional (90%, 95%, etc.) confidence levels into account?
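Here’s a small sketch of what happens if you keep multiplying in each new confidence level, the way the regress seems to demand (the 0.98 and 0.99 are made-up further levels, just to show the pattern):

```python
# Illustration: multiplying in each new meta-confidence level, as the regress suggests.
# Every added level changes the answer, so it never settles unless you
# arbitrarily stop somewhere.
levels = [0.6, 0.9, 0.95, 0.98, 0.99]

running = 1.0
for level in levels:
    running *= level
    print(round(running, 4))
# prints 0.6, 0.54, 0.513, 0.5027, 0.4977
```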

Like if I think my method is reliable 99.9% of the time, that would affect my answer compared to thinking it was reliable only 50% of the time. Or maybe I have two different methods for assessing my confidence level in a candidate. One is reliable 1% of the time, and the other is reliable 99% of the time. It wouldn’t make sense to present the results of those assessments to my boss without telling them that one method is significantly more reliable than the other, because the reliability of my method affects how likely it is that I correctly predicted whether Joe will be a good hire.

For example, the method with 99% reliability determines Joe has a 10% chance of being a good hire, while the method with 1% reliability determines that Joe has a 100% chance of being a good hire. If I simply report the results of my two methods to my boss without reporting the certainty level associated with each evaluation, my boss might think it’s a good idea to hire Joe.
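One hypothetical way to quantify that (my own sketch, not anything from the article): treat the cases where a method is unreliable as “could turn out anywhere from 0% to 100%” and look at the bounds each method actually gives you.

```python
# Hypothetical reliability-adjusted bounds for each method's result.
def bounds(result, reliability):
    lower = reliability * result                      # unreliable cases all turn out badly
    upper = reliability * result + (1 - reliability)  # unreliable cases all turn out well
    return round(lower, 3), round(upper, 3)

print(bounds(0.10, 0.99))  # (0.099, 0.109): the 99%-reliable method is informative
print(bounds(1.00, 0.01))  # (0.01, 1.0): the 1%-reliable "100%" tells you almost nothing
```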

If Joe ends up getting fired then it won’t be helpful to tell my boss that his policy was only to report the certainty associated with the job candidate, not the certainty associated with the evaluation method I used. I feel like my boss could say that the certainty level associated with a candidate is dependent on the certainty level associated with the evaluation method I used to assess the candidate.

Of course I don’t have to report it. But I feel my answer would be more accurate if I included the certainty level of my evaluation method. And also the certainty level for that certainty level, and so on. The more levels of the regress I work through, the more accurate my answer will be (assuming I could accurately determine those certainty levels).

So when my boss asks for a certainty level for a job candidate, it implies to me that my boss wants the most accurate and reliable answer. And I don’t see how I can provide that without doing a regress through all of the different certainty levels for my evaluation, etc. Obviously I can’t do an infinite regress, but maybe 10 or 100 levels of regression would be sufficient (and more accurate than just reporting the result of the evaluation).

I guess I don’t see a situation in the real world where you would apply a certainty level to an idea but not take into account the certainty level for the method you used to come up with that idea. The potential for infinite regress is still there; ignoring the problem doesn’t make it go away, it’s just a problem you decided to arbitrarily ignore.

If you apply a certainty level to an idea there will always be an infinite regress, whether your policy acknowledges that regress or not. Does that make sense?

(This is an example of a post I wouldn’t have normally made because it seems dumb to me. But if I want to contribute more I’ll probably make a lot of dumb posts that can be ignored haha)

Your manager just asked for your certainty about an idea. He asked you to apply probability/uncertainty to an idea, which causes a regress. Using probabilities in a meta way leads to regresses like this.

Attaching a probability to an outcome, like whether you’re glad you hired Joe a year from today (or any criteria for “good hire” that you want), doesn’t trigger a regress any more than saying “I have a 1/6 chance to roll a 6 on this die” does. But if you ask how confident you are about that 1/6 roll probability, then you’re asking about confidence in ideas (rather than about events in physical reality, like the positions of some objects after some motion) and will run into trouble again.

English wordings of this stuff can be awkward.

Physical events, like Joe satisfying a client or dice rolls, have probabilities. (One can get into determinism but never mind.)

You can say: I considered a bunch of scenarios, weighted them by how likely they are, and determined that Joe meets some “good hire” criteria in 60% of them (taking into account weighting).
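For instance, a minimal sketch of that kind of weighted-scenario forecast (the scenarios and weights are made up):

```python
# Made-up scenarios, each weighted by how likely I think it is,
# plus whether Joe meets the "good hire" criteria in that scenario.
scenarios = [
    (0.30, True),   # normal workload, Joe does fine
    (0.30, True),   # team grows, Joe mentors juniors well
    (0.25, False),  # project pivots to skills Joe lacks
    (0.15, False),  # key client is difficult, Joe struggles
]

total = sum(weight for weight, _ in scenarios)
good = sum(weight for weight, is_good in scenarios if is_good)
print(round(good / total, 2))  # 0.6
```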

If you’re trying to talk about your own mental states, you’re doing a different thing than trying to forecast real world events.

If you’re 90% confident of your forecast rather than 100%, that doesn’t mean Joe is less likely to be a good hire. You shouldn’t multiply Joe’s score by that 0.9. It’s not about Joe. And if you become more or less uncertain, that doesn’t change anything about Joe. Talking about your mental states and talking about what will happen with Joe are different issues.

What you could say, while focusing on events in reality, is:

I forecasted some scenarios. I think they account for 90% of possible scenarios. In them, Joe is a good hire 60% of the time. In the other 10% of scenarios, I have no opinion. That means Joe will be a good hire between 54% and 64% of the time overall. My forecast puts upper and lower limits on his outcomes. Since I have no knowledge about the other 10%, I’ll treat him as potentially being a good hire in anywhere from 0% to 100% of those cases.
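A quick check of those numbers (my own sketch):

```python
# The 54%-64% bounds: 90% of scenarios are covered, Joe is a good hire in 60% of those,
# and the uncovered 10% could go anywhere from all-bad to all-good.
covered = 0.9
p_good_in_covered = 0.6

lower = covered * p_good_in_covered   # uncovered scenarios all go badly -> 0.54
upper = lower + (1 - covered)         # uncovered scenarios all go well  -> 0.64
print(round(lower, 2), round(upper, 2))  # 0.54 0.64
```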

This kind of forecasting is imperfect but non-meta. As long as it focuses on making statements about events in physical reality, not on certainties of ideas, it doesn’t run into a regress.