ChristianKl on LW wrote:
Bayesianism has no rules for what someone's priors should be. It has rules about how to progress from a state of having priors.
FYI this is problematic because whenever decisive arguments are used (e.g. a piece of evidence that contradicts some ideas and is compatible with others), the probabilities/credences of all the non-refuted ideas stay in the same ratios. In other words, you observe a black cat, you rule out some ideas like “there are no cats” and “all cats are white”, and the probability from those refuted ideas gets distributed to the non-refuted ideas in proportion to the probability they already have.
In other words, if you only use decisive arguments, all your non-refuted ideas are evaluated entirely based on your priors. The priors fully determine what wins.
It’s only indecisive arguments which can favor some non-refuted ideas over others. Decisive arguments can’t do that. In Bayesianism. As far as I know.
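To make the ratio-preservation point concrete, here's a toy sketch (my own code and made-up numbers, not anything from a Bayesian text) of what ruling ideas out and renormalizing does:

```python
# Toy illustration: a decisive update (evidence that flatly rules some ideas out)
# keeps the probability ratios among the surviving ideas unchanged.

def decisive_update(priors, refuted):
    """Zero out the refuted ideas and renormalize the survivors."""
    survivors = {h: p for h, p in priors.items() if h not in refuted}
    total = sum(survivors.values())
    return {h: p / total for h, p in survivors.items()}

priors = {
    "there are no cats": 0.2,
    "all cats are white": 0.3,
    "theory A": 0.1,
    "theory B": 0.4,
}

# Observing a black cat refutes the first two ideas.
posterior = decisive_update(priors, refuted={"there are no cats", "all cats are white"})
print(posterior)  # {'theory A': 0.2, 'theory B': 0.8}

# A:B was 1:4 before (0.1 vs 0.4) and is still 1:4 after (0.2 vs 0.8),
# so the ranking of the survivors is still entirely set by the priors.
```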
And IMO your priors should be essentially random, because otherwise you're presupposing some other ideas and intelligence, prior to Bayesianism, to guide you to intelligent priors, yet priors are supposed to come before intelligent thinking. In practice, Bayesians always seem to use priors in line with common sense and science, not random priors.
I think the Bayesian answer is that most or all arguments are indecisive, e.g. seeing a cat doesn't decisively refute “there are no cats”; it just lowers its probability a lot.
To actually get useful updates that significantly change the ratios your priors established between ideas, so that the priors stop making much difference, I think you need not just indecisive arguments but indecisive theories which make probabilistic claims. If one theory says “there is an 80% chance I'll observe a cat today” and another says “there is a 20% chance I'll observe a cat today”, then observing or not observing a cat today can favor one theory over the other, and that can influence your conclusions more than your priors do.
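Here's the same kind of toy sketch for an indecisive update, using the 80%/20% cat numbers (again, my own code and made-up priors):

```python
# Toy Bayes update with indecisive theories that assign different probabilities
# to the same observation (the 80%/20% cat numbers from above).

def bayes_update(priors, likelihoods):
    """posterior(h) is proportional to prior(h) * P(evidence | h)"""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

priors = {"80% cat theory": 0.5, "20% cat theory": 0.5}
likelihoods_given_cat_seen = {"80% cat theory": 0.8, "20% cat theory": 0.2}

posterior = bayes_update(priors, likelihoods_given_cat_seen)
print(posterior)  # {'80% cat theory': 0.8, '20% cat theory': 0.2}

# The ratio moved from 1:1 to 4:1, so this update did real work
# beyond what the priors already said.
```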
My take is that most of our ideas in most fields aren't like that, aren't that statistical, so Bayesianism does a poor job of learning its way to the point where bad priors have little impact. Also, stronger claims like “there is a 100% chance I'll observe a cat today” tend to dominate over the hedged claims, because a theory that assigns a higher probability to the evidence you actually observe gets a bigger boost from each update, and 100% is the maximum. Some of the stronger claims lose (they get refuted when the evidence goes against them), but the ones that match all the evidence dominate, so the focus ends up on the really strong claims, which are exactly the ones whose relative probabilities are set most by priors.
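To illustrate why the boldest surviving claims end up dominating, here's one more toy run (made-up theories and numbers) where a cat is observed ten days in a row:

```python
# Toy illustration: after repeated cat sightings, the boldest surviving theories
# ("100% chance of a cat each day") pull ahead of the hedged ones, while any two
# theories that both assign 100% to every observation keep their prior ratio forever.

def bayes_update(priors, likelihoods):
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# The per-day probability each (made-up) theory assigns to "I observe a cat today".
daily_cat_chance = {
    "cat 100% (version 1)": 1.0,
    "cat 100% (version 2)": 1.0,
    "cat 80%": 0.8,
    "cat 20%": 0.2,
}
probs = {h: 0.25 for h in daily_cat_chance}  # equal priors

for day in range(10):  # a cat is observed ten days in a row
    probs = bayes_update(probs, daily_cat_chance)

print({h: round(p, 3) for h, p in probs.items()})
# The two 100% theories end up with roughly 0.47 each, "cat 80%" around 0.05,
# "cat 20%" near 0. The 100% theories still split their share 1:1, which is
# exactly their prior ratio.
```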
If I’m missing something, I haven’t been able to find out what it is over the years.
I haven’t seen any essays/analyses about effective updates (ones that change the probability ratios of the non-refuted theories, e.g. letting a theory with a lower prior probability pass one with a higher prior, while neither theory goes near 0% or 100%) vs. ineffective updates (where the probability ratios of the non-refuted theories stay about the same as they were in your priors).
“Ineffective” updates are always effective in some sense or for something. If you rule out theory C, the update is effective regarding C. But if A and B are both non-refuted, it may be ineffective relative to A and B. The thing I'm interested in is updates which are effective regarding theories that both start (before the update) and end (after the update) at probabilities that are not approximately 0 or 1. So e.g. A is at 0.2 and B is at 0.3, but after the update, A is at 0.4 and B is at 0.35. That would be an effective update because the probability ratio of A and B changed and neither one is at approximately 0 or 1. A doesn't have to pass B for the update to be effective: if A went to 0.25 while B went to 0.31, that'd be effective too, and so would one going up while the other goes down.
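Here's that definition of an effective update written out as a toy checker (my own encoding of it, with made-up tolerances and the numbers from above):

```python
# Toy checker for the "effective update" notion above: the update changes the
# ratio between two theories while neither theory is approximately 0 or 1
# before or after.

def is_effective(prior_a, prior_b, post_a, post_b, eps=0.01):
    def near_extreme(p):
        return p < eps or p > 1 - eps
    if any(near_extreme(p) for p in (prior_a, prior_b, post_a, post_b)):
        return False  # one theory is (nearly) refuted or (nearly) certain
    return abs(post_a / post_b - prior_a / prior_b) > 1e-9  # did the ratio change?

print(is_effective(0.2, 0.3, 0.4, 0.35))   # True: the A:B ratio moved from ~0.67 to ~1.14
print(is_effective(0.2, 0.3, 0.25, 0.31))  # True: a smaller move, but the ratio still changed
print(is_effective(0.1, 0.4, 0.2, 0.8))    # False: both doubled, so the ratio is unchanged
                                           # (that's the decisive update from the first sketch)
```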