Note: I wrote this on the weekend but am not that confident in it. I intended to review and re-edit it but I think that’s just been keeping me from posting it. I would change some stuff if I edited it now, but that’d partially be due to Alan and Elliot’s posts from a few hours ago, so I’ll leave it how it is.
Besides the whole credence function idea, No Drop seems reasonable. I can’t think of a counterexample. It reminds me of the conjunction fallacy. If the credence function is based on MFDMM and only outputs 0 and 1, then it seems to hold.
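For the 0/1 case, here’s a quick sanity check. This is my own toy construction – I’m guessing at what a binary credence function would look like, so treat it as a sketch, not as how MFDMM actually works: model credence as 1 iff the proposition is true under a truth assignment, take A = P∧Q and B = Q (so A entails B), and No Drop holds under every assignment.

```python
from itertools import product

# Sketch: a binary (0/1) credence function. My own toy construction,
# not necessarily how an MFDMM-based credence function works.
def credence(prop, assignment):
    """1 iff the proposition is true under the assignment, else 0."""
    return 1 if prop(assignment) else 0

A = lambda v: v["p"] and v["q"]  # A: P and Q
B = lambda v: v["q"]             # B: Q (so A entails B)

# No Drop (c(A) <= c(B)) holds under every truth assignment.
for p, q in product([True, False], repeat=2):
    v = {"p": p, "q": q}
    assert credence(A, v) <= credence(B, v)
```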
That said, it doesn’t really sit right with me. Not exactly sure why.
I did have to search for entail and entailment to figure out what those meant exactly. It means something like: if A is true, then B is necessarily true. I think something like this would be fine, too: if A entails B, then A&B is equivalent to A. (A&B meaning both A and B – in some cases this is equiv to boolean AND.) IDK if RP ever precisely defines it but it seems to be an established idea.
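That claimed equivalence can be checked exhaustively by modeling propositions as truth tables over two atoms – 4 rows each, so only 16×16 pairs to enumerate. A quick sketch (my own check, nothing from RP’s post):

```python
from itertools import product

# Model a proposition over two atoms as a 4-entry truth table.
tables = list(product([True, False], repeat=4))  # all 16 truth tables

def entails(A, B):
    """A entails B: wherever A is true, B is true."""
    return all(b for a, b in zip(A, B) if a)

# Claim: if A entails B, then A&B is equivalent to A.
for A in tables:
    for B in tables:
        A_and_B = tuple(a and b for a, b in zip(A, B))
        if entails(A, B):
            assert A_and_B == A  # A&B collapses to A
```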
I read a bit of the source. RP also says that c maps to [0,1], which seems reasonable enough but has some interesting consequences. (Also, IDK if c: {…} → [0,1] is actually reasonable or not; why not c: {…} → [-1,1] instead? RP says “convention” but that doesn’t seem like a good enough reason when dealing with foundational ideas.)
For the sake of discussion, RP’s A and B examples took the form:
A: Sonya is an X and a Y
B: Sonya is a Y
X and Y were chosen so that they’re independent / unrelated.
Say we assume that new supporting evidence for a proposition always increases its credence (and inversely: contradicting evidence decreases it) – so if there’s new evidence that Sonya is an X, then c(A) should increase, but c(B) should stay the same, since the evidence says nothing about Y. In that case, it never makes sense to have c(B)=0 by default, b/c you end up with a contradiction: A entails B, so No Drop forces c(A) ≤ c(B) = 0, which means c(A) is stuck at 0 and can’t increase when the evidence arrives. This means that your degree of belief in anything and everything should always be >0 – even if you have absolutely no evidence either way.
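The contradiction can be walked through mechanically. The specific numbers here (a 0.2 evidence bump) are made up for illustration – only the inequalities matter:

```python
# Sketch of the c(B) = 0 contradiction. The numbers are made up;
# only the inequalities matter.

def no_drop_ok(c_A, c_B):
    """No Drop: if A entails B, rationality requires c(A) <= c(B)."""
    return c_A <= c_B

# A: "Sonya is an X and a Y" entails B: "Sonya is a Y".
c_B = 0.0  # assume zero credence in B by default
c_A = 0.0  # No Drop then forces c(A) <= c(B), i.e. c(A) = 0

assert no_drop_ok(c_A, c_B)  # consistent so far

# New evidence that Sonya is an X arrives. By assumption it should
# increase c(A), but it says nothing about Y, so c(B) stays put.
c_A += 0.2

assert not no_drop_ok(c_A, c_B)  # 0.2 <= 0.0 fails: contradiction
```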
If you don’t want to have some degree of belief in everything by default, then you need to use the idea that supporting evidence might not increase credence, and you end up with something a bit like MFDMM. I suspect that maybe MFDMM is the only solution in this case (basically: fractional credences don’t make sense), but IDK how to prove that.
(You could also say that evidence supporting A increases the credence of A and B regardless of whether that evidence has anything to do with B, but that seems obviously broken and gets into raven-paradox territory.)
I feel like there is maybe a way to force the credence function to output 0 for everything (like some construct based on an infinite number of mutually exclusive propositions), but IDK.
One thing that seems notable is that whether A entails B isn’t itself accounted for; it’s just taken as fact. This isn’t an issue for RP’s specific example, but I wonder how that sort of idea would be integrated in general.
Hmm, maybe this sort of thing is related to why it doesn’t sit right with me:
Using No Drop:
H: (No Drop) if A entails B, then rationality entails c(A) ≤ c(B)
I: rationality
G: c(A) ≤ c(B) (i.e., No Drop’s conclusion)
So H says I entails G, and applying No Drop to that entailment gives c(I) ≤ c(G)?
This seems to be saying something like: we should be no more confident in our methods of thinking than in the conclusions those methods produce. This doesn’t seem reasonable b/c it doesn’t take into account error checking (e.g., using redundant methods to check for the same answer) or finding a better competing theory that mostly overlaps.
“better competing theory” meaning one that mostly agrees but solves new cases too, like GR vs Newtonian Gravity.