Academic Epistemology

Topic for discussion of academic papers in epistemology.

I’m sharing, for comments, a paragraph from Radical epistemology, structural explanations, and epistemic weaponry by Richard Pettigrew (PDF). The context is trying to understand what justifies beliefs as knowledge.

The debate between these two camps is often set up with internalism as an extreme position that says that only internal states of a subject are relevant to whether their beliefs are justified, while externalism is just the negation of internalism, covering any view that takes any external state to be in any way relevant. But I think it’s more illuminating to array the various positions across a landscape that has extreme internalism at one edge of the map, extreme externalism at the opposite edge, and all other positions spread out in between them. Where you lie in this landscape is determined by the extent to which the conditions that suffice for and are required for justification concern matters internal to the subject who has the belief and the extent to which they concern matters external to them. Perhaps the most extreme internalist position says that whether or not you are justified in believing something at a given time depends only on those of your mental states that occur at that time and that you actually access at that time. And the most extremely externalist position says that a belief is justified just in case it is true. In between, we have a large array of positions. We have what Conee and Feldman (2001) call mentalism, which allows mental states that are not only not actually accessed but also not accessible to help determine whether a belief is justified. And we have process and indicator reliabilism, both of which appeal to an internal state—the belief-forming process, or the grounds of the belief—but consider mainly external features of those internal states—the proportion of the beliefs that the process produces that are true, or the objective probability that the belief is true given that the subject has that ground for it (Goldman, 1979; Alston, 1988). And there are many more.

In The Legend of the Justified True Belief Analysis (2015), Julien Dutant writes:

until 1950 virtually all Western philosophers were Classical Infallibilists


The New Story’s bold hypothesis is that before 1950, virtually every Western philosopher was a Classical Infallibilist. The best I can aim for here is to show that it deserves serious consideration.

He thinks, basically, that they had an infallibilist view of knowledge, which led them either to be dogmatists who thought we had a bunch of infallible knowledge, or else skeptics who thought we had little to no knowledge (since the requirement for anything to count as knowledge was so high).

This is roughly what Popper thought and said about philosophers. This seems to be independent convergence from someone ignorant of Popper. Dutant briefly mentions Popper to call him a “Probabilistic Sceptic”, and he doesn’t credit Popper for already making similar claims to what Dutant’s paper makes.

I wrote some general comments earlier today (and a few additions/edits now). I’m not that confident about them. Also, I think I’m being a bit generous for some of the points, like I’m trying too hard to map what was written to ideas I already know to make sense of it.

I’m a bit hesitant to share b/c I think I probably don’t have the skill to make any particularly useful comments, but I’ve already spent the time reading the quote and writing the following, so I don’t really see a downside to sharing.


  • This was hard to read (like, I needed to go slow) → difficult to understand and criticize → language seems sorta gate-keepy (i.e., if you don’t understand/follow then you’re not good enough to read it); the gate-keeping thing might be my biases tho
  • I’m not convinced that Pettigrew understands what he’s writing about.
  • the spectrum Pettigrew sets up seems anti-yes/no
    • makes it harder to argue that a class of ideas is wrong
    • sorta implies there’s some point on the spectrum that’s correct (which is infallibilist)
    • incompatible with multi-factor decision making math – like the position of each idea on the spectrum is based on averaging how much it’s internalist and how much it’s externalist
    • I don’t think one could put CF on there (b/c the spectrum is incompatible w/ both internal/external stuff being 100% necessary)[1]
    • the “extremely externalist position” is infallibilist if it means what I think it does. (see near end of this post)
  • I checked the PDF and was surprised that the quote is the second paragraph in the article! (No explanation of key terms, even after this paragraph, either.)
    • I guess the intended audience is familiar with what “internal states of a subject” specifically means (or they think they are, at least), but those terms are never really explained / defined. how does the audience know that the author thinks the same thing they do?
  • Lots of fancy words/phrasing (note: there are fancier words in the paragraphs before and after, like “putative” and “adduce”)
  • The title is a bit wtf.
    • the title reminds me a bit of the Jan Hendrik Schön documentary (particularly about journals watching catchy titles)
    • I searched for ‘radical’ to find other references, and the 2nd hit (1st is the title) is a sentence with 77 words! (and seems like nonsense)
    • the 3rd hit is very wtf. quoted in footnote[2]
    • after reading further into the paper via the above points, the original quoted paragraph was comparatively pretty clear.

My main response to reading what I have is that it’d be a waste of my time to focus more on that article. Like, epistemology is interesting to me, but the article seems like trash.

Also, I wouldn’t be surprised if the article was written as like an attempt to contribute something to the field so that it’d be published. There is obviously an attempt to contribute something. I might be biased on this point b/c I watched how to write a philosophy article and get it published earlier today.

And the most extremely externalist position says that a belief is justified just in case it is true.

Did Pettigrew omit a “the”? Like: “just in [the] case it is true”.
I guess that he means something like ‘a belief is justified only if it is true’, but what’s written seems to be more like ‘it’s okay to treat a belief as justified just in case it’s true’. A dictionary says:

just in case

in the event that (something happens). All right. I’ll take it just in case. I’ll take along some aspirin, just in case.

Replacing “just in case” with “in the event that” in the original quote reads okay, but my intuition is that ‘just in case X’ means roughly ‘on the off chance that X’.

  1. This is based on my understanding of what internal and external mean – my guess is that our ideas, understanding, past experiences, etc are internal, and criticism from the world (like testing a theory with an experiment) is external. if testing a theory with an experiment is internal, tho (b/c it’s based on our ideas/interpretation/etc) then it seems like everything would be internal which sorta defeats the point. ↩︎

  2. Now, epistemic goods—such as knowledge, true belief, understanding, wisdom, and evidence—are unequally and unfairly distributed within our society. This is due partly to the inequities of our education systems, the prevalence of hermeneutic epistemic injustices, and unequal access to shared evidence, public debate, and the tools for individual theorising. But it is also due to the effects of other, more local epistemic weapons. A crucial part of a radical epistemological project is therefore to develop effective defences against those weapons.


Re inequities, injustices and inequalities, here are some of Pettigrew’s references:

Collins, P. H. (2000). Black feminist thought: Knowledge, consciousness, and the politics of empowerment, 2nd edn. Routledge.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.

Frye, M. (1983). Oppression. In The politics of reality: Essays in feminist theory. The Crossing Press.

Mills, C. W. (2007). White ignorance. In S. Sullivan, & N. Tuana (Eds.) Race and epistemologies of ignorance (pp. 11–38). State University of New York Press.

I haven’t read the whole paper and don’t know why this stuff is in an epistemology paper.

FYI Pettigrew finished his PhD in 2008, got hired, and has a lot of papers and some books. He seems to be a successful academic and be past the early career stages.

Yeah. After your post about citations I searched for him to make sure he wasn’t a sociologist or something.

(from his site home page)

I’m a Professor in the Department of Philosophy at the University of Bristol, UK.

The other thing that occurs to me is the “publish or die” culture in academia.

I worked at the University of Sydney from 2010 to 2015 (not in academia) and during that time a “3 papers per year” policy was introduced – if academic staff didn’t meet that criterion their job was on the chopping block. One example I remember (via a friend studying literature) was an elderly professor of literature who’d been working on a tome about Proust, I think – he was one of the ones on a list to axe. I think he kept his job (due to campaigning from students/colleagues) at the time but I can’t imagine that would have helped him much in the long run.
There were campus-wide protests at the time, for that and other reasons. (These protests – I participated in the first half of the first one – were a turning point for me becoming disillusioned with hard-left politics, which I was still somewhat sympathetic to at the time. I think I might have mentioned it during one of the tutoring max videos.)

In hindsight, I feel lucky that I became disillusioned with studying at university. My favorite teacher in high school (maths) had a PhD, and I’d discussed my then-goal with him (semiconductor research) and he was supportive. Another teacher, whom I respected, once told me that she could “see [me] in the halls of academia”. (I felt good and validated by those comments at the time.)

Something I remembered about the PDF, the second sentence (after the abstract) is:

While it might well be true, as Dutant (2015) claims, that few philosophers ever actually advocated the analysis of knowledge as justified true belief, […]

I went to a ‘gifted and talented’ camp in grade 9 and did the philosophy stream (the others were maths, science, and English). A professor (don’t remember who) took us through a 4-5 day course (where I learned about boolean algebra), but he specifically mentioned “justified true belief” and no other epistemology. So this quote rang false for me.

Yeah I think you’re right. In that case, it also should have a “that”: “just in [the] case [that] it is true”.

  • This was hard to read (like, I needed to go slow) → difficult to understand and criticize → language seems sorta gate-keepy (i.e., if you don’t understand/follow then you’re not good enough to read it); the gate-keeping thing might be my biases tho

Yeah, academic writing style is one of the barriers to entry. I have a bunch of automatizations for reading it, but it’s still harder to read than regular writing. I don’t think any academics get it fully automatized. It being a problem for people who are already through the gate is perhaps worse than the gatekeeping aspect.

the spectrum Pettigrew sets up seems anti-yes/no

I think it’s important to slow down. There’s a key issue here. What exactly is the spectrum? Define it.

spectrum = map/landscape with 2 edges defined (each extreme), say left/right.
a spectrum is 1D, tho, but map/landscape must have up/down, too. up/down isn’t defined, and only left/right is defined (the extent to which internal/external states matter).
map/landscape implies some continuity (or at least lots of possible positions + some ordering).

left edge: the extent to which the conditions that suffice for and are required for justification concern matters internal to the subject who has the belief
right edge: the extent to which they concern matters external to them

the extent to which the conditions that suffice for and are required for justification concern matters [____] to the subject who has the belief

leftways: extent of internal factors
rightways: extent of external factors

how much (conditions for JB) depend on (factors) that are (internal/external) states of (person).

the conditions for JB are defined by each theory/position/idea.

so if each factor (of each condition) is either ‘internal’ or ‘external’, then an idea’s place on the spectrum could be something like (if between 0 and 1):

number of external factors / total number of factors

the spectrum is particularly about “the conditions that suffice for and are required for justification”, so it’s an axis doing some overall measurement of each factor of each condition, and each theory should go somewhere on it.

there’s nothing left of 0 or right of 1.
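Under this reading, a theory’s spectrum position can be sketched as a simple computation. This is my reconstruction of the factor-counting idea above, not anything from Pettigrew’s paper; the example theories and the internal/external labels are invented for illustration:

```python
# Sketch of the spectrum reading above: each theory's conditions for
# justified belief are broken into factors, each labelled "internal" or
# "external"; the theory's position is the fraction of external factors
# (0 = extreme internalism, 1 = extreme externalism). Theories and labels
# below are illustrative assumptions, not taken from Pettigrew's paper.

def spectrum_position(factors):
    """Return the fraction of factors labelled 'external', in [0, 1]."""
    if not factors:
        raise ValueError("a theory needs at least one factor")
    external = sum(1 for label in factors if label == "external")
    return external / len(factors)

# Extreme internalism: every factor is an internal mental state.
extreme_internalism = ["internal", "internal", "internal"]
# Extreme externalism: the single factor is truth, taken to be external.
extreme_externalism = ["external"]
# A mixed view like process reliabilism: an internal process plus an
# external feature of it (its truth-ratio).
mixed = ["internal", "external"]

print(spectrum_position(extreme_internalism))  # 0.0
print(spectrum_position(extreme_externalism))  # 1.0
print(spectrum_position(mixed))                # 0.5
```

Nothing here forces the labels to be exhaustive or uncontroversial – which is part of the problem discussed below.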

note: there are some assumptions / simplifications / shortcuts I made, like ‘internal/external mental states’ roughly meaning the same as ‘matters internal/external to the subject who has the belief’.

Also, I think there’s an assumption in there that all ideas on the spectrum/map always have the same ‘extent’ for internal/external stuff. Like what if a theory said that belief about religion was only dependent on internal states, and belief about science was only dependent on external states? I don’t think Pettigrew has a place for that on his map (except to average it out and stick it in the middle or something).
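A minimal sketch of that worry, assuming the same factor-counting reading as above (the theory and its labels are invented for illustration): a theory whose internal/external mix varies by domain gets a set of positions, not a single point on the map.

```python
# Hypothetical theory whose conditions for justified belief differ by
# domain: beliefs about religion depend only on internal factors, beliefs
# about science only on external ones. (Invented labels, for illustration.)
domain_factors = {
    "religion": ["internal", "internal"],
    "science": ["external", "external"],
}

def positions_by_domain(theory):
    """Map each domain to its fraction of external factors."""
    return {
        domain: sum(f == "external" for f in factors) / len(factors)
        for domain, factors in theory.items()
    }

print(positions_by_domain(domain_factors))
# {'religion': 0.0, 'science': 1.0} -- two distinct points; averaging them
# to 0.5 would conflate this theory with views that mix factors everywhere.
```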

i think you’re saying the spectrum is defined by the percent of factors which are external, with each factor labelled as either internal or external.

that was my first guess.

suppose that’s correct. assume that’s what he means. then use that to evaluate the text that comes next:

Perhaps the most extreme internalist position says that whether or not you are justified in believing something at a given time depends only on those of your mental states that occur at that time and that you actually access at that time. And the most extremely externalist position says that a belief is justified just in case it is true.

The text that comes next gives 2 points: corresponding to 0 and 1.

the first point (which should be 0) is

whether or not you are justified in believing something at a given time depends only on those of your mental states that occur at that time and that you actually access at that time

This seems mostly okay – there are a bunch of factors and they’re all internal. (Though whether the person actually accesses them or not won’t change the position on the spectrum)

The second point (1) is

And the most extremely externalist position says that a belief is justified just in case it is true.

assuming “just in case it is true” means ‘only in the case that it’s true’:

For this to be the (1) point, it must be that all factors are external. which implies that whether something is true or not is always external. It seems like there’s a problem here WRT ideas about anything internal, like “I’m happy” can be true but also depends on internal states. That seems problematic but I’m not sure it’s the key issue you’re talking about.

the extreme externalism end of the spectrum we’re considering is 100% of justification is external and 0% internal. but instead of saying that, Pettigrew says

And the most extremely externalist position says that a belief is justified just in case it is true.

why the difference?

That phrase “just in case it is true” seemed problematic / suspicious but I didn’t know what to make of it (besides that it’s way vaguer than the extreme internal end).

My first guess is b/c Pettigrew has some bias or preconceived idea about it. One option is that he already thinks it’s obviously right or wrong (I’m not sure which). Maybe he sees it as circular (or otherwise obviously flawed) so is presenting it like that.

Maybe he doesn’t understand it, or can’t give a better explanation (or doesn’t want to). Thinking about Author Goals and Errors – if Pettigrew is being tribalist (which the sociology references and wtf quotes are consistent with) then maybe he’s deliberately being dismissive.

He could also be saying it out of laziness; like everyone already knows what he means so why put more effort in.

WRT writing, I think it’s likely that[1] he edited the paper and got feedback on it, etc. (Maybe the missing “the” (and “that”) were errors introduced after a late edit.) So there’s a few reasons he might not have bothered writing something more specific:

  • it wasn’t an issue for reviewers, so there wasn’t much attention given to it (or none of the reviewers paid attention to it anyway)
  • it’s not relevant. if he never really engages with extreme externalism anyway, then there’s no point defining it well b/c he’ll never come back to it or reference it (or if he does, defining it simplistically like that might be useful for his argument).
  • he might be hiding that he doesn’t understand it, so avoids being specific.

Broadly speaking: Pettigrew introduces normal internalism/externalism as like extreme internalism and everything else. But then immediately says there’s a spectrum. This could be a tactic to get some of the benefits of externalism without conceding that externalism is right: he can still claim that he (and his buddies) are somewhat internalist (this works for basically any position on the spectrum).

He might also see extreme externalism as unattainable. It could be an ideal we can’t get to but can get close (sorta like objective truth in CR), or that it’s so crazy that it’s unserious / not worth dealing with.

  1. he states/implies that there were reviews/edits in the footnote on the first page. ↩︎

Instead of writing X, Pettigrew wrote Y where you’d expect X to go. Would you brainstorm reasons someone might do that? I wasn’t able to figure out reasons from your reply.


  1. RP thinks X and Y are the same
  2. RP doesn’t know what X is but thinks Y is close enough
  3. RP doesn’t know how to write down X (either succinctly or at all), so wrote Y instead
  4. RP didn’t care about writing down Y instead of X
  5. RP wants to straw-man X later, and Y helps do that
  6. RP doesn’t think the difference between X and Y is important
  7. RP wants to misrepresent X
  8. Everyone else writes Y so RP did too
  9. Everyone else thinks X=Y so RP wrote what’s expected
  10. Everyone else thinks X=Y so RP skipped explaining it
  11. RP didn’t think it was worth the effort to write X properly
  12. RP is mistaken about X and thinks Y=X
  13. RP isn’t mistaken about X=Y and skipped over explaining it
  14. RP thinks X=Y is obvious or already established
  15. RP tried writing X but it was confusing so he wrote Y instead
  16. RP’s copying something someone wrote
  17. Someone suggested RP write Y instead of X and RP did
  18. X/Y isn’t relevant to the paper so RP was lazy w/ it
  19. RP never intends to engage with X and RP thinks Y is good enough
  20. RP is trying to redefine X as Y (either broadly/long-term, or specifically for this paper)
  21. RP thinks X is unserious/stupid and that saying Y instead points that out
  22. RP doesn’t know how to engage with X but does with Y
  23. Presupposing X=Y helps RP’s argument
  24. Saying Y instead of X distances RP from extreme externalism
  25. RP doesn’t like X so wrote Y dismissively

OK cool. A good thing to do next would be to consider, for each option, would Pettigrew be doing something good or bad? You could go through all of them, or you might just determine the general outlook (most good, most bad, or mixed). If most are good or bad, then it’d be useful to find the specific cases in the minority for individual analysis.

| idea # | good/bad | notes |
| --- | --- | --- |
| 1 | bad | trivial counterexample |
| 2 | bad | dishonest academia |
| 3 | bad | dishonest academia + ignoring errors |
| 4 | bad | unserious writing |
| 5 | bad | dishonest; motivated reasoning |
| 6 | bad | unestablished; bad academia |
| 7 | bad | dishonest |
| 8 | bad | social + uncritical |
| 9 | bad | ‘’ |
| 10 | bad | some contexts might be okay, but not enough references or explanation to excuse it |
| 11 | bad | unserious, lazy |
| 12 | bad | misrepresentation / overreaching / broken self-judgement |
| 13 | bad | some contexts maybe okay, but a footnote would do in this case (e.g., ‘see my past paper: why X=Y’, or a reference) |
| 14 | bad | some contexts maybe okay, but omitting the reason not okay in general (esp. b/c a paper is meant to be mostly self-contained); omitting key logic that should be easy |
| 15 | bad | overreaching, dishonest |
| 16 | bad | cite it, otherwise social or dishonest, etc |
| 17 | bad | no explanation, social, secondhanded |
| 18 | bad | a footnote could do, or reference, but non-relevance is still a case that needs to be handled |
| 19 | bad | the intro sets all these things as relevant; goal of the paper isn’t to only consider a subset, but consider everything in light of some goal/purpose/context/etc |
| 20 | bad | inexplicit manipulation of norms/ideas, dishonest |
| 21 | bad | explain or cite it, not intro material |
| 22 | bad | dishonest, misrepresentation, overreaching |
| 23 | bad | dishonest, motivated reasoning |
| 24 | bad | this is something that could be argued for or established critically |
| 25 | bad | dishonest, passive aggressive, social, bad academia |

note: ‘’ means “as above”; ‘motivated reasoning’ means that the goal/end result is already in mind, and the person is using dishonest/selective reasoning to get their goal

okay, so they’re all bad. in terms of good/bad there’s no minority.

That might be b/c I’m biased, but I think saying Y instead of X is bad in general, so I guess it’s also not much of a surprise.

one thing I tried to keep track of was “some contexts maybe okay” – like if RP’s quote were from a forum post then it’s mb reasonable not to redefine/reestablish something that was discussed earlier. Obviously RP’s paper isn’t a forum post, but I also don’t know the full context, so it seems okay to note where I think there might be ‘outs’. Those rows are a minority so mb good to analyse. I think the context of RP’s paper was pretty clear tho: it’s a standalone paper/article meant for a journal. So I don’t think this analysis would be that fruitful.

another idea is to evaluate the options how I think a normal person might (or a normal academic). Here’s a table of that. Note that for some rows I assume there’s a social in-group thing going on.

I think all these things are actually bad, but ppl sometimes/often think they’re good. I could be wrong about some, but they seem reasonable enough.

| idea # | good/bad | notes |
| --- | --- | --- |
| 5 | good | tribalism |
| 8 | good | saves time / words |
| 9 | good | easier to read, uses established ideas |
| 10 | good | no point covering what we already know |
| 13 | good | ‘’ |
| 14 | good | ‘’ |
| 16 | good | using existing ideas (tho mb bad b/c plagiarism) |
| 17 | good | accepting feedback and contributions |
| 18 | good | no need to waste our time |
| 19 | good | ‘’ |
| 20 | good | helping the cause |
| 24 | good | also distances ppl he cites (some of which he thanks in footnote 1) |
| 25 | good | helping the cause; mb a joke / funny |

I didn’t feel like this hit on anything particularly important. my next idea for doing stuff is to consider how much of a benefit it could be to RP – look for options that make sense as something for him to do.

Nothing even roughly neutral?

The first candidate reason I thought of was “X and Y are equivalent”. Which is neutral sometimes, e.g. saying “guess” instead of “conjecture”.

The second candidate reason I thought of is “X implies Y” (could be deductive implication or loose, informal implication). In that case, saying Y instead of X is skipping one or more steps. Is skipping a step ever good or neutral?

1 bad trivial counterexample

Would you give the counterexample?

E.g., a belief is justified after 1960.

It’s a bit nonsensical, but there’s only one factor and it’s external. If truth is a factor that can be invoked, I don’t see why anything else is off limits.

If X and Y were equivalent, and that was common knowledge, then mb it’d be neutral. I could see trivial counter-examples being ignored as like out-of-scope, too. So in that case ‘neutral’ might be okay insofar as it’s not a major error. I’m not sure it’s ever good, though.

I think this case is different from saying “guess” instead of “conjecture”, tho.
Particularly: whether X=Y is related to the topic of the paper; it is (or might be) crucial to the core ideas (particularly the spectrum, which is stated as a crucial element). So those words matter a lot more than some other words. X=Y is also a much bigger logical jump than guess ≈ conjecture.
(Also, replacing guess ↔ conjecture is only neutral if there’s overlap and excess capacity, which doesn’t seem to be the case with X=Y.)

Maybe I’m being too harsh here, but I’d see this sort of thing as a problem in my own writing.

Broadly, skipping steps can be good – building up abstractions is largely about building up methods to skip steps without mistakes (e.g., mostly I don’t break down exponentiation into multiplication, etc). And using those sort of tools makes complex ideas easier to write down and communicate.

Skipping steps can also be okay if the steps have already been established, and especially if there were already one or more examples (after the steps were established) that went through the steps.

But, skipping steps isn’t good if there’s a chance your audience doesn’t know what’s happening and also has no way to find out. It’s especially not good if you’re defining something like RP’s spectrum. If RP defined the spectrum on page 4, then replacing X with Y might be okay if it was explained earlier on. But that’s not the case here.

Maybe it’s neutral if RP explains it later on, but in that case why no footnote or forward reference? (Both are easy)

Similar to the first candidate reason, maybe it’s neutral if it’s common knowledge. But how could it be common knowledge if RP is introducing it as one extreme? If that position had a name, then that might be enough of a reference.

I think both candidate reasons are bad in this case mostly b/c it should be easy to explain X=Y, or X => Y, or referencing something that explains it.

I mean, RP goes to extra effort to include, for the defn of extreme internalism:

and that you actually access at that time

I don’t think that extra bit was established, and I’m not convinced it’s appropriate to include, but at least the next sentence references “what Conee and Feldman (2001) call mentalism” which is like half a reference (not exactly the same thing but seems related).

But after that specific and explicit description of extreme internalism, the extreme externalism defn is oddly lacking.

I checked RP’s paper again, and he does say

The clearest inventory of arguments for and against different versions of internalism and externalism about justified belief is found in the introduction to Clayton Littlejohn’s Justification and the Truth-Connection (Littlejohn, 2012, 1–61).

Note: I used this copy of the paper that I found searching ‘epistemology extreme externalism’ – that copy looks to be the same, but it’s easier to copy-paste from than the PDF.

So, if Justification and the Truth-Connection explains it (though I think it’s poor form not to add a cite like (Littlejohn, 2012) after the defn of extreme externalism), then mb neutral is fair.