In a Reddit thread where the OP was complaining that the ethics folks only criticize & never propose solutions, one of the ethics folks’ allies accused him of not having actually read the literature where they propose solutions. I LOLed & asked for just one example but got nothing.
I see examples like this regularly.
I think it’s important to have a good sense of what the world is like, e.g. that people often make claims – including factual claims about literature they say they’ve read and you haven’t – that they cannot and will not back up with actual sources or examples.
And that isn’t just some idiots on reddit. You can’t get the “top people” in AI Ethics or many other fields to actually engage in halfway reasonable debate or scholarship.
The fraud was pretty blatant. They duplicated the initial data set to double their sample size. The duplicates had 0-1,000 randomly added and were written in a different font (lol). Then the second data set (supposed to be a later update) was generated by randomly adding 0-50,000 to each entry in the initial data.
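To make that concrete, here’s a toy sketch (my own made-up numbers, not the real data set or the actual forensic analysis) of why that kind of manipulation leaves a statistical fingerprint once the raw data is available:

```python
import random

random.seed(0)

# Made-up stand-ins for the initial data set (purely illustrative).
baseline = [random.randint(5_000, 60_000) for _ in range(1_000)]

# The manipulation described above for the second data set:
# add a uniform random 0-50,000 to each entry of the initial data.
update = [b + random.randint(0, 50_000) for b in baseline]

# The fingerprint: the differences between the two data sets come out
# flat across 0-50,000 with a hard cutoff, instead of whatever non-uniform
# shape genuine follow-up measurements would normally have.
diffs = [u - b for u, b in zip(update, baseline)]
buckets = [0] * 5
for d in diffs:
    buckets[min(d // 10_000, 4)] += 1
print("counts per 10,000-wide bucket:", buckets)  # roughly equal counts in every bucket: a red flag
```

With the raw data public, a pattern like that (or the duplicated entries with 0-1,000 added) is easy to check for; with only the paper’s summary statistics, it isn’t.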
They were caught because they posted their data online, which allowed it to be analyzed. If they had just published the same paper and kept the data secret, it might not have been possible to catch even blatant fraud like this. It appears that only one author of the paper was involved in the fraud, which may explain why the fraudulent data was shared. Presumably the other authors wanted to share the data and thought it was legit, and the fraudster didn’t know in advance that they would do that and couldn’t talk them out of it without revealing the fraud.
Oh, reading more, four authors wrote responses. Apparently some of them did a second paper in 2020 (original in 2012) and found some potential issues with the data, which is why they posted the full data for the 2012 paper publicly. Originally that data wasn’t shared.
The co-author involved with the fraudulent data says the insurance company sent him fraudulent data, so it’s not his fault.
All 4 authors who commented said basically that they agree the data was falsified, that they’re sorry they didn’t look at it more closely for problems in 2012, that their only error was trust and inadequate vigilance, and that the paper should be retracted.
Note: People wasted effort on this because of the fraud. There were six failed replications of the original study. The paper was apparently popular or influential.
others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005),
You might therefore expect that source (free online) to contain the word “credence”. It does not.
What about the word “probability”? No. There are two results searching “proba” and neither says anything like the claim that we should use beliefs about probabilities rather than credences (which are degrees, or probabilities, of belief).
It’s hard to look up academic philosophy positions because the cites are lies. They just send you to a 20-page text, provide no specific quote or page number, and it has nothing to do with the topic they cited it for.
I was concerned after seeing the title – “Perception and Conceptual Content” – and reading the introductory section. It didn’t look relevant. So I skimmed and searched, but I’m not willing to read the whole 20 pages just in case there’s a relevant aside somewhere. I was expecting a text that’s actually on the topic it was cited for. I did look at the ending, which is about the same topic as the start and the middle bits I skimmed.
If you’re going to cite a whole text without specifying a section/passage/page/quote, the whole text in general ought to be relevant to what you’re citing it for (or failing that it at least should be easy to find the right part, e.g. by reading the table of contents and seeing one section is about the right topic). Even if there was a relevant paragraph or two somewhere in the 20 pages, which used different terminology, the cite would still be awful.
This reminds me a bit of cheating in speed-running (and related runs).
The common solution (streaming runs live) is prevalent enough that offline runs can be accused of cheating just because they’re done offline[1]. The equivalent in academia might be a default assumption of fraud when data isn’t public. That comparison is based on the prior defaults: for academia, no public data, just conclusions; for speedrunning, proof provided, but easily faked.
A conclusion like “speed running (and related stuff) have more intellectual integrity than academia” doesn’t feel unreasonable there.
Note: I think default assumptions like that are bad for speed-running and the like. But since academia should have higher standards, it’s okay there (or at least people should rightly be skeptical). And reasonable exceptions can be made where appropriate, though 3rd-party reviews (like software audits, where the conclusions are published) can help with that case.
Also, the original paper (about dishonesty) was influential enough that I knew about it – and have for a while; I don’t remember where I heard of it. I did study some 2nd-year psychology (at uni) around that time, so I might know about it because of that.
I just told an academic about a math error in their paper and they said it’s too late to correct it! Academia sucks at error correction.
I’m unclear on how the error got through editing let alone peer review (incompetence all around?).
They wrote:
necessary truths such as 567 + 123 = 69,741
The same math equation was in the paper four times. Every time it said + not * (it works as multiplication).
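For the record, a quick check using nothing but the numbers in the paper:

```python
print(567 + 123)  # 690 -- what the printed "+" actually gives
print(567 * 123)  # 69741 -- matches the paper's 69,741, so "*" was presumably intended
```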
Can’t people see at a glance that adding two 3-digit numbers will not result in a 5-digit answer? I can. But the authors and reviewers can’t?
The error is actually confusing. When I first read it, I assumed the math was intended to be false and was trying to figure out how to make sense of that in the paper.
The first few minutes cover some massive conflicts of interest and lies in a prestigious (I think) journal. The author of an article claiming the lab-leak hypothesis was a conspiracy theory was actually involved in funding relevant research in Wuhan. Also, the journal had received funding from the CCP.
Based on the title and thumbnail, at first I figured the bad scholarship would be in the video rather than critiqued by the video. After clicking through, I see it’s from a decent channel. The pandering title and thumbnail are misleading, though.
One thing I just realized: a lot of papers are only accessible through preprints. Like, if a paper was accepted into a journal and ends up behind a paywall, but a preprint is available, won’t people just use the preprint (university journal portals aside)?
The problem with that is that even if an error in the preprint was fixed before journal publication, the error can still propagate via the (more accessible) preprint. So journals both restrict access to the (potentially) higher-quality document and don’t prioritize taking errors seriously.
This is happening increasingly with their lectures - one of my close friends did a great operating systems course with amazing lectures and videos → stoled.jpg :(
And there’s a huge legal issue brewing because of this as well :\
This post criticizes the idea that Roman soldiers were paid in salt, or received an allowance for buying salt, and that that’s where “salary” comes from.
Haven’t scrutinized it super closely but this part criticizing Wikipedia jumped out at me:
The trouble with citing Pliny as a source for the myth is of course that Pliny doesn’t say anything of the kind. The problem is exacerbated by Wikipedia, which bald-facedly re-writes Pliny, and has been quoted very widely:
the Roman historian Pliny the Elder, who stated as an aside in his Natural History’s discussion of sea water, that ‘[I]n Rome…the soldier’s pay was originally salt and the word salary derives from it…’.
This is a mistranslation, just to be clear. And this wording doesn’t even appear in the linked source. And Pliny isn’t writing about sea water, but about salt itself. None of that has stopped this fake quotation being repeated in countless books and websites.
Note, 18 Jan.: this error, and the other Wikipedia excerpt quoted above, have since been corrected. However, some other parts of the articles are still inaccurate: see below.
I wrote about bad scholarship in Atomic Habits. It got the etymology of a word wrong and contradicted its own source. Linking it here for reference. Self-Help Books - #8 by Elliot
It is a calmer Stewart [on his 2021 TV show] than during his famous diatribe on Crossfire in 2004, during which he tore into his rightwing blowhard interviewers Tucker Carlson and Paul Begala
Crossfire was a show about the left and right fighting – which was one of Stewart’s main criticisms of it. It had two hosts so they could represent both the right and left. Paul Begala was the leftist. Stewart was making a non-tribalist criticism of the fighting between the left and right, and how the media encourages it. But The Guardian misrepresents that as Stewart having made a tribalist attack on two rightwing blowhards, not on one leftist and one rightist. Even if they weren’t liars (or misinformation dispensers or whatever), The Guardian would still be part of the problem that Stewart was criticizing.
Also, I’m not well-versed enough in this stuff that I should be the one catching The Guardian out. I don’t think I’ve ever watched an episode of Crossfire. I had no idea who Paul Begala is – I just remembered the basic concepts of Crossfire, and of Stewart’s criticism, well enough to be suspicious and look it up. Also, I thought Stewart was pretty calm on Crossfire. And “diatribe” makes it sound like Stewart gave one mean speech, when actually he interactively asked some questions and made a few separate short points.