The LW/EA community is a mess.
Recently I was told that one of EA’s advantages, owing to its rationality, is that it doesn’t need to waste much effort on good governance, anti-corruption, etc. I disagreed.
Now that FTX has blown up, there are posts like:
Let me illustrate my point with some major examples I am aware of from EA and EA-adjacent organisations:
- Weak governance structures and financial oversight at the Singularity Institute, leading to the theft of over $100,000 in 2009.
- Inadequate record keeping, rapid executive turnover, and insufficient board oversight at the Centre for Effective Altruism over the period 2016-2019.
- Inadequate financial record keeping at 80,000 Hours during 2018.
- Insufficient oversight, unhealthy power dynamics, and other harmful practices reported at MIRI/CFAR during 2015-2017.
- Similar problems reported at the EA-adjacent organisation Leverage Research during 2017-2019.
- ‘Loose norms around board of directors and conflicts of interests between funding orgs and grantees’ at FTX and the Future Fund from 2021-2022.
Those claims are all sourced. I clicked on the MIRI/CFAR one:
While most people around MIRI and CFAR didn’t have psychotic breaks [like I did], there were at least 3 other cases of psychiatric institutionalizations by people in the social circle immediate to MIRI/CFAR
And two suicides, and a car hijacking.
And here’s what happened when the people working on AI safety became worried that AI would arrive sooner than expected:
Perhaps more important to my subsequent decisions, the AI timelines shortening triggered an acceleration of social dynamics. MIRI became very secretive about research. Many researchers were working on secret projects, and I learned almost nothing about these. I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact. Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.
They are idiots with no idea how to run an effective organization.
there was quite a lot of effort to convince me of their position, that AGI was likely coming soon and that I was endangering the world by talking openly about AI in the abstract (not even about specific new AI algorithms)
And
I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said “yes”.
And
I know there are serious problems at other EA organizations, which produce largely fake research (and probably took in people who wanted to do real research, who become convinced by their experience to do fake research instead)