A Plan to Improve the World: 20 Rational Debate Advocates

I came up with a new plan for changing the world. I’m seeking initial feedback and discussion. This is a brief summary which assumes familiarity with some of my relevant articles (particularly about topics like debate methodology and paths forward).

Plan: 20 advocates spread rational debate method ideas on the web and social media for a year. (Averaging over 3 posts per day means over 20,000 total posts.) Elliot writes relevant essays, gives some guidance, and is available for debates.

Key question: What results could we expect assuming there were 20 advocates who all did their job well? I think it’d have a significant effect on the world. Basically, not everyone would ignore these ideas, and they’re very powerful so they’d spread once some influential people or groups took interest and started doing them. Once people were having more rational discussions and debates, it would advantage good ideas (conclusions would be better on average) and people would learn more, and it’d become easier to differentiate rational thought leaders from people no one should listen to.

What ideas should be spread? In short, that intellectuals (including amateurs who want to do better) should be open to debate but currently aren’t. They should have written debate policies and methodology, transparency, anti-bias policies, and ways to be corrected on errors that other people know about. They should write some information down, and be held accountable for it, which gives critics predictability about what will get them attention and responses. They should make more organized, documented, transparent efforts not to allocate attention by social status. They should participate in organized debates, reasonably often, using either CF’s methodology with debate trees or else some alternative that is written down which they believe is better. (If someone says they’re available to debate, but no one wants an organized (methodology-following) debate with them, that’s fine too. Anyone popular would easily get some debates, but anyone who isn’t popular might not.)

Biggest advantage of this plan: Advocates don’t need to be great philosophers. They don’t need to catch up to my skill and knowledge. (This is much easier and more concrete than my old plan, which was basically that people learn to be great philosophers and then we figure out a plan after that. I thought fewer than 10 great philosophers would be enough to first make a bunch of progress, then do something that utilizes that progress. I still believe that, but this new plan is more accessible.)

What is required of advocates? Learning the CF ideas related to debate. (Besides current articles, I’d create some training materials for this and for how to do advocacy.) Being competent, smart and reasonable. Not posting anything tilted. Staying on message. Not fighting with people. Learning some basic media training skills. Being open to debate some. (Losing debates is OK, but you have to do a good job, explicitly follow rational methods, and come off as reasonable.) A one-year commitment and being consistent, not flaky. (People are welcome to be involved without any commitment or consistency. They just don’t count towards the goal of having 20 advocates.)

This is a low-resource, efficient plan for making the world better. Besides me, it just takes 20 volunteers spending under 10 hours per week (they can have a full-time job too) for a year. Many think tanks, non-profits, hobby projects and small businesses have over 20 people, last for many years, and accomplish much less than this plan would.

I do not plan to recruit other than writing on my own websites as usual. I’ll post a CF article explaining this plan in the future. If you like the plan, you can prepare to be an advocate and/or recruit people. You can also criticize and debate the plan. If people don’t refute the plan or even claim it’s wrong, but don’t care enough to become advocates and recruit enough other advocates, then it won’t happen, and I will not blame myself. In that scenario, I will think that I did more than enough. I don’t expect this plan to happen soon (there are currently fewer than 20 active CF forum posters), but I wanted to give people the option; people may find it inspiring to have an easier way to do something important that requires less philosophy learning first. I know some people really care about changing the world. (I’m open to changing the world, but if others don’t choose to help then I can also be content with the more indirect plan of researching and writing.) This plan offers a focus point other than just learning philosophy and could clarify the CF advocacy situation. It’s a goal that people can work towards or express disinterest in, which can be clarifying. I think the existence of the plan can make the CF community better, for many years, even if it doesn’t start. Whether it’s implemented or not, it’s good to have a plan and to have an understanding of what changing the world would involve.

There are other details but this should give people the general idea of the plan. I found it difficult to write a finalized document for the CF site and realized I should get some feedback first which will help guide me on e.g. what issues to emphasize and what objections or misunderstandings people have.

What do you like about this plan? Does it make any sense to you? What doubts or objections do you have? What questions do you have? Please discuss.

Learning to be an advocate of rational debate policy would be a goal I would enjoy working towards.

One potential issue that I’m thinking of is that many people don’t value rationality or truth-seeking. Having a rational debate policy is a solution to a problem. But if people don’t recognize themselves as having a problem in the first place, it’s going to be hard to convince them to adopt a new solution.

I think that convincing people they have a problem will be harder than convincing them that a rational debate policy is a good solution. Admitting they have a problem would involve acknowledging the way they do things is (at least partly) irrational. And people usually don’t like thinking about their own irrationality. Does that make sense?

I think one issue could be keeping 20 people motivated and driven for a full year.

I think that rationally spreading these kinds of ideas is hard. ET has already been spreading many of these ideas by trying to engage with people on e.g. LW, EA, etc. (IIRC), with limited interest shown from other people. My guess is that rational debate advocates would have an even harder time spreading these ideas than ET has had.
If that’s the case, I think it might be hard for many of the rational debate advocates to stay fully motivated and driven (which I think is needed for doing a good job), or for some to even stay at it at all, for an entire year.

So addressing the risk of diminishing motivation and drive might be something worth thinking about and figuring out solutions for.

I think that people would be more likely to stay motivated if they felt they were making progress. Like if the number of volunteers slowly increased from 20 to 50 to 100. Or if famous intellectuals started publicly endorsing Elliot’s ideas.

There do seem to be a lot of motivated activists who care about spending time on causes. I think people will want to help us if they think we have a good plan for making a positive change in the world.

The target audience would be people who already claim to value rationality or who would easily be interested. There are plenty of people like that. Here’s an example that would be appropriate to respond to:

This thread is actually pretty good (my impression without reading the material he’s attacking to double-check that he didn’t misrepresent it). It makes some reasonable points. It also explicitly discusses meta issues about debate and rationality, which isn’t needed but makes it easier to bring those topics up. Although it’s pretty good, it also has a notable error. Errors can sometimes be useful to point out (reliably finding the error is not necessary to be an advocate, but an advocate should probably be able to catch over 5% of errors like this).

Reading the thread, you can see the author already has some interest in debating and also in making debate better. But he doesn’t actually provide any clear way to point out his errors. You can reply in a tweet and probably be ignored, but he isn’t e.g. offering to debate anyone, under any conditions, on a forum about his claims. So there’s an opportunity to bring up some CF ideas that could help him do better or to challenge him to debate. There’s also an opportunity to talk about organizing debate and making a tree diagram of the AI safety debate that interests him – e.g. one could offer to collaborate on a tree with him or even (harder and not necessary) make an initial tree and show him. He also might be interested in using CF debate ideas to challenge and call out some of his current enemies – they can be a powerful tool for any side that cares to use them (with the risk that people might actually accept a debate and do better than you). Note: In general, you have to pick only one opportunity to use and focus on that. Don’t respond to the same thread with two different criticisms or suggestions.

People who claim interest in rationality are often partially faking or lying (it’s a way to brag, look smart, look rational, etc.), but I estimate over 10% of them will express some kind of further interest or take some additional steps if you get their attention, rather than having zero followup. And that’s still a lot of people.

Advocates don’t need to – and actually shouldn’t – do a lot of things I’ve done (I had different goals). Don’t write essays. Don’t be pushy. Stay on message (don’t e.g. discuss misquoting or deadnaming). Don’t be too intense – one post a week at EA for months would be more effective than saying a lot in a short time period. It can also help a ton to establish rapport with some people there instead of being seen and treated as a total outsider.

In general, one or two people are easy to ignore, and are ignored. Advocates, rather than doing anything better than me, just need to exist in larger numbers, be consistent over time, and avoid fighting with people. People will listen way more if they see ideas in multiple places from multiple different people. They often need to see something, think it maybe sounds nice, then get reminded of it several times over the next few months before they’ll start paying attention.

Instead of investing energy in explaining a lot to anyone in particular, the advocates will write situation-appropriate, customized versions of standard talking points, and will get more involved when there’s a good opportunity that isn’t really hard to use (e.g. if someone responds positively and asks a reasonable question). It’s very hard to push any particular person to be more rational. It’s much easier to play a numbers game – do a lot of inoffensive, near-zero-downside, cheap, easy posts and most of the time nothing visible will happen and that’s OK. If there’s a visible, positive response 5% of the time, that’s plenty. And for every response, dozens or sometimes even thousands of people see a message and some of them become more likely to respond (or take another step forward such as reading an article) next time.

Writing initial messages should be easy – you just bring up one standard point that you’ve written about repeatedly before and you connect it to and customize it for the thing you’re responding to. Following up should be easy because it’s normally only done when there’s a good opportunity. Hard or low probability opportunities can generally just be ignored. And finding opportunities requires regularly reading/watching stuff online which a lot of CF fans already do a ton of anyway.

One of the reasons it’s hard for me to share ideas is the gap in perspective and skill is too big. E.g. I see too many of people’s errors. It’s easier to connect with them if you can honestly have a higher opinion of them, or agree with them more, than I do.

Ok. Yeah, that makes sense to me and sounds like it might drastically improve the likelihood of success in spreading the ideas.

Edit: Finished the sentence and removed a quote. I accidentally posted before finishing the post.

I can see how that helps avoid the problem of trying to convince an irrational person to use a rational debate policy.

Would you mind sharing the error?

This would have been my guess:

But what if we don’t pre-assume that the Orthogonality Thesis is false?

It wouldn’t be a confident guess though.

That seems very low to me. I would be worried that I didn’t really understand a topic if I could only catch 10% - 20% of the errors in a discussion. I don’t think I would feel comfortable advocating for a topic until I could point out around 90% of errors with a less than 20% false positive rate.

I think that’s a good strategy. (it’s from your reply to deroj, not me)