Elliot Discusses with Effective Altruism (retitled from: Rational Debate Methodology at Effective Altruism?)

EDIT/UPDATE: Find all my EA related stuff at Curiosity – Effective Altruism Related Articles


I asked Effective Altruism (EA) whether they have rational debate methodology.

I have some agreements and disagreements with EA. I’m considering potential discussion. I have questions about that.

Is there a way here to get organized, rational debate following written methodology? I would want a methodology with design features aimed at rationally reaching a conclusion.

I’m not looking for casual, informal chatting that doesn’t try to follow any particular rules or methods. In my experience, such discussions are bad at being very rational or reaching conclusions. I prefer discussion aimed at achieving specified goals (like conclusions about issues) using methods chosen on purpose to be appropriate to the goals. I’m willing to take on responsibility and commitments in discussions, and I prefer to talk with people who are also willing to do that.

I understand that “You don’t have to respond to every comment” is a standard attitude here. I think it’s good for that type of discussion to exist where people may ignore whatever ideas they want to with no transparency. I think that should be an available option and it’s an appropriate default. But I don’t think it should be the only type of discussion, so I’m asking about the availability of alternatives.

Some issues I would expect a debate methodology to address are:

  • Starting and stopping conditions
  • Topic branching
  • Meta discussion
  • Prioritization
  • Bias
  • Dishonesty
  • Social status related behaviors
  • Transparency
  • A method of staying organized, including tracking discussion progress and open issues
  • Replying to long messages without reading them in full
  • The use of citations and quotes rather than writing new arguments
  • When it’s appropriate to expect a participant to read literature before continuing (or study or practice), or more broadly the issue of asking people to do work
  • Handling when a participant is missing some prerequisite knowledge, skill or relevant specialization
  • Can debates involve more than two people, and if so how is that handled?
  • What should you do if you think your debate partner is making a bunch of errors which derail discussion?
  • Should messages ever be edited after being read or replied to?
  • What to do if you think a participant violates the methodology?
  • What to do if you believe a participant insults you, misquotes you, or repeatedly says or implies subtly or ambiguously negative things about you?

I asked a followup question:

I asked whether EA has any rational, written debate methodology and whether rational debate aimed at reaching conclusions is available from the EA community. The answer I received, in summary, was “no”. (If that answer is incorrect, please respond to my original question with a better answer.)

So I have a second question. Does EA have any alternative to rational debate methods to use instead? In other words, does it have a different solution to the same problem?

The underlying problem which rational debate methods are meant to solve is how to rationally resolve disagreements. Suppose that someone thinks he knows about some EA error. He’d like to help out and share his knowledge. What happens next? If EA has rational debate available following written policies, then he could use that to correct EA. If EA has no such debate available, then what is the alternative?

I hope I will not be told that informal, unorganized discussion is an adequate alternative. There are many well-known problems with that, like people quitting without explanation when they start to lose a debate, people being biased and hiding it due to no policies for accountability or transparency, and people or ideas with low social status being ignored or treated badly. For sharing error corrections to work well, one option is having written policies that help prevent some of these failure modes. I haven’t seen that from EA, so I asked, and no one said EA has it. (And also no one said “Wow, great idea, we should have that!”) So what, if anything, does EA have instead that works well?

Note: I’m aware that other groups (and individuals) also lack rational debate policies. This is not a way that EA is worse than competitors. I’m trying to speak to EA about this rather than speaking to some other group because I have more respect for EA, not less.

I replied to a reply:

I haven’t looked into EA norms enough to answer your question, but your question makes me think the same thing that your first question did. If you have some norms to suggest or point to, then please provide some examples.

Thank you for raising this issue. I appreciate the chance to address it rather than have people think I’m doing something wrong without telling me what.

Although I do have some suggestions, I think sharing them now is a bad idea. They would distract from the conceptual issue I’m trying to discuss: Is there a problem here that needs a solution? Does EA have a solution?

I guess your perspective is that of course this is an important problem, and EA isn’t already solving it in some really great way because it’s really hard. In that context, mentioning the problem doesn’t add value, and proposing solutions is an appropriate next step. But I suspect that is not the typical perspective here.

I think most people would deny it’s an important problem and aren’t looking to solve it. In that context, I don’t want to propose a solution to what people consider a non-problem. Instead, I’d rather encourage people to care about the problem. I think people should try to understand the problem well before trying to solve it (because it is pretty hard), so I’d rather talk about the nature of the problem first. I think the EA community should make this an active research area. If they do, I’ll be happy to contribute some ideas. But as long as it’s not an active research area, I think it’s important to investigate why not and try to address whatever is going on there. (Note that EA has other active research areas regarding very hard problems with no solutions in sight. EA is willing to work on hard problems, which is something I like about EA.)

I also wouldn’t want to suggest some solutions which turn out to be incorrect, at which point people stop working on better solutions. It would be logically invalid to dismiss the problem because my solutions were wrong, but that dismissal also strikes me as a likely possibility. Even if my solutions were good, they unfortunately aren’t of the “easy to understand, easy to use, no downsides” variety. So unless people care about the problem, they won’t want to bother with solutions that take much effort to understand.

In my experience over the years posting to blogs and forums, I’ve tried a few things, but they only tested people’s patience, so I’m always looking for stuff that I could apply personally in future. Here are several ideas, some of which I’ve actually tried.

I think those ideas are fine. I’ve tried some of them too. However, if EA was currently doing all of them, I’d still have asked the same questions. I don’t see them as adequate to address the problem I’m trying to raise. Reiterating: If EA is wrong about something important, how can EA be corrected? (The question is seeking reasonable, realistic, practical ways of correcting errors, not just theoretically possible ways or really problematic ways like “climb the social hierarchy then offer the correction from the top”.)

But if you wanted to rank influence, I think EA’s are influenced by media and popular thought just like everybody else. EA is not a refuge for unpopular beliefs, necessarily. Focusing on it as a community that can resolve issues around motivated thinking or bias could be a mistake. EA’s are as vulnerable as any other community of people to instrumental rationality, motivated thinking, and bias.

Isn’t applying rationality (and evidence, science, math, etc.) to charity EA’s basic mission? And therefore, if you’re correct about EA, wouldn’t it be failing at its mission? Shouldn’t EA be trying to do much better at this stuff instead of being about as bad as many other communities at it? (The status quo or average in our society, for rationality, is pretty bad.)

You don’t think that would address problems of updating in EA to some extent?

Do I think those techniques would address problems of updating in EA adequately? No.

Do I think those techniques would address problems of updating in EA to some extent? Yes.

The change in qualifier is an example of something I find difficult to make a decision about in discussions. It’s meaningful enough to invert my answer but I don’t know that it matters to you, and I doubt it matters to anyone else reading. I could reply with topical, productive comments that ignore this detail. Is it better to risk getting caught up in details to address this or better to try to keep the discussion making forward progress? Ignoring it risks you feeling ignored (without explanation) or the detail having been important to your thinking. Speaking about it risks coming off picky, pedantic, derailing, etc.

In general, I find there’s a pretty short maximum number of back-and-forths before people stop discussing (pretty much regardless of how well the discussion is going), which is a reason to focus replies only on the most important and interesting things. It’s also a reason I find those discussion techniques inadequate: they don’t address stopping conditions in discussions and therefore always allow anyone to quit any discussion at any time, due to bias or irrationality, with no transparency or accountability.

In this case, the original topic I was trying to raise is discussion methodology, so replying in a meta way actually fits my interests and that topic, which is why I’ve tried it. This is an example of a decision that people face in discussions which a good methodology could help with.

My contest submission

Sounds interesting. Link please.

How Is EA Rational?

I asked if EA has a rational debate methodology in writing that people sometimes use. The answer seems to be “no”.

I asked if EA has any alternative to rationally resolve disagreements. The answer seems to be “no”.

If the correct answer to either question is actually “yes”, please let me know by responding to that question.

My questions were intended to form a complete pair. Do you use X for rationality, and if not do you use anything other than X?

Does EA have some other way of being rational which wasn’t covered by either question? Or is something else going on?

My understanding is that rationality is crucial to EA’s mission (of basically applying rationality, math, evidence, etc., to charity – which sounds great to me) so I think the issue I’m raising is important and relevant.

In response to a private message about bad faith impasse claims (re my debate policy involving impasse chains), I wrote:

People can definitely make bad faith impasse claims. But if they have to make several bad faith claims in a chain, and put those in public writing, then it’s harder than normal to hide bad faith. When people give up on agreeing, I’d like more visibility into what they think is blocking progress. Increasing transparency and accountability lets debate results more accurately affect people’s reputations. I’d like reasonable observers to have more information for judging who (if anyone) is being unreasonable. And people who have integrity, or care about their reputation, will try not to make bad faith claims.

UNUSED DRAFT TEXT: Bad faith is easier to hide if you can just go silent with no explanation, say “I’m busy”, say “I’m not interested”, etc., and leave it at that. For some conversations, my policies involve establishing availability and interest at the outset and agreeing not to quit without explanation. When explaining quitting, the impasse chain method makes the explanation involve some back-and-forth discussion so there can be clarifying questions, attempts to address problems before giving up, etc. One of my goals is to have more visibility into what’s going on and why the discussion is ending. In other words, I want (some) discussions to reach a conclusion, end in a way people agree to, or have public visibility about how/why they end. At least one of those.

I wrote:

Thanks for the list; it’s the most helpful response for me so far. I’ll try responding to one thing at a time.

Structured debate mechanisms are not on this list, and I doubt they would make a huge difference because the debates are non-adversarial, but if one could be found it would be a good addition to the list, and therefore a source of lots of positive impact.

I think you’re saying that debates between EAs are usually non-adversarial. Due to good norms, they’re unusually productive, so you’re not sure structured debate would offer a large improvement.

I think one of EA’s goals is to persuade non-EAs of various ideas, e.g. that AI Safety is important. Would a structured debate method help with talking to non-EAs?

Non-EAs have fewer shared norms with EAs, so it’s harder to rely on norms to make debate productive. Saying “Please read our rationality literature and learn our norms so that it’ll be easier for us to persuade you about AI Safety” is a tough ask. Outsiders may be skeptical that EA norms and debates are as rational and non-adversarial as claimed, and may not want to learn a bunch of stuff before hearing the AI Safety arguments. But if you share the arguments first, they may respond in an adversarial or irrational way.

Compared to norms, written debate steps and rules are easier to share with others, simpler (and therefore faster to learn), easier to follow by good-faith actors (because they’re more specific and concrete than norms), and easier to point out deviations from.

In other words, I think replacing vague or unwritten norms with more specific, concrete, explicit rules is especially helpful when talking with people who are significantly different than you are. It has a larger impact on those discussions. It helps deal with culture clash and differences in background knowledge or context.

I wrote:

You raise multiple issues. Let’s go one at a time.

I didn’t write the words “rational dispute resolution”. I consider inaccurate quotes an important issue. This isn’t the first one I’ve seen, so I’m wondering if there’s a disagreement about norms.

Normally I’d only mirror writing where I try to explain an idea. I’m sharing this one because I think the misquote problem is notable. It’s a recurring theme in my life. Opposing misquotes is a strong norm I’ve pushed on my forums. And it’s a weird issue to me. People don’t admit to holding beliefs like “misquotes are fine”, but they seem to act like they believe that. It’s hard to argue with them because they don’t make pro-misquote claims or give arguments in defense of misquotes.

I wrote:

Bias and irrationality are huge problems today. Should I make an effort to do better? Yes. Should I trust myself? No – at least as little as possible. It’s better to assume I will fail sometimes and design around that. E.g. what policies would limit the negative impact of the times I am biased? What constraints or rules can I impose on myself so that my irrationalities have less impact?

So when I see an answer like “I think people [at EA] try pretty hard [… to be rational]”, I find it unsatisfactory. Trying is good, but I think planning for failures of rationality is needed. Being above average at rationality, and trying more than most people, can actually, paradoxically, partly make things worse, because it can reduce how much people plan for rationality failures.

Following written debate methods is one way to reduce the impact of bias and irrationality. I might be very biased but not find any loophole in the debate rules that lets my bias win. Similarly, transparency policies help reduce the impact of bias – when I don’t have the option to hide what I’m doing, and I have to explain myself, then I won’t take some biased actions because I don’t see how to get away with them (or I may do them anyway, get caught, and be overruled so the problem is fixed).

We should develop as much rationality and integrity as we can. But I think we should also work to reduce the need for personal rationality and integrity by building some rationality and integrity into rules and policies. We should limit our reliance on personal rationality and integrity. Explicit rules and policies, and other constraints against arbitrary action, help with that.

I got a reply about misquotes and wrote:

Why are mistakes within quotations expected and fine? What processes cause good faith mistakes within quotations, particularly when the quote can be copy/pasted rather than typed from paper? (I think typed-in quotes should be double-checked or else should come with a warning.) I was hoping that EA might have high standards about accuracy. I think that’s important, rather than seeing an avoidable type of misinformation as merely expected and fine.

Our culture in general holds quotations to a higher than normal standard, partly because misquotes put words in other people’s mouths, so they’re disrespectful and violating, and partly because they’re false and you can just not do it. I was hoping EA, with its focus on rationality, might care more than average, but your response indicates caring less than average.

When a quote can be copy/pasted, what good faith processes result in wording changes? I think a norm should be established against editing quotes or typing paraphrases then knowingly putting quote marks around them. I don’t understand why so many people do it, including book authors, or what they’re thinking or what exactly they’re doing to generate the errors.

I replied a second time about misquotes:

Wait, I was checking the quotes you gave and the third is taken out of context and misleading. It’s the kind of thing that IMO would make a book fail a fact check. The original sentence is:

Misgendering deliberately and/or deadnaming gratuitously is not ok, although mistakes are expected and fine (please accept corrections, though).

It did not say that mistakes are expected and fine in general, it said it specifically about misgendering and deadnaming, so it’s not relevant to my question.

Did you know the text you gave referred to misgendering when you told me it was relevant? Did you read the whole sentence? Did you consider, before using a quote, whether your use fit or conflicted with the original context (as should be a standard step every time)? I don’t understand what’s going on (and I’ve run into this kind of problem for years at my own forums – and made rules against it, and given warnings that people will be banned if they keep doing it, which has reduced it a lot – but I’ve never gotten good answers about why people do it). I understand that using a quote out of context involves more of a judgment call than changing the words does, so it’s somewhat harder to avoid, but this still looks like an avoidable case to me.

I got some answers, sort of, about misquoting. The person seems to be saying that they misquote on purpose because they want to and find some kind of advantage in it, and that they believe most other people do that too, including book authors. But it’s all vague, non-literal, indirect, etc. And they won’t provide clarity about what they think is good to do on purpose and what’s an accident or mistake, and what is understandably giving in to temptation when one shouldn’t have (which is a different sort of mistake than misquoting by accident).

They followed up again. I wrote:

Thank you for replying several times and sharing your perspective. I appreciate that.

I think this kind of attitude to quotes, and some related widespread attitudes (where intellectual standards could be raised), are lowering the effectiveness of EA as a whole by over 20%. Would anyone like to have a serious discussion about this potential path to dramatically improving EA’s effectiveness?

I wrote:

I would guess your expectations of how costly it is for people to be as precise as you wish for them to be is miscalibrated, i.e. it’s significantly costlier for people to be as precise as you wish for them to be than you think/how costly it is for you. What do you think?

I think the cost/benefit ratio for this kind of accuracy is very good. The downsides of imprecision are much, much larger than people realize/admit – it basically makes most of their conversations unproductive and prevents them from having very high quality or true knowledge that isn’t already popular/standard (which leads to e.g. some of EA’s funding priorities being incorrect). Put another way, imprecision is a blocker for being persuaded of, and learning, many good ideas.

The straightforward costs of accuracy go down a lot with practice and automatization – if people tried, they’d get better at it. Not misquoting isn’t really that hard once you get used to it (e.g. copy/pasting quotes and then refraining from editing them is, in some senses, easy – people fail at that mainly because they are pursuing some other kind of benefit, not because the cost is too high, though there are some details to learn, like that Grammarly and spellcheck can be dangerous to accurate quotes). I agree it’s hard initially to change mindsets to e.g. care about accuracy. Lots of ways of being a better thinker are hard initially, but I’d expect a rationality-oriented community like this to have some interest in putting effort into becoming better thinkers – at least e.g. comparing that with other options for improvement.

Also, (unlike most imprecision) misrepresenting what people said is deeply violating. It’s important that people get to choose their own words and speak for themselves. It’s treating someone immorally to put words in their mouth, of your choice not theirs, without their consent. Thinking the words are close enough or similar enough doesn’t make that OK – that’s their judgment call to make, not yours. Assuming they won’t disagree, and that you didn’t make a mistake, shows a lack of humility, fallibilism, tolerance and respect for ideas different than your own, understanding that different cultures and mindsets exist, etc. (E.g., you could think to yourself, before misquoting, that the person you’re going to misquote might be a precise or autistic thinker, rather than being more like you, and then have enough respect for those other types of people not to risk doing something to them that they wouldn’t be OK with. Also, if the quote involves any concept that matters a lot to a subculture they’re in but you’re not, then you risk making a change that means a lot to that subculture without realizing what you did.)

Treating another human being immorally is another cost to take into account. Misquoting is also especially effective at tricking your audience into forming inaccurate beliefs, because they expect quotes to be accurate, so that’s another cost. Most people don’t actually believe that they have to look up every quote in a primary source themselves before believing it – instead they believe quotes in general are trustworthy. The norm that quotes must be 100% accurate is pretty widespread (and taught in schools) despite violations also being widespread.

There are other important factors, e.g. the social pressure to speak with plausible deniability when trying to climb a social hierarchy is a reason to avoid becoming a precise thinker even if more precise thinking and communicating would be less work on balance (due to e.g. fewer miscommunications). Or the mindset of a precise thinker can make it harder to establish rapport with some imprecise thinkers (so one way to approach that is to dumb yourself down).

Also, lots of people here can code or do math, so looking at text with character-level precision is a skill they’ve already developed significantly. There are many people in the world who would struggle to put a semicolon at the end of every line of code, or who would struggle to learn and use Markdown formatting rules correctly. Better precision would have larger upfront learning costs for those people. But I don’t think those kinds of inabilities are what stops this forum from having higher standards.

I have a lot more I could say and the issue of raising standards as a path to making EA more effective is one of the few topics I consider high enough priority to try to discuss. Do you want to have a serious conversation about this? If so, I’d start a new topic for it. Also it’s hard to talk about complex topics with people who might stop responding, at any moment, with no explanation. That context makes it hard to decide how much to say, and hard to bring stuff up that might get no resolution or be misunderstood and not clarified.

Part of a comment I wrote:

I try to prioritize only issues that would lead to failure at active, relevant goals such as reaching agreement, rather than bringing up “pedantic” errors that could be ignored. (People sometimes assume I purposefully brought up something unimportant. If you don’t see why something is important, but I brought it up, please ask me why I think it matters and perhaps mention what you think is higher priority. Note: me explaining that preemptively every time would have substantial downsides.)

In practice in the last 5 years, I frequently talk about issues like ambiguity, misquoting, logic, bias, factual errors, social dynamics, or not answering questions. Also preliminary or meta issues like whether people want to have a conversation, what kind of conversation they want to have, what conversation methods they think are good, whether they think they have something important and original to say (and if not, is the knowledge already written down somewhere, if so where, if not why not?). Some of those topics can be very brief, e.g. a yes/no answer to whether someone wants to have a serious conversation can be adequate. I used to bring those topics up less but I started focusing more attention on them while trying to figure out why talking about higher level explanations often wasn’t working well.

It’s hard to successfully talk about complex knowledge when people are making lots of more basic mistakes. It’s also hard to talk while having unstated, contradictory discussion expectations, norms or goals. In general, I think people in conversations communicate less than they think, understand each other less than they think, and gloss over lots of problems habitually. And this gets a lot worse when the people conversing are pretty different than each other instead of similar – default assumptions and projection will be wrong more – so as a pretty different person, who is trying to explain non-standard ideas, it comes up more for me.

I wrote:

One way to determine the end of a debate is “mutual agreement or length 5 impasse chain”. Many other stopping conditions could be tried.

If you want to improve debating throughput, I think you’ll want to measure the value of a debate, not just the total number of debates completed. A simple, bad model would be counting the number of nodes in the debate tree. A better model would be having each person in the debate say which nodes in the debate tree involved new value for them – they found it surprising, learned something new, changed their mind in some way, were inspired to think of a new counter-argument, etc. Then count the nodes each person values and add the counts for a total value. It’s also possible to use some of the concepts without having measurements.
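To make that concrete, here’s a rough sketch in Python. The data structures and names are just my own illustrative choices, not an established system; it only shows counting the nodes each person marked as valuable and adding the counts, plus the “mutual agreement or length 5 impasse chain” stopping condition mentioned above.

```python
from dataclasses import dataclass, field

@dataclass
class DebateNode:
    id: int
    author: str
    text: str
    parent: int | None = None
    # Participants who marked this node as newly valuable to them (surprising,
    # taught them something, changed their mind, inspired a counter-argument).
    valued_by: set[str] = field(default_factory=set)

def debate_value(nodes: list[DebateNode], participants: list[str]) -> int:
    """Count the nodes each person values, then add the counts for a total value."""
    return sum(
        sum(1 for node in nodes if person in node.valued_by)
        for person in participants
    )

def debate_over(mutual_agreement: bool, impasse_chain_length: int) -> bool:
    """One possible stopping condition: mutual agreement or a length 5 impasse chain."""
    return mutual_agreement or impasse_chain_length >= 5
```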

I wrote:

I’m not sure if all those are nodes or some refer to node groups

In general, any node could be replaced by a node group that shows more internal detail or structure. Any one idea could be written as a single big node or a group of nodes. Node groups can be colored or circled to indicate that they partly function as one thing.

what defines the links between nodes?

For conversation related trees, child nodes typically mean either a reply or additional detail. Additional detail is the same issue as node groups.

For replies, a strict debate tree would have decisive refutation as the only type of reply. You could also allow comments, indecisive arguments, positive arguments and other replies in a tree – I’d just recommend clear labelling for what is intended as a decisive refutation and what isn’t.

But your node system, did you develop these node choices from experience because you find them more helpful than some alternatives, or are they part of a formal system that you studied, or is their origin something else?

Karl Popper developed a fallibilist, evolutionary epistemology focused on criticism and error correction. He criticized using positive (supporting or justifying) arguments and recommended instead using only negative, refuting arguments. But he said basically you can look at the critical arguments and evaluate how well (to what degree) each idea stands up to criticism and pick the best one. While trying to understand and improve his ideas, I discovered that indecisive arguments are flawed too, and that ideas should be evaluated in a binary way instead of by degree of goodness or badness.

Trees and other diagrams have a lot of value pretty regardless of one’s views on epistemology. But my particular type of debate tree, which focuses on decisive refutations, is more specifically related to my epistemology.

However, I would love to learn an algorithmic process for how two debaters work from separate trees to a single combined tree, whether it uses textual outlines or tree graphics. Are you aware of something like that or does your current system allow that? It would be new to me.

It’s useful to independently make trees and compare them (differences can help you find disagreements or ambiguities) or to make a tree collaboratively. I also have a specific method where both people would always create identical trees – it creates a tree everyone will agree on. I’ve written this method down several times but I wasn’t able to quickly find it. It’s short so I’ll just write it again:

Have a conversation/debate. Say whatever you want. Keep a debate tree with only short, clear, precise statements of important arguments (big nodes or node groups should be avoided, though they aren’t strictly prohibited – I recommend keeping the tree compact, but you don’t necessarily have to; you can make a second tree with more detail if you want to). This tree functions as an organizational tool and debate summary, and shows what has an (alleged) refutation or not. Nodes are added to the tree only when someone decides he’s ready to put an argument in the tree – he then decides on the wording and also specifies the parent node. Since each person has full control over the nodes he adds to the tree, and can add nodes unilaterally, there shouldn’t be any disagreements about what’s in the tree. Before putting a node in the tree, he can optionally discuss it informally and ask clarifying questions, share a draft for feedback, etc. The basic idea is to talk about a point enough that you can add it to the tree in a way where it won’t need to be changed later – get your ideas stable before putting them in the tree. Removing a node from the tree, or editing it, is only allowed by unanimous agreement.
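For anyone who wants those rules pinned down more concretely, here’s a rough sketch of the method as code. The class and method names are arbitrary illustrations (this isn’t software I’m distributing); the point is just that adding a node is unilateral while editing or removing one requires unanimous agreement.

```python
class SharedDebateTree:
    """Sketch of the shared-tree method: anyone can add a node unilaterally,
    but editing or removing a node requires unanimous agreement."""

    def __init__(self, participants):
        self.participants = set(participants)
        self.nodes = {}  # node_id -> {"author", "text", "parent"}
        self.next_id = 1

    def add_node(self, author, text, parent_id=None):
        # The author alone decides the wording and specifies the parent node.
        if parent_id is not None and parent_id not in self.nodes:
            raise ValueError("parent node does not exist")
        node_id = self.next_id
        self.nodes[node_id] = {"author": author, "text": text, "parent": parent_id}
        self.next_id += 1
        return node_id

    def edit_node(self, node_id, new_text, approvals):
        # Editing, like removal, is only allowed by unanimous agreement.
        if set(approvals) != self.participants:
            raise PermissionError("editing a node requires unanimous agreement")
        self.nodes[node_id]["text"] = new_text

    def remove_node(self, node_id, approvals):
        if set(approvals) != self.participants:
            raise PermissionError("removing a node requires unanimous agreement")
        if any(n["parent"] == node_id for n in self.nodes.values()):
            raise ValueError("remove or re-parent child nodes first")
        del self.nodes[node_id]
```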

I replied:

Making decisions by adding weighted factors involves non-arbitrarily converting between qualitatively different, incommensurable dimensions. That is impossible in the general case. It’s like adding seconds with grams, which requires a conversion ratio, like 3s:2g. I made that ratio up arbitrarily but no other numbers would be better.

Decision making systems should be compared first by whether they work at all. Other issues, like how conveniently they avoid red flags or mediocrity, are secondary.

I think the discussion priorities should be, first, is there a flaw in the impossibility argument? Second, if the impossibility argument is accepted as plausible, we could discuss what’s actually going on when people appear to do something impossible.

I replied:

Has anyone written down the thing you’re proposing in detail? I haven’t seen it in MCDA or Bayesian literature before and a quick Google Scholar search didn’t turn anything useful up. Does it have a name or some standard terms/keywords that I should search? Is there any particular thing you’d recommend reading?

Would you estimate what percentage of the EA community agrees with you and knows how to do this well?

Here’s an attempt to restate what you said in terms that are closer to how I think. Do you understand and agree with this?

  • Convert every dimension being evaluated into a utility dimension. (The article uses the term “goodness” instead of “utility”, but they’re the same concept.)
  • When we only care about utility, dimensions are not relevantly qualitatively different. Each contains some amount of utility. Anything else contained in each dimension, which may be qualitatively different, is not relevant and doesn’t need to be converted. Information-losing conversions are OK as long as no relevant information is lost. Only information related to utility is relevant.
  • (So converting between qualitatively different dimensions is impossible in the general case, like the article says. But this is a big workaround which we can use whenever we’re dealing with utility, e.g. for ~all decision making.)
  • When the dimensions are approximately independent, it’s pretty easy because we can evaluate utility for one dimension at a time, then use addition.
  • When the dimensions aren’t independent, then it may be complicated and hard.
  • Sometimes we should use multiplication instead of addition.
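If I’ve understood it, a bare-bones version of that procedure would look something like this sketch. The conversion functions are made-up placeholders – whether non-arbitrary versions of them exist is exactly what’s in question – and the addition step assumes the dimensions are approximately independent.

```python
# Each dimension gets its own function converting a raw measurement into an
# amount of utility ("goodness"). These conversions are made-up placeholders.
def cost_utility(dollars):
    return -dollars / 1000  # cheaper is better

def time_utility(hours):
    return -hours * 2  # faster is better

def option_utility(option):
    # With approximately independent dimensions, evaluate utility for one
    # dimension at a time, then add (the claim being restated, not my view).
    return cost_utility(option["dollars"]) + time_utility(option["hours"])

options = [
    {"name": "A", "dollars": 5000, "hours": 10},
    {"name": "B", "dollars": 2000, "hours": 40},
]
print(max(options, key=option_utility)["name"])
```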

I’m not mirroring everything.

My profile with latest comments is at Elliot Temple - EA Forum

Someone finally expressed interest in having a debate.