
June 1, 2022 update: the contest is now posted. We've also compiled a separate resource for criticisms and red teaming.


We’re writing this post to say that we’re interested in running a contest on the Forum for writing that critically engages with theory or work from the EA community.

We are: Lizka (the Content Specialist at CEA), Joshua, and Fin.

Consider this a pre-announcement: we’re still figuring out the details, but we want to commit right now to supporting high-quality critical work within EA — including critiques, questioning, red teaming, and minimal-trust investigations. We think a contest could make a good start.

Why do we think this matters? In short, we think there are some reasons to expect good criticism to be undersupplied relative to its real value. And that matters: as EA grows, it’s going to become increasingly important that we scrutinize the ideas and assumptions behind key decisions — and that we welcome outside experts to do the same.[1]

So we want to run a contest to incentivize and reward the most thoughtful and most action-relevant critical work (cross-)posted to the Forum.[2] The Creative Writing Contest last year would be a good point of comparison, but we’re hopeful the prize pool will be significantly larger this time — perhaps up to around $100k if enough submissions really meet the bar. The judging panel is taking shape already, and we’re looking forward to sharing it.

We’re excited to read about errors in widely-cited analyses, gaps in EA’s philosophical toolset, missing crucial considerations, independent reassessments of key claims, evaluations of EA organizations, suggestions for how to further improve community norms and institutions, and efforts to communicate and ‘steelman’ existing criticisms for an EA audience.

The judging criteria aren’t finalized, but we’re imagining an ideal submission would be:

  • Critical — the piece takes a critical or questioning stance towards some aspect of EA theory or practice;
  • Important — the issues discussed really matter for our ability to do the most good as a movement;
  • Novel — the piece presents new arguments, or otherwise presents familiar ideas in a new way;
  • Constructive — we can see how the work is decision-relevant (directly or indirectly). Bonus points for concrete solutions.

We don't expect every winning piece to do well on every one of these criteria, but we do value them all.

We also want to reward reasoning transparency / ‘epistemic legibility’, clarity of writing, ‘punching up’, awareness of context, and a scout mindset. On the other hand, we don’t want to encourage personal attacks or diatribes that are likely to produce much more heat than light. And we hope that subject-matter experts who don’t typically associate with EA find out about this and share insights we haven’t yet heard.

Should you hold off on posting until the contest is properly announced in order to ensure eligibility? You don’t have to: if we go ahead, we’ll accept pieces posted from this week (beginning March 21st, 2022) onwards.[3] 

Another reason we’re ‘pre-announcing’ this idea is to hear some initial thoughts from Forum users. We think it's especially important to get the details right here, so we’d be interested to hear what considerations we should bear in mind that we might otherwise miss. Thanks!

 

  1. Joshua Teperowski Monrad elaborates in his post ‘Rowing and Steering the Effective Altruism Movement’. Fin Moorhouse describes the rationale for a contest in his ‘EA Project Ideas’ post. We also recommend Linch’s shortform post on ‘red teaming papers as an EA training exercise’.

  2. This should complement, rather than duplicate, the ‘Red Team Challenge’ recently announced by Training for Good.

  3. Of course, it might make sense to wait to read the full announcement and the finalized judging criteria.

Comments (27)

Heartwarming

I'm very interested to see how this goes. I guess the main challenge with this kind of competition is finding a way to encourage high-quality criticism without encouraging low-quality, bad-faith criticism.

This is harder than it sounds. The more strongly you disagree with someone's position, the more likely their criticism is to appear to be in bad faith. Indeed, most of the time you can point out legitimate flaws, as everything has flaws if you look at it with a close enough microscope. The difference is that when you think the author is writing something of vital importance, any flaws seem trifling, whilst when you think the author is arguing for something morally repugnant or likely to have disastrous consequences, the flaws scream out at you.

On the other hand, it's possible to write a piece that satisfies any objective criteria that have been set, yet still engages in bad faith.

Thanks, great points. I agree that we should only be interested in good faith arguments — we should be clear about that in the judging criteria, and clear about what counts as a bad faith criticism. I think the Forum guidelines are really good on this.

Of course, it is possible to strongly disagree with a claim without resorting to bad faith arguments, and I'm hopeful that the best entrants can lead by example.

"Clear about what counts as a bad faith criticism"

I guess one of my points was that there's a limit to how "clear" you can be about what counts as "bad faith", because someone can always find a loophole in any rules you set.

Cool! Glad to see this happening.

One issue I could imagine is around this criterion (which also seems like the central one!):

Critical — the piece takes a critical or questioning stance towards some aspect of EA theory or practice

Will the author need to end up disagreeing with the piece of theory or practice for the piece to qualify? If so, you're incentivizing people to end up more negative than they might if they were simply trying to figure out the truth about something whose truth or prudence they were initially unsure of.

E.g. if I start out by thinking "I'm not sure that neglectedness should be a big consideration in EA, I think I'll write a post about it" and then I think/learn more about it in the course of writing my post (which seems common since people often learn by writing), I'll be incentivized to end up at "yep we should get rid of it" vs. "actually it does seem important after all".

Maybe you want that effect (maybe that's what it means to red team?) but it seems worth being explicit about so that people know how to interpret people's conclusions!

I had a very similar reaction. Here's a comment I'd previously offered in response to this idea:

I think "sceptical critiques" tend to be less good and useful, on average than "impassive, unbiased questioning". Commissioning critiques is especially awkward, because then authors have to stick with the X is bad conclusion, even if that's not where their investigation leads them. A better framing IMO is to say that you want people to question major premises, arguments, and conclusions of EA, and that you will accept pure sceptical critiques as a part of that, but that if, from investigating an issue, an author end up with a defense of an argument that fortifies EA, then that's great too.

finm

For what it's worth I think I basically endorse that comment.

I definitely think an investigation that starts with a questioning attitude, and ends up less negative than the author's initial priors, should count.

That said, some people probably already have useful, considered critiques in their heads that they just need to write out. It'd be good to hear them.

Also, presumably (convincing) negative conclusions for key claims are more informationally valuable than confirmatory ones, so it makes sense to explicitly encourage the kind of investigations that have the best chance of yielding those conclusions (because the claims they address look under-scrutinised).

Makes sense! Yeah, as long as this is explicit in the final announcement it seems fine. I also think "what's the best argument against X (and then separately, do you buy it?)" could be a good format.

Worth noting that the post mentions that minimal-trust investigations are in scope. From that link:

The basic idea of a minimal-trust investigation is suspending one's trust in others' judgments and trying to understand the case for and against some claim oneself, ideally to the point where one can (within the narrow slice one has investigated) keep up with experts.

I think a minimal trust investigation can end up being positive or negative. I suppose one could start off with a minimal trust investigation which could then turn into a red team if one disagrees with the generally accepted viewpoint.

I personally feel uncomfortable with a criterion being "critical" for the reasons you and others have mentioned.

Thank you, this is a really good point. By 'critical' I definitely intended to convey something more like "beginning with a critical mindset" (per JackM's comment) and less like "definitely ending with a negative conclusion in cases where you're critically assessing a claim you're initially unsure about". 

This might not always be relevant. For instance, you might set out to find the strongest case against some claim, whether or not you end up endorsing it. As long as that's explicit, it seems fine.

But in cases where someone is embarking on something like a minimal-trust investigation — approaching an uncertain claim from first principles — we should be incentivising the process, not the conclusion!

We'll try to make sure to be clear about that in the proper announcement.

I would expect higher-quality submissions if the team running this were willing to compile a list of (what they consider to be) all the high-quality critiques of EA thus far, from both the EA Forum and beyond. Otherwise I expect you’ll get submissions rehashing the same or similar points.

I think a list like this might be useful for other purposes too:

  • raising the profile of critiques that matter amongst EAs, thus hopefully improving people's thinking
  • signalling that criticism genuinely is welcome and seen as useful

How do you plan to encourage participants outside the EA community?

This sounds great, thanks so much for making this happen!

Minor point on how you communicate the novelty criterion: I'm slightly worried about people misreading and thinking 'oh, I have to be super original', and then either neglecting important unoriginal things like reassessing existing work, or twisting themselves into knots to prove how original they are.

I agree with you that all else equal a new insight is more valuable than one others have already had, but as originality is often over-egged in academia, it might be worth paying attention to how you phrase the novelty criterion in particular.
 

Possible overlap with Cillian Crosson's post here?

Yes, totally. I think a bunch of the ideas in the comments on that post would be a great fit for this contest.

Very excited that this is getting off the ground!! :) 

Genius idea to red-team the red-team contest itself (through comments that can provide thoughtful input)!

[anonymous]

I think one especially valuable way to do this would be to commission/pay non-EA people with good epistemics to critique essays on core EA ideas. There are various philosophers/others I can think of who would be well-placed to do this - people like Caplan, Michael Huemer, David Enoch, Richard Arneson, etc.

I think it would also be good to have short online essay colloquia following the model of Cato Unbound.

Hmmm I think it’s actually really hard to critique EA in a way that EAs will find convincing. I wrote about this below. Curious for feedback: https://twitter.com/tyleralterman/status/1511364183840989194?s=21&t=n_isE2vL3UIJsassqyLs8w

“Not being easy to criticise even if the criticism is valid” seems like an excellent critique of effective altruism.


 

Your description of practical critiques being difficult to steelman with only anecdata available feels like the classic challenge of balancing type I and type II error when reality is underpowered.

In the context of a contest encouraging critiques of effective altruism, I think we may want a much higher tolerance than usual for type I error in order to get less type II error (I'm thinking of the null hypothesis as "critique is false", so a type I error would be accepting a false critique and a type II error would be rejecting a true critique).

Obviously, there needs to be some chance that the critique holds. However, it seems very valuable to encourage critiques that would be a big deal if true, even if we’re very uncertain about the assumptions, especially if the assumptions are clear and possible to test with some amount of further investment (e.g. by adding a question to next year’s EA survey or getting some local groups to ask their new attendees to fill out an anonymous survey on their impressions of the group).

This makes me think that maybe a good format for EA critiques is a list of assumptions (maybe even with the authors’ credences that they all hold, and their reasoning), followed by the outlined critique if those assumptions are true. If criticisms clearly lay out their assumptions, then even if we guess there is, say, a 70% chance that the assumptions don’t hold, in the 30% of possible worlds where they do hold (assuming our guess was well-calibrated :P), having the hypothetical implications written up still seems very valuable (to help us work out whether it's worth investigating these assumptions further, to get us to pay more attention to evidence for and against the hypothesis that we live in that 30% world, and to get us to think about whether there are low-cost actions we can take just in case we do).

I'm excited about this! Thanks for working on it!

Really appreciate the pre-announcement!

I would like to be a part of this; I have extensive red-teaming leadership experience.

Hi EKillian! Could you provide some more context on what you're interested in? Anyone will be welcome to write a submission. If you're more interested in helping others with their work, you could say a bit more about that here in the comments, and then perhaps someone will reach out.

In terms of serving as a judge in the competition, we haven't finalised the process for selecting judges – but it would be helpful if you could DM me with some more information.
