Introduction

tl;dr: We're running a writing contest for critically engaging with theory or work in effective altruism (EA). 

Submissions can be in a range of formats (from fact-checking to philosophical critiques or major project evaluations); and can focus on a range of subject matters (from assessing empirical or normative claims to evaluating organizations and practices).  

We plan on distributing $100,000, and we may end up awarding more than this amount if we get many excellent submissions. 

The deadline is September 1, 2022. You can find the submission instructions below. Neither formal nor significant affiliation with effective altruism is required to enter the contest.

We are: Lizka Vaintrob (the Content Specialist at the Centre for Effective Altruism), Fin Moorhouse (researcher at the Future of Humanity Institute), and Joshua Teperowski Monrad (biosecurity program associate at Effective Giving). The contest is funded via the FTX Future Fund Regranting Program, with organizational support from the Centre for Effective Altruism.

We ‘pre-announced’ this contest in March.

The rest of this post gives more details, outlines the kinds of critical work we think are especially valuable, and explains our rationale. We’re also sharing a companion resource for criticisms and red teams.

How to apply

Submit by posting on the EA Forum[1] and tagging the post[2] with the contest’s tag, or by filling out this form.

If you post on the Forum, you don't need to do anything except tag your post[2] with the “Criticism and Red Teaming Contest” topic, and we’ll consider your post for the contest. If you’d prefer to post your writing outside the Forum, you can submit it via this form — we’d still encourage you to cross-post it to the Forum (although please be mindful of copyright issues). 

We also encourage you to refer other people’s work to the contest if you think more people should know about it. To refer someone else’s work, please submit it via this form. If it wins, we may reward you for this — please see an explanation below.

The deadline is September 1, 2022.

Please contact us with any questions. You can also comment here.

Prizes

We have $100,000 currently set aside for prizes, which we plan on fully distributing.

Prizes will fall under three main tiers:

  • Winners: $20,000
  • Runners up: $5,000 each
  • Honourable mentions: $1,000 each

In addition, we may award a prize of $100,000 for outstanding work that looks likely to cause a very significant course adjustment in effective altruism.

Therefore, we’re prepared to award (perhaps significantly) more than $100,000 if we’re impressed by the quality and volume of submissions. 

We’re also offering a bounty for referring winning submissions: if you refer a winning submission (if you’re the first person to refer it, and the author never entered the contest themselves), you’ll get a referral bounty of 5% of the award.

We will also consider helping you find proactive funding for your work if you require the security of guaranteed financial support to enable a large project (though we may deduct proactive funding from any prize you win). See the FAQ for more details.

Submissions must be posted or submitted no later than 11:59 pm BST on September 1st, and we’ll announce winners by the end of September.

Criteria

Overall, we want to reward critical work according to a question like: “to what extent did this cause me to change my mind about something important?” — where “change my mind” can mean “change my best guess about whether some claim is true”, or just “become significantly more or less confident in this important thing.”

Below are some virtues of the kind of work we expect to be most valuable. We’ll look out for these features in the judging process, but we’re aware it can be difficult or impossible to live up to all of them:

  • Critical. The piece takes a critical or questioning stance towards some aspect of EA theory or practice. Note that this does not mean that your conclusion must end up disagreeing with what you are criticizing; it is entirely possible to approach some work critically, check the sources, note some potential weaknesses, and conclude that the original was broadly correct.
  • Important. The issues discussed really matter for our ability to do the most good as a movement.
  • Constructive and action-relevant. Where possible we would be most interested in arguments that recommend some specific, realistic action or change of belief. It’s fine to just point out where something is going wrong; even better to be constructive, by suggesting a concrete improvement.
  • Transparent and legible. We encourage transparency about your process: how much expertise do you have? How confident are you about the claims you’re making? What would change your mind? If your work includes data, how were they collected? Relatedly, we encourage epistemic legibility: the property of being easy to argue with, separate from being correct.
  • Aware. Take some time to check that you’re not missing an existing response to your argument. If responses do exist, mention (or engage with) them.
  • Novel. The piece presents new arguments, or otherwise presents familiar ideas in a new way. Novelty is great but not always necessary — it’s often still valuable to distill or “translate” existing criticisms.
  • Focused. Critical work is often (but not always) most useful when it is focused on a small number of arguments and a small number of objects. We’d love to see (and we’re likely to reward) work that engages with specific texts, strategic choices, or claims.

We don't expect that every winning piece needs to do well at every one of these criteria, but we do think each of these criteria can help you most effectively change people’s minds with your work.

We also want to reward clarity of writing, avoiding ‘punching down’, awareness of context, and a scout mindset. We don’t want to encourage personal attacks, or diatribes that are likely to produce much more heat than light. And we hope that subject-matter experts who don’t typically associate with EA find out about this, and share insights we haven’t yet heard.

What to submit

We’re looking for critical work that you think is important or useful for EA. That’s a broad remit, so we’ve suggested some topics and kinds of critiques below.

If you’re looking for more detail, we’ve collaborated on a separate post that collects resources for red teaming and criticisms, including guides to different kinds of criticisms, and examples. If you’re interested in participating in this contest, we highly recommend that you take a look. (We’d also love help updating and improving it.)

It’s helpful — but not required — to also suggest 1–3 people you think most need to heed your critique. For many topics, this nomination is better done privately (contact us, or submit through the form). We’ll send it their way where possible. (If you don’t know who needs to see it most, we’ll work it out.)

Formats

You might consider framing your submission as one of the following:

  • Minimal trust investigation — A minimal trust investigation involves suspending your trust in others' judgments, and trying to understand the case for and against some claim yourself. Suspending trust does not mean determining in advance that you’ll end up disagreeing.
  • Red teaming — ‘Red teaming’ is the practice of “subjecting [...] plans, programmes, ideas and assumptions to rigorous analysis and challenge”. You’re setting out to find the strongest reasonable case against something, whatever you actually think about it (and you should flag that this is what you’re doing).
  • Fact checking and chasing citation trails — If you notice claims that seem crucial, but whose origin is unclear, you could track down the source, and evaluate its legitimacy.
  • Adversarial collaboration — An adversarial collaboration is where people with opposing views work together to clarify their disagreements.
  • Clarifying confusions — You might simply be confused about some aspect of EA, rather than confidently critical. You could try getting clear on what you’re confused about, and why.
  • Evaluating organizations — including their (implicit) theory of change, key claims, and their track record; and suggesting concrete changes where relevant.
  • Steelmanning and ‘translating’ existing criticism for an EA audience — We’d love to see work succinctly explaining existing criticisms and constructing the strongest versions of them (‘steelmanning’). You might consider doing this in collaboration with a domain expert who does not consider themself part of the EA community.

Again, for more detail on topic ideas, kinds of critiques, and examples: visit our longer post with resources for critiques and red teams.

We don’t want to give an analogous list for topic ideas, because any list is necessarily going to leave things out. However, you might take a look at Joshua’s post outlining four categories of effective altruism critiques: normative and moral questions, empirical questions, institutions & organizations, and social norms & practices.

Browsing this Forum (especially curated lists like the Decade Review prizewinners, the EA Wiki, and the EA Handbook) could be a good way to get ideas if you are new to effective altruism.

If you’re unsure whether something you plan on writing could count for this contest, feel free to ask us.

Additional resources

We’ve compiled a companion post, in which we’ve collected some resources for criticisms and red teaming. 

We’re also tentatively planning on running (or helping with) several workshops on criticisms and red teaming, which will be open to anyone who is interested, including people who are new to effective altruism. We hope that the first two will be in June. If you’d like to hear about dates when they’re decided, you can fill out this form.

The judging panel

The judging panel is:

No one on the judging panel will be able to “veto” winners, and every submission will be read by at least two people. If submissions are technical and outside of the panelists’ fields of expertise, we will consult domain experts. 

If we get many submissions or if we find that the current panel doesn’t have enough bandwidth, we may invite more people to the panel. 

Rationale

Why do we think this matters? In short, we think there are some reasons to expect good criticism to be undersupplied relative to its real value. And that matters: as EA grows, it’s going to become increasingly important that we scrutinize the ideas and assumptions behind key decisions — and that we welcome outside experts to do the same.

Encouraging criticism is also a way to encourage a culture of independent thinking, and openness to criticism and scrutiny within the EA community. Part of what made and continues to make EA so special is its epistemic culture: a willingness to question and be questioned, and freedom to take contrarian or unusual ideas seriously. As EA continues to grow, one failure mode we anticipate is that this culture may give way to a culture of over-deference.

We also really care about raising the average quality of criticism. Perhaps you can recall some criticisms of effective altruism that you think were made in bad faith, or otherwise misrepresented their target in a mostly unhelpful and frustrating way. If we don’t make an effort to encourage more careful, well-informed critical work, then we may have less reason to complain about the harms that poor-quality work can cause, such as by misinforming people who are learning about effective altruism. Crucially, we’d also miss out on the real benefits of higher-quality, good-faith criticism.

In his opening talk for EA Global this year, Will MacAskill considered how a major risk to the success of effective altruism is the risk of degrading its quality of thinking: “if you look at other social movements, you get this club where there are certain beliefs that everyone holds, and it becomes an indicator of in-group mentality; and that can get strengthened if it’s the case that if you want to get funding and achieve very big things you have to believe certain things — I think that would be very bad indeed. Looking at other social movements should make us worried about that as a failure mode for us as well.”

It’s also possible that some of the most useful critical work goes relatively unrewarded because it might be less attention-grabbing or narrow in its conclusions. Conducting really high-quality criticism is sometimes thankless work: as the blogger Dynomight points out, there’s rarely much glory in fact-checking someone else’s work. We want to set up some incentives to attract this kind of work, as well as more broadly attention-grabbing work.

Ultimately, critiques have an impact by bringing about actual changes. The ultimate goal of this contest is to facilitate those positive changes, not just to spot what we’re currently getting wrong.

In sum, we think and hope: 

  1. Criticism will help us form truer beliefs, and that will help people with the project of doing good effectively. People and institutions in effective altruism might be wrong in significant ways — we want to catch that and correct our course.
    1. This is especially important in the non-profit context, since it lacks many of the signals in the for-profit world (like prices). For-profit companies have a strong signal of success: if they fail to make a profit, they eventually fail. One insight of effective altruism is that there are weaker pressures for nonprofits to be effective — to achieve the goals that really matter — because their ability to fundraise isn’t necessarily tied to their effectiveness. Charity evaluators like GiveWell do an excellent job at evaluating nonprofits, but we should also try to be comparably rigorous and impartial in assessing EA organizations and projects, including in areas where outputs are harder to measure. Where natural feedback loops don’t exist, it’s our responsibility to try making them!
    2. It’s also especially important for effective altruism, given that so many of the ideas are relatively new and untested. We think this is especially true of longtermist work.
  2. Stress-testing important ideas is crucial even when the result is that the ideas are confirmed; this allows us to rely more freely on the ideas.
  3. We want to sustain a culture of intellectual openness, open disagreement, and critical thinking. We hope that this contest will contribute to reinforcing that culture.
  4. Highlighting especially good examples of criticism may create more templates for future critical work, and may make the broader community more appreciative of critical work.
  5. We also think that people in the effective altruism network tend to hear more from other people in the network, and hope that this contest might bring in outside experts and voices. (You can see more discussion of this phenomenon in "The motivated reasoning critique of effective altruism".)
  6. We want to break patterns of pluralistic ignorance where people underrate how sceptical or uncertain others (including ‘experts’) are about some claim.

Finally, we want to frame this contest as one step towards generating high-quality criticism, and not the final one. For instance, we’re interested in following up with winning submissions, such as by meeting with winning entrants to discuss ways to translate your work into concrete changes and communicate your work to the relevant stakeholders.

What this is not about

Note that critical work is not automatically valuable just by virtue of being critical: it can be attention-grabbing in a negative way. It can be stressful and time-consuming to engage with bad-faith or ill-considered criticism. We have a responsibility to be especially careful here.

This contest isn’t about making EA look open-minded or self-scrutinizing in a performative way: we want to award work that actually strikes us as useful, even if it isn’t likely to be especially popular or legible for a general audience.

We’re not going to privilege arguments for more caution about projects over arguments for urgency or haste. Scrutinizing projects in their early stages is a good way to avoid errors of commission; but errors of omission (not going ahead with an ambitious project because of an unjustified amount of risk aversion, or oversensitivity to downsides over upsides) can be just as bad.

Similarly, we don’t want this initiative to only result in writing that one-directionally worries about EA ideas or projects being too ‘weird’ or too different from some consensus or intuitions. We’re just as interested to hear why some aspect of EA is being insufficiently weird — perhaps not taking certain ideas seriously enough. Relatedly, this isn’t just about being more epistemically modest: we are likely being both overconfident in some spots, and overly modest in others. What matters is being well calibrated in our beliefs!

We would also caution against criticizing the actions or questioning the motivations of a specific individual, especially without first asking them. We urge you to focus on the ideas or ‘artefacts’ individuals produce, without speculating about personal motivations or character — this is rarely helpful.

Contact us

Email criticism-contest@effectivealtruism.com, message any of the authors of this post via the Forum, or leave a comment on this post. 

Q&A

Submissions and how they’ll be judged

  • Can I submit work I’ve already done? Yes, if it's recent. We’re accepting posts from the date of our pre-announcement (March 25, 2022) onwards.
  • Can I submit something that I got funding for already? Yes. Let us know if you have specific concerns.
  • Can I refer another person’s work? Yes. And if that person’s work wins a prize (and the author didn’t submit it themselves, and you’re the first person to refer the work), we’ll also reward you with a commission (5% of the prize). We’d love to discover work from outside the EA community that could be relevant for effective altruism. Submit referrals via this form.
  • What if I want to work on a large project for this contest that I can’t afford to carry out on my own time? Contact us. We can’t guarantee anything, but we’d like to help enable your work, by pointing you to sources of funding in effective altruism, and potentially arranging direct financial support where necessary. If we (the organizers of this contest) directly fund your work in advance, we’ll deduct whatever amount you received in advance from any potential prize that you win.
  • I have a complaint or criticism about an organization or individual, but it’s not something that’s appropriate to share publicly. You might consider contacting the CEA Community Health Team, who can advise on the next steps, including acting as an intermediary. You can also send them an anonymous message.
  • Can I submit anonymously? Yes. You can make an anonymous account on the Forum, or you can use this form to submit without posting to the Forum.
  • Do I have to already be involved in effective altruism to submit something? No, not at all. We’re actively excited to bring in external ideas and expertise. If you’re new to the Forum, the Wiki could be a good place to start to check for what has already been written. You’re welcome to make broad criticisms of effective altruism, but focused critiques that draw on your area(s) of expertise could stand an especially good chance of being entirely novel.
  • I’d love to hear what [person who’s not engaged with effective altruism] would have to say about [some aspect of effective altruism]. How can I make that happen? If you know this person, we encourage you to reach out to them! If you’re unsure or uncomfortable about contacting them directly, let us know, and we can try getting in touch.
  • Some of the panellists belong to organizations I’d like to criticize. Isn’t that an issue? All our panellists are committed to evaluating your work on its own merit — being associated with an org or project you are criticizing should not, and will not, count as a reason to downgrade your work. Panellists will recuse themselves if they (or we) feel that a conflict of interest will inhibit their ability to fairly evaluate a particular submission. If you’re still concerned about this or would like to request that specific panellists be recused, feel free to contact us.
  • What counts as “EA”? We have in mind the ideas, institutions, projects, and communities associated with effective altruism. You can learn more at effectivealtruism.org and here on the Forum.
  • Does the criticism or red teaming have to come to the conclusion that the original work was wrong? No. We’re very happy to award prizes to work of the form: “I checked the arguments and sources in this text. In fact, they check out. Here are my notes.”
  • Does my submission need to fulfill all the criteria outlined above? No. We understand that some formats make it difficult or impossible to satisfy all the requirements, and we don’t want that to be a barrier to submitting. At the same time, we do think each of the criteria is a good indicator of the kind of work we’d like to see.

About the contest

  • How does this relate to Training for Good’s ‘Red Team challenge’? The Red Team Challenge (RTC) is a separate initiative: a program run by Training for Good which provides training in red-teaming best practices and then pairs small teams of 2–4 people to critique a particular claim and publish the results. We are very excited for the results of the programme to be submitted to this contest! So this contest is a complement to the Red Team Challenge, rather than a substitute. Training for Good may also collaborate with us on workshops and [other resources].
  • Where’s the money coming from? The prizes will be awarded via the FTX Future Fund Regranting Program. The Centre for Effective Altruism is providing operational support (like coordination between judges). Note that the EA Forum is not sponsoring this prize, and isn't liable for it.
  • Doesn’t this penalize the people whose work is getting criticized? We want to encourage a norm where having your work fairly criticized is great news: an indication that it was trying to answer an important question. We want to encourage a sense of criticism being part of the joint enterprise to figure out the right answers to important questions. However, we are aware that being criticized is not always enjoyable, and some criticism is made in bad faith. If you’re concerned about being the subject of bad-faith criticism, let us know.
  • Does this mean that you think that non-critical work is less valuable than critical work? No. We just think that high-quality critical work is often under-rewarded and under-supplied — like many other kinds of non-critical work!

Other

  • I have another question that isn’t answered in this post. Leave a comment if you suspect others might have the same question, and we’ll try to answer it here. Otherwise, feel free to contact us.

We're extremely grateful to everyone who helped us kick this off, including the many people who gave feedback following our pre-announcement of the contest. 

  1. ^
  2. ^ Instructions for how to tag a post are here.


46 comments

A few questions, suggestions and concerns.

Firstly, I expect the people whose criticisms I'd most want to hear to be very busy. I hope the contest will consider lower-effort but insightful or impactful submissions to account for this?

Secondly, I'd expect people with the most valuable critiques to be more outside EA since I would expect to find blindspots in the particular way of thinking, arguing and knowing EA uses. What will the panelists do to ensure they can access pieces using a very different style of argument? Have you considered having non-EA panelists to aid with this?

Thirdly, criticisms from outside of EA might also contain mistakes about the movement but nonetheless make valid arguments. I hope this can be taken into account and such pieces not just dismissed.

Fourthly, I would also expect criticisms from people who have been heavily involved in EA over the years to be valuable but, if drawing on their experience, hard to write fully anonymously. What reassurances can you offer and safeguards do you have in place beyond trusting the panelists and administrators that pieces would be fairly assessed? What plans do you have in place to help prevent and mitigate backlash, especially given that many decisions within EA are network based and thus even with the best of intentions criticism is likely to have some costs to relationships.

Replying in personal capacity:

I hope the contest will consider lower effort but insightful or impactful submissions to account for this?

Yes, very short submissions count. And so should "low effort" posts, in the sense of "I have a criticism I've thought through, but I don't have time to put together a meticulous writeup, so I can either write something short/scrappy, or nothing at all." I'd much rather see unpolished ideas than nothing at all.

Secondly, I'd expect people with the most valuable critiques to be more outside EA since I would expect to find blindspots in the particular way of thinking, arguing and knowing EA uses. What will the panelists do to ensure they can access pieces using a very different style of argument? Have you considered having non-EA panelists to aid with this?

Thanks, I think this is important.

  • We (co-posters) are proactively sharing this contest with non-EA circles (e.g.), and others should feel welcome and encouraged to do the same.
  • Note the incentives for referring posts from outside the Forum. This can and should include writing that was not written with this contest in mind. It could also include writing aimed at some idea associated with EA that doesn't itself mention "effective altruism".
  • It obviously shouldn't be a requirement that submissions use EA jargon.
  • I do think writing a post roughly in line with the Forum guidelines (e.g. trying to be clear and transparent in your reasoning) means the post will be more likely to get understood and acted on. As such, I do think it makes sense to encourage this manner of writing where possible, but it's not a hard requirement.
  • To this end, one idea might be to speak to someone who is more 'fluent' in modes of thinking associated with effective altruism, and to frame the submission as a dialogue or collaboration.
  • But that shouldn't be a requirement either. In cases where the style of argument is unfamiliar, but the argument itself seems potentially really good, we'll make the effort — such as by reaching out to the author for clarifications or a call. I hope there are few really important points that cannot be communicated through just having a conversation!
  • I'm curious which non-EA judges you would have liked to see! We went with EA judges (i) to credibly show that representatives for big EA stakeholders are invested in this, and (ii) because people with a lot of context on specific parts of EA seem best placed to spot which critiques are most underrated. I'm also not confident that every member of the panel would strongly identify as an "effective altruist", though I appreciate connection to EA comes in degrees.

Thirdly, criticisms from outside of EA might also contain mistakes about the movement but nonetheless make valid arguments. I hope this can be taken into account and such pieces not just dismissed.

Yes. We'll try to be charitable in looking for important insights, and forgiving of inaccuracies from missing context where they don't affect the main argument.

That said, it does seem straightforwardly useful to avoid factual errors that can easily be resolved with public information, because that's good practice in general.

What plans do you have in place to help prevent and mitigate backlash[?]

My guess is that the best plan is going to be very context specific. If you have concerns in this direction, you can email criticism-contest@effectivealtruism.com, and we will consider steps to help, such as by liaising with the community health team at CEA. I can also imagine cases where you just want to communicate a criticism privately and directly to someone. Let us know, and we can arrange for that to happen also ("we" meaning myself, Lizka, or Joshua).

I can't speak for everyone, but will quickly offer my own thoughts as a panelist:
1. Short and/or informally written submissions are fine. I would happily award a tweet thread it if was good enough. But I'm hesitant to say "low effort is fine", because I'm not sure what else that implies.
2. It might sound trite, but I think the point of this contest (or at least the reason I'm excited about it) is to improve EA. So if a submission is totally illegible to EA people, it is unlikely to have that impact. On "style of argument" I'll just point to my own backlog of very non-EA writing on mostly non-EA topics.
3. I wouldn't hold it against a submission as a personal matter, and wouldn't dismiss it out of hand, but it's definitely a negative if there are substantive mistakes that could have been avoided using only public information.
 

A big part of my getting into EA was this debate between Oxford lefties and the baby 80k staff. The socialist/deontological case was weaker. But the points that Mills makes about systemic change and the streetlight fallacy describe the two biggest ways EA practice has changed in the last decade. We moved in his direction, despite him.

Maybe the lesson is: "even if you don't win, you might shape the movement"

I feel that external criticism of EA was generally stronger back then. Perhaps this is just a reflection of broader recent cultural trends, which have degraded the quality of public discourse.

Here is a useful steelman of Mills' critique, courtesy of 'pragmatist' (note that "earning to give" used to be known as "professional philanthropy"):

I'm not endorsing this argument (although there are parts of it with which I sympathize), but I think it is a lot better than the case for Mills as you present it in your post:

If a friend asked me whether she should vote in the upcoming Presidential election, I would advise her not to. It would be an inconvenience, and the chance of her vote making a difference to the outcome in my state is minuscule. From a consequentialist point of view, there is a good argument that it would be (mildly) unethical for her to vote, given the non-negligible cost and the negligible benefit. So if I were her personal ethical adviser, I would advise her not to vote. This analysis applies not just to my friend, but to most people in my state. So I might conclude that I would encourage significant good if I launched a large-scale state-wide media blitz discouraging voter turn-out. But this would be a bad idea! What is sound ethical advice directed at an individual is irresponsible when directed at the aggregate.

80k strongly encourages professional philanthropism over political activism, based on an individualist analysis. Any individual's chance of making a difference as an activist is small, much smaller than his chance of making a difference as a professional philanthropist. Directed at individuals, this might be sound ethical advice. But the message has pernicious consequences when directed at the aggregate, as 80k intends.

It is possible for political activism to move society towards a fundamental systemic change that would massively reduce global injustice and suffering. However, this requires a cadre of dedicated activists. Replaceability does not hold of political activism; if one morally serious and engaged activist is lured away from activism, it depletes the cadre. Now any single activist leaving (or not joining) the cadre will not significantly affect the chances of revolution succeeding. But if there is a message in the zeitgeist that discourages political participation, instead encouraging potential revolutionaries to participate in the capitalist system, this can significantly impact the chance of revolutionary success. So 80k's message is dangerous if enough motivated and passionate young people are convinced by their argument.

It's sort of like an n-person prisoner's dilemma, where each individual's (ethically) dominant strategy is to defect (conform with the capitalist system and be a philanthropist), but the Nash equilibrium is not the Pareto optimum. This kind of analysis is not uncommon in the Marxist literature. Analytic Marxists (like Jon Elster) interpret class consciousness as a stage of development at which individuals regard their strategy in a game as representative of the strategy of everyone in their socio-economic class. This changes the game so that certain strategies which would otherwise be individually attractive but which lead to unfortunate consequences if adopted in the aggregate are rendered individually unattractive. [It's been a while since I've read this stuff, so I may be misremembering, but this is what I recall.]
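The game-theoretic structure being gestured at can be sketched concretely. A minimal public-goods game is a standard formalization of the n-person prisoner's dilemma; the specific payoff numbers below are my illustrative assumptions, not from the comment. Defecting is strictly dominant for each individual, yet the all-defect equilibrium is Pareto-dominated by universal contribution.

```python
# A minimal n-player public-goods game (an n-person prisoner's dilemma).
# Each contribution costs 1, is multiplied by m, and shared equally by all n.
# Payoffs are illustrative assumptions (n = 10 players, multiplier m = 3).

def payoff(contributes: bool, others_contributing: int,
           n: int = 10, m: float = 3.0) -> float:
    """Payoff to one player given their choice and how many others contribute."""
    total = others_contributing + (1 if contributes else 0)
    return m * total / n - (1 if contributes else 0)

n = 10
for k in range(n):  # however many others contribute...
    assert payoff(False, k, n) > payoff(True, k, n)  # ...defecting pays more

# Yet everyone contributing Pareto-dominates everyone defecting:
print(payoff(True, n - 1, n))   # all contribute: 3.0 * 10 / 10 - 1 = 2.0
print(payoff(False, 0, n))      # all defect: 0.0
```

Because the marginal return on a contribution (m/n = 0.3) is below its cost, defection dominates for each individual; but since m > 1, universal contribution beats universal defection, which is the gap between the Nash equilibrium and the Pareto optimum the comment describes.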

I feel that external criticism of EA was generally stronger back then. Perhaps this is just a reflection of broader recent cultural trends.

Maybe because EA was tiny and elite then, so only a true intellectual would bother to criticise.

Back in my day my enemies did instrumental harm like a rational person.

The 80,000 Hours article on voting does not say "don't vote". At least link the article in your post!

https://80000hours.org/articles/is-voting-important/

Pablo is quoting a 10-year-old comment; the 80k article you link was published in 2020.

I missed this, my bad.

It still invalidates the specific critique - but yeah if Pablo's point is just about "quality of critique" then this doesn't really invalidate that.

At the same time, if the shift in EA practice that you claim is indeed real (which I think it is), then it would also seem that EA has failed to adequately acknowledge its mistakes with respect to past critiques. This might hold some insights as to why certain forms of criticism are disincentivized by default.

(I do hope that this contest will make a genuine attempt to correct that disincentive landscape.)

Sounds right

The problem is, we're not an agent, so no one makes the decision to shift, and no one is noticeably responsible for acknowledging credit and blame. But it's still fair to want it.

I also suspect that making a big deal about the winners would be a good thing. For example, if the winner of the prize was awarded on the main stage at an EA Global and given a fireside chat that'd further encourage good faith criticism and demonstrate that we really care about it.

Thank you so much for your work on this, I'm excited to see what comes out of it. 

I'm interested in fleshing out "what you're looking for"; do you have some examples of things written in the past which changed your minds, which you would have awarded prizes to?

For example, I thought about my old comment on patient long-termism, which observes that in order to say "I'm waiting to give later" as a complete strategy you need to identify the conditions under which you would stop waiting (as otherwise, your strategy is to give never). On the one hand, it feels "too short" to be considered, but on the other hand, it seems long enough to convey its point (at least, embedded in context as it was), and so any additional length would be 'more cost without benefit'.

Random personal examples:

  • This won the community's award for post of the decade. Its disagreement with EA feels half-fundamental; a sweeping change to implementation details and some methods. 
  • This was much-needed and pretty damning. About twice as long as it needed to be though.
  • This old debate looks good in hindsight
  • The initial patient longtermist posts shook me up a lot.
  • Robbie's anons were really good
  • This is on the small end of important, but still rich and additive.
  • This added momentum to the great intangibles vibe shift of 2016-8 
  • This was influential, bizarrely necessary to correct a community bubble which burned a lot of time and mental health. But hardly fundamental.
  • Can't remember where it was, a Progress Studies bit about how basic science looks bad on a naive cost-benefit view but has to date clearly been the fount of utility
  • EA is (was?) ignoring criticism

 

I like your comment and would've taken it seriously, but this contest is only accepting things written after March 2022. Here's a form for older stuff (no cash yet sorry).

What percentage of the people on the panel are longtermists? It seems, at first glance, that almost everyone is, or at least working in a field/org that strongly implies they are. If so, isn't this a problem for the impartiality of the results? Even if not, how is an independent outsider (like the people making submissions) supposed to believe that? 

This is likely to have the opposite effect; it will reinforce the current thinking in EA rather than challenge it, while monetarily rewarding people for parroting back the status quo. 

The crucial complementary question is "what percentage of people on the panel are neartermists?"

FWIW, I have previously written about animal ethics, interviewed Open Phil's neartermist co-CEO, and am personally donating to neartermist causes.

I sympathise with this and generally think that EA should take conflicts of interest more seriously.

That said, I think this is subtly the wrong question: what we really want is, "how rational are the judges?" How often did they change their mind in response to arguments of various kinds from various places of various tones?

Can we say anything to convince you of that? Maybe.

Anyway: Most days I feel like more of a "holy shit x-risk" guy than a strong longtermist. I briefly worked in international development, was a socialist, a feminist, a vegan, an e2g, etc, etc. I took and liked a bunch of classes on weird things like Nietzsche, Derrida, Bourdieu. My comments on here are a good sample of me on my best behaviour.

I'm really excited about this, and look forward to participating! Some questions: how will you determine which submissions count as "winners" vs. "runners-up" vs. "honorable mentions"? I'm confused about what the criteria for differentiating the categories are. Also, are there any limits on how many submissions can make each category?

Just an appreciation comment: I think this post was very well written and handled tricky questions well, especially the Q&A section.

And this seems great to highlight:

We want to encourage a sense of criticism being part of the joint enterprise to figure out the right answers to important questions.

Maybe interesting. A friend writing a draft asked me for some posts for background.

Here are posts that came to mind off the top of my head (do suggest posts I missed):

Blindspots:

Diversity:

Policy:

Jobs:

Organisation:

  • https://forum.effectivealtruism.org/posts/oNY76m8DDWFiLo7nH/what-to-do-with-people

It's possible I missed it but I didn't see anything stating whether multiple submissions from one author are allowed, I assume they are though?

Don't see why not, as long as it's not salami sliced.

Makes sense, thanks!

Thanks for putting this contest together! Is there a comprehensive list of major EA projects? 

Best I can think of is looking for the announcement posts inside each of these tags

https://forum.effectivealtruism.org/topics/all

Is co-authorship permitted? Apologies if I missed this in the post! 

It's permitted, yes! 

The team of coauthors who write the winning submission will get the prize, and can share it as the members see fit. A good default might be to just split the prize evenly, and if you're collaborating on something that might win a prize that you think should be distributed differently, I'd recommend that you agree on this in advance. 

(No need to apologize. I don't think we discussed co-authorship anywhere in the post. I'm now thinking we should consider adding it to the Q&A section, so thank you for bringing it up!)

One issue is that networked and connected people may have greater access to pre-publish criticism in the form of google doc comments, and getting google doc comments seems like a fairly robust strategy for improving the quality of an essay. If simply the best essays are awarded, then we may ossify some dynamics around being networked and well connected, or failing to recognize people from outside of our ingroup. 

Can someone post something and then re-post a better version that takes into account all of the feedback they got in the comments? (or should early versions not be tagged with the contest tag?)

Motivation for this question: trying to work out a low effort way for my smart[1] non-EA friends to

 1) post their thoughts in a way that feels relatively low-stakes but still has a clear upside; and

 2) give them the option to iterate on their ideas in the coming months based on anything that they find thought-provoking in the initial response.

  1. ^

    I have the good fortune of often being the least intelligent person in the room and I feel I should be making better use of this superpower 💪🏼

It's probably extremely hard to critique people who have spent 10 years steel-manning their assumptions[1] without being able to go back and forth to build up any butterfly ideas, even if there is a great critique out there.

  1. ^

(and I also am obviously not going to be nearly as good an intellectual sparring partner as the entire EA community collectively would be so it seems better to develop ideas in public than in private)

I'd be happy to see this kind of process, and don't think it's against the rules of the contest. You might not want to tag early versions with the contest tag if you don't expect them to win and don't think panelists should bother voting on them, but tagging the early versions wouldn't count against you for the final version. 

On a different note (taking off my contest-organizer hat, putting on my Forum hat): I think people should feel free to post butterfly ideas with the idea that they will develop them further. The Forum exists in part for this kind of communal idea development. (Of course, this isn't the best approach for certain kinds of idea development. In particular, it might make sense to do some basic research on the Forum before posting certain questions or starting to write something long on a topic you're very unsure about.)

Hello, I have written a post in response to this contest but it doesn't appear to be visible for whatever reason - net downvotes perhaps? Here is a link in case anyone is interested: https://forum.effectivealtruism.org/posts/bep6LhLcKqtEj3eLs/belonging

It’s visible but well off the front page without scrolling or pressing “more posts”.

Basically, there’s limited space and posts with low interest or “low quality” will fall off (I haven’t read your post, this isn’t judgement).

Even without positive votes, your post would have been visible for a few hours to a day. Usually, forum members will upvote posts they think deserve to be on the front page. You might not have gotten any votes.

I guess this is unfair or path dependent but basically there’s limited space and no better scheme has been clearly proposed (keeping new posts higher comes at the expense of older highly voted posts for example).

Will you consider all submissions together post 1 September, or on an ad hoc basis as and when they are received? Is there any advantage or disadvantage to posting early? I am working on something currently but am wary of submitting it early and it falling to the back of people’s minds by the time the decisions are made in September.

Are people encouraged to share this opportunity with non-EA friends and in non-EA circles? If so, maybe consider making this clear in the post?

I'm currently writing a sequence exploring the legal viability of the Windfall Clause in key jurisdictions for AI development. It isn't strictly a red-team or a fact-checking exercise, but one of my aims in writing the sequence is to critically evaluate the Clause as a piece of longtermist policy.

If I'd like to participate, would this sort of thing be eligible? And should I submit the sequence as a whole or just the most critical posts?

Sounds to me like that would count! Perhaps you could submit the entire sequence but highlight the critical posts.

✨✨Content✨✨

Alrighty, not sure how this contest works or what is going on, but I’ve got content to add in this thread!

My content might be different because I don't see it as "red teaming". I think "red teaming" is criticism that tends to be opposed to the thing it criticizes. While wildly aggressive, I think I accept the underlying goals and try to improve them systemically, for example, by finishing with constructive, specific suggestions.

Also, I think my content is different because it's not circling the same topics (like, I don’t see anyone else writing these ideas or solutions). 

Finally, everything will be themed with Nirvana. Please play the following song (Sliver)

FYI I downvoted this and your other comment entirely because of the gratuitous pictures, videos etc.

Without directly confronting you (it's wrong and not acceptable), and writing in an impartial voice:

These pictures and videos are a deliberate comment/critique on the hidden effects of current aesthetics and norms of discussion. 

Here, your reaction is being intentionally provoked, because they are further illustrations of what the critique views as defective: the prioritization of aesthetics over content. (Any number of the points being made, about for-profit entities, alternative theories of change, seem monumental, even if half true, but "We'll downvote them because of a picture".)

Suspicion of Anthropic Silent Shadow (AKA “Sass” or "Sassy")

"PREREGISTRATION OF CRITICISM" (this isn't the full criticism or solution, but I don't know when I will type it up):

The root issue of Sassy concerns is that a major realization of the EA interest in, and money entering AI, might be in the form of super high levels of funding to nascent entities. A major thread here is the straddling of these entities across the non-profit/for-profit boundary. Anthropic is one member of this class, but a number of other organizations are coming up. 

(A brief sketch to give an impression of this level of funding is in this comment: “Next-level Next-level”).

The consequent effects of this funding are large, and include casting a shadow on all recruiting and organization formation across EA. This remains true (and some effects are amplified) even if the funding is virtuous: if EAs are recruited, for example, pulling EA talent into middle-management roles at AI orgs. There are positive effects too, such as high talent inflows. Importantly, most of these effects are silent.

As mentioned, a major thread is the for-profit status of these organizations. Some complications of this status are important (but cerebral):

  • the "cost effectiveness" of these interventions could be infinitely positive
  • it introduces a new theory of change: EA steering and leadership of relevant industries
  • it introduces a completely new theory of change related to TAI and takeoff, distinct from AI safety

However, the most immediate issue with for-profit status is venal. The slipperiness/porousness of straddling altruistic and for-profit projects, and the incentives this creates, might be bad and hard to manage. To be clear, I am worried about situations where for-profits wielding altruistic narratives produce bad outcomes, much worse outcomes than a regular for-profit would.

Regarding the amount of funding, it seems possible that no situation like this exists in any non-profit ecosystem like EA in history (but we can probably find smaller instances where high-quality non-profits are decapitated as their talent and processes flow to for-profits).

Note that Sassy criticism differs from, or is even opposed to, most concerns about spending. For example, it views certain concerns about "conflict of interest" as irrelevant or even misguided and counterproductive (EAs want closely aligned EAs together in leadership positions).

Solutions

Sass can't be "stopped" now, and probably never could have been. 

There are tangible things we can do that are robustly good:

  • Norms of frank communication about what people are doing when they get money or interest from EAs for these AI projects; this is good and interesting
  • A person whose explicit job is to check out what's going on (and who is funded by an endowed fund for a set period of time)

Note that both the above actions don't need to have an adversarial character. Basically, it's just leaning into the reality.

 

Conflicts of interest

Note that I have 4 conflicts of interest (basically, in the wrong direction, the kind that would normally cause a sane person not to write this):

  • I am funded by the relevant parties I am directly criticizing
  • I am a wannabe working on a for-profit language model thingy (so the very thing I am writing against)
  • I seek collaboration with people inside of these entities
  • I directly use several APIs and tools from the companies and even undocumented features and aid, which can be cut off
  • Finally, in theory, I know (non-EAs) people who want to invest in these “for profit” organizations, and writing this isn’t helping that deal flow
  • My collaborators read and cringe at my forum comments

No wait, that's actually six conflicts of interest.

So maybe “Next-level Next-level” will actually refer to the effects on my career, which is exciting.