I strongly downvoted this for not making any of the reasoning transparent and thus contributing little to the discussion beyond stating that "Jan believes this".
This could sometimes be reasonable for the purpose of deferring to authority, but that is riskier in this case because Jan has severe conflicts of interest: he is employed by a core EA organisation and is a stakeholder in, for example, a ~$4.7 million grant to buy a chateau.
Could someone explain in more detail, or give examples, what it looks like when direct work at an organisation like CEA is more valuable than donating $2M per year? What factors make someone "$2M in donations"-better than the next best alternative (who isn't switching from e2g to direct work)? What's the analysis behind these claims?
What if the nuclear bomb had not been developed until after Stalin's death on 5 March 1953? The prospects for international controls on the development, stockpiling, and use of nuclear weapons might have been much improved.
The possibility of better weapon governance (with what impact?) in exchange for an increased risk of Nazi, USSR, or Japanese dominance during a total war seems like a bad tradeoff.
How would the strategy of delaying development have been pitched during a total war? How would the development have been done instead? It's hard to imagine the counterfactual here.
In the future, different types of rewards could probably improve the results of initiatives like this. Currently, the small chance of big rewards for major commitments very strongly selects for people who can afford to commit a large amount of personal risk and time.
Blogging is also extremely long-tailed in impact (the vast majority of blogs have no readers), so ultimately this sort of reward seems to select for people who A) can afford to spend significant time on writing, and B) consider it acceptable to pursue a prize through an activity that is very likely to have no impact.
The way blogging, and writing more generally, usually seems to work is that people do it well out of intrinsic motivation. It is hard to pay directly for quality content, especially if you want it to continue independently of financial incentives.
As an alternative, giving many small rewards with little uncertainty for the recipients would lead many people to try blogging, without so many adverse selection effects. Most participants would probably not continue blogging, but it would increase the absolute number of people who try, and thereby the odds of finding great bloggers who would otherwise never have blogged.
More generally, it seems that prizes like this would work better as motivation for tasks people are already doing, as a way to increase their commitment and quality. For example, prizes for research, or retrospective blogging prizes.
I'm worried about politics. I'm worried that Effective Altruists will waste resources, alienate moderates, and make enemies by participating in partisan politics.
When I've seen EAs write against Trump, the writings have been superficial and lacking in empathy. The most extreme even suggest campaigning against Trump as effective altruism: as more impactful than anything GiveWell or anyone else has recommended.
The claim that one political candidate is comparable to existential risks is extraordinary, and should require extraordinary evidence as well. That such significant but poorly argued claims are being made by people at the forefront of EA is worrying. Such claims can be very harmful: they may redirect donations to highly uncertain and inefficient causes, create political enemies, and alienate apolitical altruists. (Admittedly, partisan politics can also result in new allies.)
My hope and suggestion would be to avoid any and all political claims unless you have already done extensive research and have enough evidence to convince even some people on the "other side". The lack of argument and evidence in the current discourse is simply a worrying sign of political bias.