This is a very exciting development!
In your third footnote, you write:
It might be argued that [rewarding participants for publishing analyses that move our subjective estimates significantly away from our current views] makes the prize encourage people to have views different from those presented here. This seems hard to avoid, since we are looking for information that changes our decisions, which requires changing our beliefs.
However, an analysis that reassures you that your current estimates are correct can make your beliefs more resilient, and in turn change some of your decisions. For example, such an analysis can make you donate a larger fraction of your assets now, since you expect your beliefs to change less in the future than you did before. It can also make you less willing to run these prize contests, since they are less likely to change your views (or make them even more resilient). So I wonder if you should have instead rewarded participants for either moving your estimates significantly away from your current views or for making your current views significantly more resilient.
I think this is a valid concern. Separately, it's not clear that all s-risks are x-risks, depending on how "astronomical suffering" and "human potential" are understood.
What do you think about the concept of a hellish existential catastrophe? It highlights both that (some) s-risks fall under the category of existential risk and that they have an additional important property absent from typical x-risks. The concept isolates a risk the reduction of which should arguably be prioritized by EAs with different moral perspectives.
I would appreciate being able to reply to a private message by answering the associated email notification, as I can do with e.g. GitHub and Discourse.
The results are below. The data is here.
This is a plausible mechanism for explaining why content is of lower quality than one would otherwise expect, but it doesn't explain differences in quality over time (and specifically quality decline), unless you add extra assumptions, such as that the proportion of people with a low bar to posting has increased recently. (Cf. Ryan's comment)
I'm interested in learning how plausible people find each of these mechanisms, so I created a short (anonymous) survey. I'll release the results in a few days [ETA: see below]. Estimated completion time is ~90 seconds.
I don't know about the replication crisis, but I can confirm that your comment replicates.
Note that your dollars were not replaced by FTX: you donated to Carrick's campaign, whereas FTX's donations went to a super PAC that did not coordinate with or make contributions to that campaign (it was this super PAC that spent millions on advertising). Of course, you may still be right that your donations were ineffective or even net negative, especially if your contribution resulted in increased ad spending by the campaign.
Sure—done. (I kept a link to the old course, since I thought it would also be of interest.)