Every post, comment, or Wiki edit I authored is hereby licensed under a Creative Commons Attribution 4.0 International License.
The board’s behavior was grossly unprofessional
You had no evidence to justify that claim back when you made it, and as new evidence is released, it looks increasingly likely that the claim was not only unjustified but also wrong (see e.g. this comment by Gwern).
Does anyone know where I can find something like that?
You can take a look at the ‘Further reading’ section of criticism of effective altruism, the articles so tagged, and the other tags starting with ‘criticism of’ in the ‘Related entries’ section.
I'm not sure how to react to all of this, though.
Kudos for being uncertain, given the limited information available.
(Not something one can say about many of the other comments to this post, sadly.)
I still tend to agree the expected value of the future is astronomical (e.g. at least 10^15 lives), but then the question is how easily one can increase it.
If one grants that the time of perils will last at most a few centuries, after which per-century x-risk will be low enough to vindicate the hypothesis that the bulk of expected value lies in the long term (even if one is uncertain about exactly how low it will drop), then deprioritizing longtermist interventions on tractability grounds seems hard to justify: because total x-risk is concentrated in the near term, it is comparatively much easier to reduce.
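To make the tractability point concrete, here is a back-of-the-envelope illustration (the size of the risk reduction is purely hypothetical, not a figure from anyone's analysis): if the expected value of the long-term future is $V \geq 10^{15}$ lives and some intervention reduces total near-term existential risk by $\Delta p = 10^{-3}$, then the expected gain is

$$\Delta \mathrm{EV} \approx \Delta p \cdot V \geq 10^{-3} \times 10^{15} = 10^{12} \text{ lives},$$

which is why even modest reductions in a risk concentrated in the next few centuries can dominate the calculation.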
Hi Vasco,
I can see the above applying for some definitions of time of perils and technological maturity, but then I think they may be astronomically unlikely.
What do you think about these considerations for expecting the time of perils to be very short in the grand scheme of things? It just doesn't seem to me that the probability of these possible future scenarios decays nearly fast enough to offset their greater expected value.
Semi-tangential question: what's the rationale for making the reactions public but the voting (including the agree/disagree voting) anonymous?
Here’s an article by @Brian_Tomasik enumerating, and briefly discussing, what strike me as most of the relevant considerations for and against publishing in academia.
The ‘citability’ consideration also applies to Wikipedia, which requires that all claims be supported by “reliable sources” (and interprets that notion quite narrowly). For example, many concepts developed by the rationalist and EA communities cannot be the subject of a Wikipedia article merely because they have not received coverage in academic publications.
Cool. I'd be interested in tentatively providing this search for free on EA News via the OpenAI API, depending on monthly costs. Do you know how to implement it?
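For concreteness, here is a rough sketch of the kind of embeddings-based approach I have in mind; the model name, the in-memory corpus, and the cosine-similarity ranking are all assumptions on my part, not details of the implementation being discussed.

```python
# Minimal sketch of an embeddings-based search using the openai Python library (v1+).
# Assumptions (illustrative only): the "text-embedding-3-small" model, a small
# in-memory list of documents, and cosine similarity for ranking results.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

documents = [
    "Post about AI safety grants",
    "Post about global health charity evaluations",
    "Post about wild animal welfare research",
]

def embed(texts):
    """Return one embedding vector per input text."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

def search(query, docs, doc_embeddings, k=3):
    """Rank documents by cosine similarity between their embeddings and the query embedding."""
    q = embed([query])[0]
    sims = doc_embeddings @ q / (np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(q))
    top = np.argsort(-sims)[:k]
    return [(docs[i], float(sims[i])) for i in top]

doc_embeddings = embed(documents)  # computed once and reusable across queries
print(search("effective altruism and animal welfare", documents, doc_embeddings))
```

If something like this is workable, the corpus embeddings could be computed once and cached, so the recurring cost would mostly be the per-query embeddings, which matters for the monthly-cost question above.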