I appreciate honest feedback: https://admonymous.co/vollmer
I'm the Executive Director at EA Funds, based in Oxford. You can best reach me at jonas.vollmer@centreforeffectivealtruism.org.
Previously, I was a co-founder and co-executive director at the London-based Center on Long-Term Risk, a research group and grantmaker focused on preventing s-risks from AI.
My background is in medicine (BMed) and economics (MSc) with a focus on public choice, health economics, and development economics. See my LinkedIn.
Unless explicitly stated otherwise, opinions are my own, not my employer's. (I think this is how most people use the EA Forum; those who don't have such a disclaimer likely think about it similarly.)
Some further, less important thoughts:
I just published this article addressing some potential misconceptions; it may help people decide whether to apply.
It would plausibly be good for the world if this existed. But for you personally, investing in AFK (or investing conventionally and donating the higher risk-adjusted returns) might be fine. See these articles:
If you wanted to make this happen, another path to success could be to find investors with sufficient interest and then approach a white-label ETF provider to set up the fund (see here).
After looking more into this, we've decided not to evaluate applications for Community Building Grants during this grant application cycle. This is because we think CEA has a comparative advantage here due to their existing know-how, and they're still taking some exceptional or easy-to-evaluate grant applications, so some of the most valuable work will still be funded. It's currently unclear when CBG applications will reopen, but CEA is thinking carefully about this question and I'll be coordinating with them.
That said, we're interested in receiving applications from EA groups that aren't typical community-building activities – e.g., new experiments, international community-building, spin-offs of local groups, etc. If you're unsure whether your project qualifies, just send me a brief email.
I'm aware this isn't the news you and others may have been hoping for, so I personally want to contribute to resolving this gap in the funding ecosystem long-term.
Edit: Huh, some people downvoted. If you have concerns about this comment or decision, please leave a comment or send me a PM.
Some further, less important points:
Great points; I had been thinking along similar lines. I want to second the points about awkward translations and about the fact that a lot of people don't really know what "altruism" means.
Some additional thoughts:
"Effective Altruism" sounds self-congratulatory and arrogant to some people:
"Effective altruism" sounds like a strong identity:
Some thoughts on potential implications:
Thanks to Stefan Torges and Tobias Pulver for prompting some of the above thoughts and helping me think about them in more detail.
It might be interesting to compare that to everyday environmentalism or everyday antispeciesism. EAs have already thought about these areas a fair bit and have said interesting things about them in the past.
In both of these areas, the following seems to be the case:
EAs are already thinking a lot about optimizing #1 by default, so perhaps the project of "everyday longtermism" could be about exploring whether actions fall within #2 or #3 or #4 (and what to do about #4), and what the virtues corresponding to #5 might look like.
I think this post uses the term "Pascal's mugging" incorrectly, and I've seen this mistake frequently so I thought I'd leave a comment.
Pascal's mugging refers to scenarios with tiny probabilities (less than 1 in a trillion or so) of vast utilities (potentially greater than the largest utopia or dystopia that could be achieved in the reachable universe); such scenarios pose a decision-theoretic problem. There is some discussion in Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? and Pascal's Muggle: Infinitesimal Priors and Strong Evidence. (A toy numerical illustration follows at the end of this comment.) Quoting from the first of those pieces:
Yet it would also be naive to say things like “Long-termists are victims of Pascal’s Mugging.”
I think the correct term for the issue you're describing might be something like "cause robustness" or "conjunctive arguments" or similar.
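To make the structure concrete, here is a toy expected-value comparison (the numbers are mine and purely illustrative, not taken from the post or the linked pieces): under naive expected-value maximization, a sufficiently vast payoff can dominate the calculation even at a probability far below 1 in a trillion, which is what makes Pascal's mugging a decision-theoretic problem rather than just a complaint about speculative arguments.

```python
# Toy illustration of the Pascal's mugging structure (illustrative numbers only).

# A mundane option: certainty (p = 1) of a large but ordinary benefit.
certain_value = 1.0 * 1_000

# A "mugging" option: a tiny probability of a vast utility.
tiny_probability = 1e-15   # far below the ~1-in-a-trillion threshold mentioned above
vast_utility = 1e30        # far beyond any ordinary stakes
mugging_value = tiny_probability * vast_utility

print(f"certain option: {certain_value:.0f}")    # 1000
print(f"mugging option: {mugging_value:.0e}")    # 1e+15 -- dominates on naive expected value
```

As I read it, the term picks out this specific structure; a merely speculative or conjunctive argument with, say, a 1-in-100 chance of a large payoff is not a Pascal's mugging in this sense.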
That's a great suggestion, thank you. It will take me a few days to figure this out, so I expect to reply in a week or so. (Edited Sat 27 Feb: Still need a bit longer, sorry.)
Some quick thoughts:
(Thanks to some advisors who recently helped me think about this.)