Jonas Vollmer

I appreciate honest and direct feedback: https://admonymous.co/vollmer

I'm the Executive Director at EA Funds, based in Oxford. You can best reach me at jonas.vollmer@centreforeffectivealtruism.org.

Previously, I was a co-founder and co-executive director at the London-based Center on Long-Term Risk, a research group and grantmaker focused on preventing s-risks from AI.

My background is in medicine (BMed) and economics (MSc). See my LinkedIn.

Unless explicitly stated otherwise, opinions are my own, not my employer's. (I think this is how most people use the EA Forum; those who don't have such a disclaimer likely think about it similarly.)

Comments

Announcing "Naming What We Can"!

If someone writes another book about EA, it should be titled How to Be Great at Doing The Most Good You Can Do Better

A Comparison of Donor-Advised Fund Providers

You said it in the "My process" section, but not earlier.

A Comparison of Donor-Advised Fund Providers

Also, if anyone is up for it, I think a resource on DAF providers in other countries would be useful as well.

A Comparison of Donor-Advised Fund Providers

Does this apply to the US only? If so, it could be good to say so at the very top.

(I haven't read the post, but I'm very excited that such a resource exists!)

The Long-Term Future Fund has room for more funding, right now

I don't really think there's a difference between the two: 

  • The LTFF can encourage anyone to apply. Several of the grants in the current round resulted from proactive outreach to specific individuals. (This still involves filling in the application form, but that's just because it's slightly lower-effort than exchanging the same information via email.) 
  • A donor lottery winner can only grant to individuals who submit due diligence materials to CEA, which also involves filling in some forms.

Some quick notes on "effective altruism"

I specifically wrote:

Perhaps partly because of this, at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists", despite self-identifying, e.g., as feminists, utilitarians, or atheists.

For further clarification, see also the comment I just left here.

EA Funds has appointed new fund managers

Not yet, but I hope to publish it soon. (Sometime this year, ideally within the next few weeks.)

What are your main reservations about identifying as an effective altruist?

It's worth mentioning that during that session, we realized that some people want to keep their identity small as a general rule. For this reason, someone specifically asked the following question (paraphrased): "Who 1) has some labels ('-isms') they identify with (e.g., atheist, feminist, utilitarian) and 2) does NOT identify as an 'effective altruist'?" And in response to that particular question, around half of the people raised their hands. (I didn't count them – it might also have been just 30% or so, but definitely a significant percentage. You might think, "okay, probably those were the participants who were mainly into AI safety or rationality rather than EA," but that wasn't the case.)

Some quick notes on "effective altruism"

Thanks for clarifying – I basically agree with all of this. I particularly agree that the "government job" idea needs a lot more careful thinking and may not turn out to be as great as one might think.

I think our main disagreement might be this: I believe donating large amounts effectively requires an understanding of EA ideas and a level of altruistic dedication that only a small number of people are ever likely to develop, so I don't see the "impact through donations" route as an unusually strong argument for steering EA messaging in a particular direction or for building a very large movement. And I consider the fact that some people can have very impactful careers a pretty strong argument for emphasizing the careers angle a bit more than the donation angle (though we should keep communicating both).

(Disclaimer: Written very quickly.)

I also edited my original comment (added a paragraph at the top) to make this clearer; I think my previous comment kind of missed the point.
