Jonas Vollmer

I appreciate honest feedback:

I'm the Executive Director at EA Funds, based in Oxford. You can best reach me at

Previously, I was a co-founder and co-executive director at the London-based Center on Long-Term Risk, a research group and grantmaker focused on preventing s-risks from AI.

My background is in medicine (BMed) and economics (MSc) with a focus on public choice, health economics, and development economics. See my LinkedIn.

Unless explicitly stated otherwise, opinions are my own, not my employer's. (I think this is generally how everyone uses the EA Forum; others who don't have such a disclaimer likely think about it similarly.)


CEA's strategy as of 2021

Very cool that you decided to share this publicly, thanks!

Donating to EA funds from Germany

Yeah, what Denis wrote sounds correct to me.

Long-Term Future Fund: Ask Us Anything!

Not everyone uses Sci-Hub, and even for those who do, open access still removes trivial inconveniences. But yeah, Sci-Hub and the fact that PDFs (often preprints) are usually easy to find even when a paper isn't open access make me a bit less excited.

EA Meta Fund Grants – July 2020

I think FHF can be argued to fall within the scope of either fund. I'm sure you saw this part of the above report:

We see this as a promising meta initiative because The Future of Humanity Foundation is aiming to leverage FHI’s operations and increase its overall impact. (FHI itself also acts as a meta initiative to some degree, because it provides scholarships, promotes important ideas through popular science books, and trains early-career researchers through its Research Scholars Programme.)

I perceive this grant to be worldview-specific rather than cause-area-specific: there are several longtermist cause areas (AI safety, pandemic prevention, etc.) that FHI contributes to. Other grants (e.g., Happier Lives Institute, Charity Entrepreneurship) are also based on particular worldviews or even cause areas, so this is not unprecedented.

In general, I think it makes sense for the EA Infrastructure Fund (EAIF) to support both cause-neutral and cause-specific projects, as long as they have a meta component and the EAIF fund managers are well-placed to evaluate the projects.

I personally think it's pretty unclear what the EAIF's funding threshold and benchmark should be. The GHDF aims to beat GiveWell top charities, the AWF should match/beat OP's animal welfare grantmaking, and the LTFF aims to beat OP's last longtermist dollar, but there's no straightforward benchmark for the EAIF given that it's kind of cause-agnostic. I plan to work with the fund managers to define this more clearly going forward. Let me know if you have any ideas.

Giving What We Can & EA Funds now operate independently of CEA

I personally am very much in favor of sharing internal documents, both to increase transparency and accountability to donors, and also to help others who are running similar projects and generally advance EA discourse. So my current plan is to publish these guidelines. That said, there's some chance I end up concluding that preventing misunderstandings and responding to questions/comments is too much work (e.g., with these guidelines, I worry that people may come away thinking we're more risk-averse than we actually are), so I'm not sure whether I'll actually publish them.

2020 AI Alignment Literature Review and Charity Comparison

Depending on how you interpret this comment, the LTFF is looking for funding as well.

(Disclosure: I run EA Funds.)

Asking for advice

I've found it difficult to find a clear takeaway from this discussion. I think relevant points are here:

  1. Making each other feel respected
  2. Finding a time that actually works well for both (i.e. not overly inconvenient times)
  3. Saving time scheduling meetings

Some of the suggestions emphasize #1 at the expense of #3 (and possibly #2). E.g., if I send my Calendly and make concrete suggestions, that removes the time-saving aspects because I have to check my calendar and there's a risk of double-booking (or I have to hold the slots if I want to prevent that).

My current guess is that the following works best: Send the Calendly link, click it yourself briefly to ensure it has a reasonable amount of options in the recipient's time zone available, and tell the recipient "feel free to just suggest whichever times work best for you."

Not sure that works for those who are most skeptical/unhappy about Calendly.

Asking for advice

Would you be fine with Claire's suggestion? This one:

Curious how anti-Calendly people feel about the "include a calendly link + ask people to send timeslots if they prefer" strategy. 

Asking for advice

I personally have this tech aversion to Calendly and Doodle specifically, but not to other, similar tools that I find more user-friendly, such as When2Meet. The main reason is that I would much prefer a "week view" rather than having to click on each date to reveal the available slots. That said, Calendly is still my most preferred option for scheduling meetings.

Ask Rethink Priorities Anything (AMA)

How funding-constrained is your longtermist work? I.e., how much funding have you raised for your 2021 longtermist budget so far, how much do you expect to be able to deploy usefully, and how much are you short?
