Jonas Vollmer

Together with Ashley and Sydney, I'm co-founding the Atlas Fellowship, a program that experiments with scholarships, camps, and online content for high schoolers in the US, India, and elsewhere.

Previously, I ran EA Funds and the Center on Long-Term Risk. My background is in medicine (BMed) and economics (MSc). See my LinkedIn.

You can best reach me at jonas.vollmer@centreforeffectivealtruism.org.

I appreciate honest and direct feedback: https://admonymous.co/vollmer

Unless explicitly stated otherwise, opinions are my own, not my employer's. (I think this is generally how everyone uses the EA Forum; others who don't have such a disclaimer likely think about it similarly.)

Comments

EA is more than longtermism

The EAIF funds many of the things you listed, and Peter Wildeford has been especially interested in making them happen! Also, the Open Phil GHW team is expanding a lot and has been making several excellent grants in these areas.

That said, I agree with the overall sentiment you expressed and definitely think there's something there.

One effect is also that there's not much proactive encouragement to apply for funding for neartermist projects. That results in fewer things getting funded, which in turn leads people to assume there's no funding, even though funders are sometimes quite open to funding the kinds of things you mention.

I do think there are opportunities that GiveWell is missing, but then again I've found it hard to find grantmakers who would actually do better than them.

How many EAs failed in high risk, high reward projects?

I think some of the worst failures are mediocre projects that go sort-of okay and therefore continue to eat up talent for a much longer time than needed; cases where ambitious projects fail to "fail fast". It takes a lot of judgment and self-honesty to recognize that such a project is a failure relative to what one could have worked on otherwise.

One example is Raising for Effective Giving, a poker fundraising project that I helped found and run. It showed a lot of promise in terms of dollars raised per dollar spent over the years it was operating, and it actually raised $25 million for EA charities. But it looks a lot less high-impact once you draw comparisons to GWWC and Longview, or once you account for the small market size of the poker industry, the lack of scalability, the expected future funding inflows into EA, and the compensation available in top earning-to-give opportunities. $25 million is really not much compared to the billions others have raised through billionaire fundraising and entrepreneurship.

I personally failed to admit to myself that the project's results were mediocre rather than amazing, and it was only my successor (Stefan) who discontinued the project, which in hindsight seems like the correct judgment call.

EA and the current funding situation

.

[This comment is no longer endorsed by its author]
EA and the current funding situation

Regarding "Harming quality of thought", my main worry is a more subtle one:

It's not that people might end up with different priorities than they would otherwise have, but that they might end up with the same priorities but with worse reasoning.

I.e., before there was a lot of funding, they thought: "Oh, I should really think about what to work on. After thinking about it really carefully, X seems most important."

Now they think: "Oh, X seems important, and it's also what I'll get funded for, so I'll look into that first. After looking into it, I agree with funders that this seems most important."

The X is still the same, and their conclusions are still the same. But their reasoning about X has become worse because they investigated important claims less thoroughly.

Doing good easier: how to have passive impact

I think the "passive impact" framing encourages us too much to start lots of things and delegate/automate them. I prefer "maximize (active or passive) impact (e.g. by building a massively scalable organization)". This includes the strategy "build a really excellent org and obsessively keep working on it until it's amazing", which doesn't pattern-match "passive impact" and seems superior to me because a lot of the impact is often unlocked in the tail-end scenarios.

You might argue that excellent orgs often rely on a great deal of delegation and automation, and I would wholeheartedly agree with that. But I think the "passive impact" framing tends to encourage a thinking pattern that's less like "building massively scalable systems" and more like "quickly automate something", and I think that's worse.

The Wicked Problem Experience

Here's a provocative take on your experience that I don't really endorse, but I'd be interested in hearing your reaction to:

Finding unusually cost-effective global health charities isn't actually a wicked problem. You just look into the existing literature on global health prioritization, apply a bunch of quick heuristics to find the top interventions, find charities implementing them, and then see which ones will get more done with more funding. In fact, Giving What We Can independently started recommending the Against Malaria Foundation through a process that was much faster than the above. Peter Singer also came up with donation recommendations that seem not much worse than current GiveWell top recommendations based on fairly limited research.

In response to such a comment, I might say that GiveWell actually had much more reason than GWWC did to think AMF was indeed one of the most cost-effective charities, that Peter Singer's recommendations were good but substantially less cost-effective (and that the improvement is clearly worth it), and that the above illustration of the wicked problem experience is useful because it applies more strongly in other areas (e.g. AI forecasting). But I'm curious about your response.

FTX/CEA - show us your numbers!

Personally, I think going for something like 50k doesn't make sense, as I expect that the 5k (or even 500) most engaged people will have a much higher impact than the others.

Also, my guess about how CEA/FTX are thinking about this is that they actually assume an even smaller number (perhaps 2k or so?), because they're aiming for highly engaged people and don't pay as much attention to how many less engaged people they're bringing in.

FTX/CEA - show us your numbers!

Yeah, I fully agree with this; that's partly why I wrote "gestures". I probably should have flagged it more explicitly from the beginning.

EA Houses: Live or Stay with EAs Around The World

Are you curating the spreadsheet in any way? In particular, do you have a mechanism for removing entries submitted by people who have made unwanted sexual advances in the past or who otherwise have a track record of not respecting community members' boundaries?
