Jonas Vollmer

I appreciate honest feedback: https://admonymous.co/vollmer

I'm the Executive Director at EA Funds, based in Oxford. You can best reach me at jonas.vollmer@centreforeffectivealtruism.org.

Previously, I was a co-founder and co-executive director at the London-based Center on Long-Term Risk, a research group and grantmaker focused on preventing s-risks from AI.

My background is in medicine (BMed) and economics (MSc) with a focus on public choice, health economics, and development economics. See my LinkedIn.

Unless explicitly stated otherwise, opinions are my own, not my employer's. (I think this is how most people use the EA Forum; those who don't have such a disclaimer likely think about it similarly.)

Comments

EA Funds is more flexible than you might think

Some quick thoughts:

  • EA seems constrained by specific types of talent and management capacity, and the longtermist and EA meta space has a hard time spending money usefully
  • In this environment, funders need to work proactively to create new opportunities (e.g., by getting new, high-value organizations off the ground that can absorb money and hire people)
  • Proactively creating such opportunities is typically referred to as "active grantmaking"
  • I think active grantmaking benefits a lot from resource pooling, specialization, and coordination, and less from diversification
  • Likewise, in an ecosystem that's overall saturated with funding, it's quite likely that net-harmful projects receive funding; preventing this requires coordination, and diversification can be bad in such a situation
  • Given the above, I think funder coordination and specialization will have large benefits, and that the costs of funder diversification will often outweigh its benefits
  • However, I think the optimum for the EA ecosystem might still be to have 3-5 large funders instead of the status quo of 1-3 (depending on how you count them)
  • I think small and medium donors will continue to play an important role by funding projects they have local or unique information about (instead of giving to EA Funds)

(Thanks to some advisors who recently helped me think about this.)

EA Funds is more flexible than you might think

Some further, less important thoughts:

  • Some people who repeatedly didn’t get funded have been fairly vocal about that fact, creating the impression, at least among some people, that it’s really hard to get funded. I feel unhappy about this because it seems to discourage people from launching new things. The reason a proposal doesn’t get funded is usually quite specific to the project and person: the same person may get funded with a different project, or a different person may get funded for the same kind of project.
  • The absolute number of careful long-term EA funders is still low, but it has been growing over the past years. Extrapolating from that, the funding situation in EA seems likely to be excellent in the years to come.
  • I believe (and others at EA Funds agree) that novel projects often shouldn’t receive long-term funding because it’s still unclear whether they will have much of an impact. At the same time, I’m keen to ensure that a project’s staff can feel financially secure. Based on this, I suggest grantseekers ask to pay themselves generous salaries over a short time frame, so they don’t have to worry about financial security but will still strongly consider discontinuing the project early on if it doesn’t bear fruit, and we as funders should encourage them to do so.
Apply to EA Funds now

I just published this article addressing some potential misconceptions; it may help people decide whether to apply.

Missing Market: Sustainable African ETF

It seems plausibly good for the world if this existed. But for you personally, investing in AFK (or investing conventionally and donating the higher risk-adjusted returns) might be fine. See these articles:

If you wanted to make this happen, another path to success could be to find investors with sufficient interest, then approach a white-label ETF provider and get them to set up a fund; see here.
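
As a rough illustration of the "invest conventionally and donate the difference" point above, with purely hypothetical numbers (not estimates for AFK or any real portfolio): if a conventional diversified portfolio returned 6% per year risk-adjusted versus 4% for the thematic fund, then on a $100,000 position,

$$\$100{,}000 \times (6\% - 4\%) = \$2{,}000 \text{ per year}$$

would be available to donate on top of matching the thematic fund's return, which may do more good than holding the less efficient asset directly.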

Apply to EA Funds now

After looking more into this, we've decided not to evaluate applications for Community Building Grants during this grant application cycle. This is because we think CEA has a comparative advantage here due to their existing know-how, and they're still taking some exceptional or easy-to-evaluate grant applications, so some of the most valuable work will still be funded. It's currently unclear when CBG applications will reopen, but CEA is thinking carefully about this question and I'll be coordinating with them.

That said, we're interested in receiving applications from EA groups for projects that go beyond typical community-building activities – e.g., new experiments, international community-building, spin-offs of local groups, etc. If you're unsure whether your project qualifies, just send me a brief email.

I'm aware this isn't the news you and others may have been hoping for, so I personally want to contribute to resolving this gap in the funding ecosystem long-term.

Edit: Huh, some people downvoted. If you have concerns about this comment or decision, please leave a comment or send me a PM.

Why EA groups should not use “Effective Altruism” in their name.

Some further, less important points:

  • We actually care about cost-effectiveness or efficiency (i.e., impact per unit of resource input), not just about effectiveness (i.e., whether impact is non-zero). This sometimes confuses people encountering the term for the first time; see the toy comparison after this list.
  • Taking action on EA issues doesn't really require altruism. While I think it’s important that key decisions in EA are made by people with a strong moral motivation, involvement in EA should be open to a lot of people, even if they don’t strongly self-identify as altruists. Some may be mostly interested in contributing to the intellectual aspects without making large personal sacrifices.
  • The name of CEA was determined through a careful process. However, the adoption of the EA label for the entire community happened organically and wasn’t really a deliberate decision.
  • "Effective altruism" sounds more like a social movement and less like a research/policy project. The community has changed a lot over the past decade, from "a few nerds discussing philosophy on the internet" with a focus on individual action to larger and respected institutions focusing on large-scale policy change, but the name still feels reminiscent of the former.
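
To make the first bullet concrete, here is a toy comparison with made-up numbers: two charities can both be "effective" (non-zero impact) while differing 100-fold in cost-effectiveness, i.e. in impact divided by cost:

$$\frac{1{,}000 \text{ DALYs averted}}{\$50{,}000} = 0.02 \text{ DALYs per dollar} \qquad \text{vs.} \qquad \frac{1{,}000 \text{ DALYs averted}}{\$5{,}000{,}000} = 0.0002 \text{ DALYs per dollar}$$

Both charities avert real harm, but the first does 100 times more per dollar; that ratio, not merely whether impact is non-zero, is what "effective" is meant to convey.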
Why EA groups should not use “Effective Altruism” in their name.

Great points; I had been thinking along similar lines. I want to second the points about awkward translations and about many people not really knowing what "altruism" means.

Some additional thoughts:

"Effective Altruism" sounds self-congratulatory and arrogant to some people:

  • Calling yourself an "altruist" is basically claiming moral superiority, and anecdotally, my parents and some of my friends didn't like it for that reason. People tend to dislike it if others are very public with their altruism, perhaps because they perceive them as a threat to their own status (see this article, or do-gooder derogation against vegetarians). Other communities and philosophies (e.g., environmentalism, feminism, consequentialism, atheism, neoliberalism, longtermism) don't sound as arrogant to me in this way.
  • Similarly, calling yourself "effective" also has an arrogant vibe, perhaps especially among professionals. E.g., during the Zurich ballot initiative, officials at the city of Zurich asked me, unprompted, why I consider them "ineffective", indicating that to them the EA label basically implied they were doing a bad job. I've also heard other professionals in different contexts react similarly. Sometimes I also get sarcastic "aaaah, you're the effective ones, you figured it all out, I see" reactions.

"Effective altruism" sounds like a strong identity:

  • Many people want to keep their identity small, but EA sounds like a particularly strong identity: it's usually perceived as a moral commitment, a set of ideas, and a community all at once. By contrast, terms like "longtermism" are somewhat weaker and more about the ideas per se.
  • Perhaps partly because of this, at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists", despite self-identifying as, e.g., feminists, utilitarians, or atheists. I don't think the terminology was the primary concern for everyone, but it may have played a role for several of them.
  • In general, it feels weirdly difficult to separate agreement with EA ideas from the EA identity. The way we use the term, being an EA or not is often framed as a binary choice, and it can be unclear whether "being an EA" means identifying as part of the community or agreeing with its ideas.

Some thoughts on potential implications:

  • These concerns don't just affect EA groups. The longer-term goal is for the EA community to attract highly skilled students, academics, professionals, policy-makers, etc., and the EA brand might plausibly be unattractive to some of these people. If that's true, the EA brand might act as a cap on EA's long-term growth potential, so we should perhaps aim to de-emphasize it, or at least do some marketing research on whether this is indeed an issue.
  • EA organizations that have "effective altruism" in their name or make it a key part of their messaging might want to consider de-emphasizing the EA brand, and instead emphasize the specific ideas and causes more. I personally feel interested in rebranding "EA Funds" (which I run) to some other name partly for these reasons.
  • I personally would feel excited about rebranding "effective altruism" to a less ideological and more ideas-oriented brand (e.g., "global priorities community", or simply "priorities community"), but I realize that others probably wouldn't agree with me on this, that it would be a costly change, and that it may no longer be feasible at this point. OTOH, given that the community might grow much bigger than it currently is, it's perhaps worth making the change now? I'd love to be proven wrong, of course.

Thanks to Stefan Torges and Tobias Pulver for prompting some of the above thoughts and helping me think about them in more detail.

Everyday Longtermism

It might be interesting to compare that to everyday environmentalism or everyday antispeciesism. EAs have already thought about these areas a fair bit and have said interesting things about them in the past.

In both of these areas, the following seems to be the case:

  1. donating to effective nonprofits is probably the best way to help at this point, 
  2. some other actions look pretty good (avoiding unnecessary intercontinental flights and fuel-inefficient cars, eating a plant-based diet), 
  3. other actions make a negligibly small difference per unit of cost (unplugging your phone charger when you're not using it, avoiding animal-based food additives), 
  4. there are some harder-to-quantify aspects that could be very good or not (activism, advocacy, etc.),
  5. there are some virtues that seem helpful for longer-term, lasting change (becoming more aware of how products you consume are made and what the moral cost is, learning to see animals as individuals with lives worth protecting).

EAs are already thinking a lot about optimizing #1 by default, so perhaps the project of "everyday longtermism" could be about exploring whether actions fall under #2, #3, or #4 (and what to do about #4), and what the virtues corresponding to #5 might look like.

Interview with Tom Chivers: “AI is a plausible existential risk, but it feels as if I’m in Pascal’s mugging”

I think this post uses the term "Pascal's mugging" incorrectly, and I've seen this mistake frequently, so I thought I'd leave a comment.

Pascal's mugging refers to scenarios with tiny probabilities (less than 1 in a trillion or so) of vast utilities (potentially larger than the largest utopia/dystopia that could be achieved in the reachable universe), and it presents a decision-theoretic problem. There is some discussion in Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? and Pascal's Muggle: Infinitesimal Priors and Strong Evidence. Quoting from the first of those pieces:

Yet it would also be naive to say things like “Long-termists are victims of Pascal’s Mugging.”

I think the correct term for the issue you're describing might be something like "cause robustness" or "conjunctive arguments" or similar.
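
To make the distinction concrete, here is a toy expected-value calculation with made-up numbers (these aren't estimates from the post or the linked papers). A mugging-style scenario pairs an astronomically small probability with an astronomically large payoff:

$$\mathbb{E}[U] = p \cdot U = 10^{-15} \times 10^{30} \text{ lives} = 10^{15} \text{ lives}$$

The naive expected value remains enormous despite the tiny probability, which is exactly what creates the decision-theoretic problem. By contrast, calling AI a "plausible" existential risk implies a probability many orders of magnitude above the 1-in-a-trillion range, so the mugging framing doesn't fit.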

Apply to EA Funds now

That's a great suggestion, thank you. It will take me a few days to figure this out, so I expect to reply in a week or so. (Edited Sat 27 Feb: Still need a bit longer, sorry.)
