Bio

I currently lead EA Funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.

Posts: 28 · Comments: 339 · Topic contributions: 6

Comments

Answer by calebp

Hi Markus,

For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees, though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).

We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:

  • the name of the application (from previous funds email subject lines),
  • the reason the request is urgent,
  • the latest decision and payout dates that would work for you, such that if we can't make those dates there is little reason to make the grant.

You can also apply to one of Open Phil's programs; in particular, Open Philanthropy's program for grantees affected by the collapse of the FTX Future Fund may be of particular note to people applying to EA Funds because of the FTX crash.

I don’t see a lot of technical safety people engaging in advocacy, either? It’s not like they tried advocacy first and then decided on technical safety. Maybe you should question their epistemology.

My impression is that so far most of the impactful "public advocacy" work has been done by "technical safety" people. Some notable examples include Yoshua Bengio, Dan Hendrycks, Ian Hogarth, and Geoffrey Hinton.

If the survey had framed the same questions in multiple ways for higher reliability, or had some kind of consistency checking*, I would be more confident that respondents endorsed their numbers. I'm not necessarily saying this is a good trade to make, as it would increase the length of the survey.

*e.g., asking separately in different parts of the survey about the impact of:
  • Animal welfare $ / Global health $
  • Global health $ / AI $
  • Animal welfare $ / AI $

…and then checking if the responses are consistent across all sections.
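
As a rough illustration (this sketch is mine, not from the survey; the function name and numbers are hypothetical), the check amounts to comparing the directly reported Animal welfare $ / AI $ ratio with the one implied by multiplying the other two answers:

```python
# Hypothetical sketch of the consistency check described above: the ratio a
# respondent reports for Animal welfare $ / AI $ should roughly equal the
# product of their Animal welfare $ / Global health $ and Global health $ / AI $ answers.
def consistency_gap(aw_per_gh: float, gh_per_ai: float, aw_per_ai: float) -> float:
    """Relative gap between the reported AW/AI ratio and the one implied by chaining."""
    implied_aw_per_ai = aw_per_gh * gh_per_ai  # (AW/GH) * (GH/AI) = AW/AI
    return abs(implied_aw_per_ai - aw_per_ai) / aw_per_ai

# Example: answers of 3x, 5x, and 10x imply a chained ratio of 15x against a reported 10x,
# a 50% gap that might be worth flagging or following up on.
print(consistency_gap(3.0, 5.0, 10.0))  # 0.5
```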

One idea I've had to try and resolve this issue for donors is to have all private grants audited by a trusted animal welfare person who doesn't work on the fund (e.g. Lewis Bollard) and commit to publishing their comments in payout reports. I think they'd be able to say things like "I agree that the private grants should be kept private and on average they were about as cost-effective as the public grants".

I'll take <agree> <disagree> votes to indicate how compelling this would be to readers.

Firstly, I'm sorry that you feel inadequate compared to people on the EA Forum or at EAGs. I think EA is a pretty weird community and it's totally reasonable for people to not feel like it's for them and instead try and do an ambitious amount of good outside the community.

I think this is somewhat orthogonal to feelings of rejection, or to the broader point you are making about the higher impact potential of larger communities, but I've personally felt that, whilst EA seems to "care more" about people who are particularly smart, hardworking, and altruistic, it does a good job of giving people from various backgrounds an opportunity to participate, even if it's differentially easier if you went to a top university.

For example, I think that if someone with little or no reputation posted a few articles on important topics in fish welfare to the EA Forum, at the quality of Rethink Priorities' top 10%, they'd gain a lot of career capital and would almost overnight be on various organisations' radars as someone to consider hiring (or at least be competitive in various application processes). I think that story is probably more true for AI safety. Contrast this with hiring at various hedge funds and consultancies, which can be really hard to break into if you didn't go to a small set of universities.

Thanks for the flag. We have had some turnover recently; I'll ask our dev to update the site!

The main difference in actions so far is that the ARM Fund has focussed on active grantmaking (e.g. in AI x information security fieldbuilding). In contrast, the LTFF has a more democratic and passive grantmaking focus. I also don't think the ARM Fund has reached product-market fit yet; it's done a few things reasonably well, but I don't think it has a scalable product (unless we decide to do a lot more active grantmaking, though so far that has been more opportunistic).

This fund was spun out of the Long-Term Future Fund (LTFF), which makes grants aiming to reduce existential risk. Over the last five years, the LTFF has made hundreds of grants, specifically in AI risk mitigation, totalling over $20 million. Our team includes AI safety researchers, expert forecasters, policy researchers, and experienced grantmakers. We are advised by staff from frontier labs, AI safety nonprofits, leading think tanks, and others.

More recently, the ARM Fund has been doing active grantmaking in AIS areas; we'll likely write more about this soon. I expect the funds to become much more differentiated in staff over the next few months (though that's not a commitment). Longer term, I'd like them to be pretty separate entities, but for now they share roughly the same staff.

If you perceive any sort of downside from it, you can always remove it again.

Aren't most of the downsides and upsides of norms hard to reverse (almost by definition)? Maybe you don't think the upside is in getting other people to also participate in using the signal, but my read of the OP is that this is mostly about creating norms.
