Joel Tan🔸

Founder @ CEARCH
1604 karma · Joined


I run the Centre for Exploratory Altruism Research (CEARCH), a cause prioritization research and grantmaking organization.


CEARCH: Research Methodology & Results


Topic contributions

They're working on creating an option to make it easy for posters to add the diamond, but in the meantime you can DM the forum team (I did!) 

Hi Nicolaj,

Thanks for sharing! That's really interesting. Couple of thoughts:

(1) For us, CEARCH uses n=1 when modelling the value of income doublings, because we've tended to prioritize health interventions, where the health benefits tend to swamp the economic benefits anyway (and we've tended to prioritize health interventions because of the heuristic that NCDs are a big and growing problem which policy can cheaply combat at scale, whereas poverty, by the nature of economic growth, is declining over time).

(2) The exception is when modelling the counterfactual value of government spending, which a successful policy advocacy intervention redirects; this has to be factored in, albeit at a discount to EA spending and while taking into account country wealth (https://docs.google.com/spreadsheets/d/1io-4XboFR4BkrKXgfmZHQrlg8MA4Yo_WLZ7Hp6I9Av4/edit?gid=0#gid=0).

There, the modelling is more precise, and we use n=1.26 as a baseline estimate, per Layard, Mayraz and Nickell's review of a couple of SWB surveys (https://www.sciencedirect.com/science/article/abs/pii/S0047272708000248). Would be interested in hearing how your team arrived at n=1.87 - I presume this is a transformation of an initial n=1 based on your temporal discounts?
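To make the role of n (the elasticity of marginal utility of consumption) concrete, here's a minimal sketch - not CEARCH's actual model, and the income figures are made up for illustration - of how the value of an income doubling changes with n under standard isoelastic (CRRA) utility:

```python
import math

def isoelastic_utility(c, eta):
    """Isoelastic (CRRA) utility of consumption c; eta is the
    elasticity of marginal utility (the 'n' discussed above)."""
    if c <= 0:
        raise ValueError("consumption must be positive")
    if math.isclose(eta, 1.0):
        return math.log(c)  # eta = 1 is the log-utility special case
    return (c ** (1.0 - eta) - 1.0) / (1.0 - eta)

def value_of_doubling(c, eta):
    """Utility gain from doubling consumption from c to 2c."""
    return isoelastic_utility(2 * c, eta) - isoelastic_utility(c, eta)

# With n = 1 (log utility), every doubling is worth the same (ln 2),
# regardless of the starting income level:
assert math.isclose(value_of_doubling(1_000, 1.0), value_of_doubling(8_000, 1.0))

# With n = 1.26, a doubling at a lower income is worth more - here the
# ratio works out to 8 ** 0.26, roughly 1.7x - which is why redirected
# spending in richer countries gets discounted more heavily.
assert value_of_doubling(1_000, 1.26) > value_of_doubling(8_000, 1.26)
```

With n = 1 the answer is insensitive to baseline income, which is part of why it's a workable default when health effects dominate anyway; higher n mostly matters when comparing spending across income levels.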


It's true that people with abhorrent views in one area might have interesting or valuable things to say in other areas - Richard Hanania, for example, has made insightful criticisms of the modern American right.

However, if you platform/include people with abhorrent views (e.g. "human biodiversity", the polite euphemism for the fundamentally racist view that some racial groups have lower IQ than others - a view held by a number of Manifest speakers), you run into the following problem: the bad chases out the good.

The net effect of inviting in people with abhorrent views is that it turns off most decent people, either because they morally object to associating with such abhorrent views, or because they just don't want the controversy. You end up with a community with an even smaller percentage of decent people and a higher proportion of bigots and cranks, which in turn turns off even more decent people, and so on and so forth. Scott Alexander himself says it best in his article on witches:

The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.

At the end of the day, platforming anyone whatsoever will leave you only with people rejected by polite society, and being open to all ideas will leave you with only the crank ones.

Mathias can share more (assuming no confidentiality concerns), but talking to both him and others in the aid space - it's just brutally difficult, and politicians aren't interested.

Generally, they have a combination of the following characteristics: (a) a direct understanding of what their own grantmaking organization is doing and why, (b) deep knowledge of the object-level issue (e.g. what GHD/animal welfare/longtermist projects to fund), and (c) extensive knowledge of the overall meta landscape (e.g. what other important people/organizations there are, the background history of EA funding up to a decade back, etc.).

Hi Linch,

Thanks for engaging. I appreciate that we can have a fairly object-level disagreement over this issue; it's not personal, one way or another.

Meta point to start: We do not make any of these criticisms of EA Funds lightly, and when we do, it's against our own interests, because we ourselves are potentially dependent on EAIF for future funding.

To address the points brought up, generally in the order that you raised them:

(1) On the fundamental matter of publication. I would like to flag that, from checking the email chain plus our own conversation notes (both verbatim and cleaned-up), there was no request that this not be publicized.

For all our interviews, whenever someone flagged that X data or Y document or indeed the conversation in general shouldn't be publicized, we respected this and did not do so. In the public version of the report, this is most evident in our spreadsheet, where a whole bunch of grant details have been redacted; more generally, anyone with the "true" version of the report shared with the MCF leadership will also be able to spot differences. We also redacted all qualitative feedback from the community survey, and by default anonymized all expert interviewees who gave criticisms of large grantmakers, to protect them from backlash.

I would also note that we generally attributed views to, and discussed, "EA Leadership" in the abstract, both because we didn't want to make this a personal criticism, and also because it afforded a degree of anonymity.

At the end of the day, I apologize if the publication was not in line with what EA Funds would have wanted - I agree it's probably a difference in norms. In a professional context, I'm generally comfortable with people relaying that I said X in private, unless there was an explicit request not to share (e.g. I was talking to a UK-based donor yesterday, and I shared a bunch of my grantmaking views. If he wrote a post on the forum summarizing the conversations he had with a bunch of research organizations and donor advisory orgs, including our own, I wouldn't object). More generally, I think if we have some degree of public influence (including by the money we control) it would be difficult from the perspective of public accountability if "insiders" such as ourselves were unwilling to share with the public what we think or know.

(2) For the issue of CEA stepping in: In our previous conversation, you relayed that you asked a senior person at CEA and they in turn said that "they’re aware of some things that might make the statement technically true but misleading, and they are not aware of anything that would make the statement non-misleading, although this isn’t authoritative since many thing happened at CEA". For the record, I'm happy to remove this since the help/assistance, if any, doesn't seem too material one way or another.

(3) For whether it's fair to characterize EAIF's grant timelines as unreasonably long. As previously discussed, I think the relevant metric is EAIF's own declared timetable ("The Animal Welfare Fund, Long-Term Future Fund and EA Infrastructure Fund aim to respond to all applications in 2 months and most applications in 3 weeks."). This is because organizations and individuals make plans based on when they expect to get an answer - when to begin applying; whether to start or stop projects; whether to go find another job; whether to hire or fire; whether to reach out to another grantmaker who isn't going to support you until and unless you have already exhausted the primary avenues of potential funding.

(4) The issue of the major donor we relayed was frustrated/turned off. You flag that you're keeping tabs on all the major donors, and so don't think the person in question is major. While I agree that it's somewhat subjective, it's also true that this is a HNWI who, beyond their own giving, sits on the legal or advisory boards of many other significant grantmakers and philanthropic outfits. Also, knowledgeable EAs in the space have generally characterized this person to me as an important meta funder (in the context of my own organization then thinking of fundraising, and being advised as to whom to approach). So even if they aren't major in the sense that OP (or EA Funds) are, they could reasonably be considered fairly significant. In any case, the discussion is backwards, I think - I agree that they don't play as significant a role in the community right now (and so your assessment of them as non-major is reasonable), but that would be because of the frustration they have had with EA Funds (and, to be fair, with the EA community in general, I understand). So perhaps it's best to understand this as potentially vs currently major.

(5) On whether it's fair to characterize EA Funds leadership as being strongly dismissive of cause prioritization. We agree that grants have been made to RP; so the question is cause prioritization outside OP and OP-funded RP. Our assessment of EA Funds' general scepticism of prioritization was based, among other things, on what we reported in the previous section: "They believe cause prioritization is an area that is talent constrained, and there aren't a lot of people they feel great giving to, and it's not clear what their natural pay would be. They do not think of RP as doing cause prioritization, and though in their view RP could absorb more people/money in a moderately cost-effective way, they would consider less than half of what they do cause prioritization. In general, they don't think that other funders outside of OP need to do work on prioritization, and are in general sceptical of such work." In your comment, you dispute that the bolded part in particular is true, saying "AFAIK nobody at EA Funds believes this."

We have both verbatim and cleaned up/organized notes on this (n.b. we shared both with you privately). So it appears we have a fundamental disagreement here (and also elsewhere) as to whether what we noted down/transcribed is an accurate record of what was actually said.

TLDR: Fundamentally, I stand by the accuracy of our conversation notes.

(a) Epistemically, it's more likely that one doesn't remember what one said previously than that the interviewer (if in good faith) catastrophically misunderstood and recorded something that wholesale wasn't said at all (as opposed to making a more minor error - we agree that that can totally happen; see below).

(b) From my own personal perspective - I used to work in government and in consulting (for governments). It was standard practice to have notes of meetings, made by junior staffers and then submitted to more senior staff for edits and approval. Nothing resembling this (i.e. a total misunderstanding tantamount to fabrication, saying that XYZ was said when nothing of the sort took place) ever happened to me or anyone else.

(c) My word does not need to be taken for this. We interviewed other people, and I'm beginning to reach out to them again to check that our notes match what they said. One has already responded (the person we labelled Expert 5 on Page 34 of the report); they said "This is all broadly correct" but requested we make some minor edits to the following paragraphs (changes indicated by bold and strikethrough):

  • Expert 5: Reports both substantive and communications-related concerns about EA Funds leadership.

    For the latter, the expert reports both himself and others finding communications with EA Funds leadership difficult and the conversations confusing.

    For the substantive concerns – beyond the long wait times EAIF imposes on grantees, the expert was primarily worried that EA Funds leadership has been unreceptive to new ideas and that they are unjustifiably confident that EA Funds is fundamentally correct in its grantmaking decisions. In particular, it appears to the expert that EA Funds leadership does not believe that additional sources of meta funding would be useful **for non-EAIF grants** [phrase added] – they believe that projects unfunded by EAIF do not deserve funding at all (rather than some projects perhaps not being the right fit for the EAIF, but potentially worth funding by other funders with different ethical worldviews, risk aversion or epistemics). Critically, the expert reports that another major meta donor found EA Funds leadership frustrating to work with, and ~~so ended up disengaging~~ **this likely is one reason they ended up disengaging** [replaced] from further meta grantmaking coordination.

My even-handed interpretation of this overall situation (trying to be generous to everyone) is that what was reported here ("In general, they don't think that other funders outside of OP need to do work on prioritization") was something the EA Funds interviewee said relatively casually (not necessarily a deep and abiding view, and so not something worth remembering) - perhaps indicative of scepticism of a lot of cause prioritization work, but not literally thinking nothing outside OP/RP is worth funding. (We actually do agree with this scepticism, to an extent.)

(6) On whether our statement that “EA Funds leadership doesn't believe that there is more uncertainty now with EA Fund's funding compared to other points in time” is accurate. You say that this is clearly false. Again, I stand by the accuracy of our conversation notes. And in fact, I personally and distinctly remember this particular exchange, because it stood out, as did the exchange that immediately followed, on whether OP's use of the fund-matching mechanism creates more uncertainty.

My generous interpretation of this situation is, again, some things may be said relatively casually, but may not be indicative of deep, abiding views.

(7) For the various semantic disagreements. Some of it we discussed above (e.g. the OP cause prioritization stuff); for the rest -

On whether this part is accurate: “Leadership is of the view that the current funding landscape isn't more difficult for community builders”. Again, we do hold that this was said, based on the transcripts. And again, to be even-handed, I think your interpretation (b) is right - probably your team was thinking of the baseline as 2019, while we were thinking mainly of 2021 to now.

On whether this part is accurate: “The EA Funds chair has clarified that EAIF would only really coordinate with OP, since they're reliably around; only if the [Meta-Charity Funders] was around for some time, would EA Funds find it worth factoring into their plans.” I don't think we disagree too much, if we agree that EA Funds' position is that coordination is only worthwhile if the counterpart is around for a bit. Otherwise, it's just some subjective disagreement on what coordination is or what significant degrees of it amount to.

On this statement: “[EA Funds believes] so if EA groups struggle to raise money, it's simply because there are more compelling opportunities available instead.”

In our discussion, I asked about the community building funding landscape being worse; the interviewee disagreed with this characterization, and started discussing how it's more that standards have risen (which we agree is a factor). The issue is that the other factor - objectively less funding being available - was not brought up, even though it is, in our view, the dominant factor (and if you ask community builders, this will be all they talk about). I think our disagreement here is partly subjective - over what a bad funding landscape is, and over the right degree of emphasis to put on rising standards vs less funding.

(9) EA Funds not posting reports or having public metrics of success. Per our internal back-and-forth, we've clarified that we mean reports of success or public metrics of success. We didn't view reports on payouts as evidence of success, since payouts are a cost, not the desired end goal in itself. This contrasts with reports on output (e.g. a community building grant actually leading to increased engagement on XYZ metrics) or, much more preferably, reports on impact (e.g. those XYZ engagement metrics leading to actual money donated to GiveWell, from which we can infer that X lives were saved). Speaking for my own organization, I don't think the people funding our regranting budgets would be happy if I reported the mere spending as evidence of success.

(OVERALL) For what it's worth, I'm happy to agree to disagree, and call it a day. Both your team and mine are busy with our actual work of research/grantmaking/etc, and I'm not sure if further back and forth will be particularly productive, or a good use of my time or yours.

On (2). If you go to 80k's front page (https://80000hours.org/), there is no mention that the organization's focus is AGI or that they believe it to be the most important cause. For the other high-level pages accessible from the navigation bar, things are similarly not obvious. For example, in "Start Here", you have to read 22 paragraphs down to understand 80k's explicit prioritization of x-risk over other causes. In the "Career Guide", it's about halfway down the page. On the 1-1 advising tab, you have to go down to the FAQs at the bottom of the page, and even then it only refers to "pressing problems" and links back to the research page. And on the research page itself, the issue is that it doesn't give a sense that the organization strongly recommends AI over the rest, or that x-risk gets the lion's share of organizational resources.

I'm not trying to be nitpicky, but trying to convey that a lot of less engaged EAs (or people who are just considering impactful careers) are coming in, reading the website, and maybe browsing the job board or thinking of applying for advising - without realizing just how convinced 80k is on AGI (and correspondingly, not realizing how strongly they will be sold on AGI in advisory calls). And this may not just be less engaged EAs, depending on how you define engaged - I've been reading Singer for two decades; have been a GWWC pledger since 2014; and whenever giving to GiveWell have actually taken the time to examine their CEAs and research reports. And yet until I actually moved into direct EA work via the CE incubation program, I didn't realize how AGI-focused 80k was.

People will never get the same mistaken impression when looking at Non-Linear or Lightcone or BERI or SFF. I think part of the problem is (a) putting up a lot of causes on the problems page, which gives the reader the impression of a big tent/broad focus, and (b) having normie aesthetics (compare: longtermist websites). While I do think it's correct and valuable to do both, the downside is that without more explicit clarification (e.g. what Non-Linear does, just bluntly saying on the front page in size-40 font: "We incubate AI x-risk nonprofits by connecting founders with ideas, funding, and mentorship"), the casual reader of the website doesn't understand that 80k basically works on AGI.

Hi Gisele,

At CEARCH (https://exploratory-altruism.org/), we generally agree that combating non-communicable chronic diseases is highly cost-effective (e.g. salt reduction policies to combat high blood pressure, sugary drink taxes to combat obesity, as well as things like trans fat bans or alcohol taxes).

As part of our grantmaking work, we're on the lookout for charities/NGOs working on these issues (or more generally on advocating for health policy, and helping governments implement such policies). If you are aware of any organizations in this space, do let us know!

Hi Jamie,

For (1), I agree with 80k's approach in theory - it's just that cost-effectiveness is likely heavily driven by the cause-level impact adjustment, so you'll want to model that in a lot of detail.

For (2), I think just declaring up front what you think is the most impactful cause(s) and what you're focusing on is pretty valuable? And I suppose when people do apply/email, it's worth making that sort of caveat as well. For our own GHD grantmaking, we do try to declare on our front page that our current focus is NCD policy, and if someone approaches us raising the issue of grants, we make clear what our current grant cycle is focused on.

Hope my two cents is somewhat useful!

I think you're right in pointing out the limitations of the toy model, and I strongly agree that the trade-off is not as stark as it seems - it's more realistic to model it as a delay from applying to EA jobs before settling for a non-EA job (and this won't be a year or anything).

However, I do worry that the focus on direct work means people generally neglect donations as a path to impact, and so the practical impact of deciding to go for an EA career is that people decide not to give. An unpleasant surprise I got from talking to HIP and others in the space is that the majority of EAs probably don't actually give. Maybe it's the EA boomer in me speaking, but it's a fairly different culture compared to 10+ years ago, when being EA meant you bought into the drowning child arguments and gave 10% or more to whatever cause you thought most important.
