Forgive the clickbait title, but EA is as prone to clickbait as anywhere else.

It seemed at EAG that discussions focussed on two continuums:

Neartermist <---> Longtermist

Frugal spending <---> Ambitious spending

(The labels for the second one are debatable but I'm casually aiming for ones that won't offend either camp.)

Finding common ground on the first has been an ongoing project for years.

The second is much more recent, and it seems like more transparency could really help to bring people on opposite sides closer together.

Accordingly: could FTX and CEA please publish the Back Of The Envelope Calculations (BOTECs) behind their recent grants and community building spending?

(Or, if there is no BOTEC and it's more "this seems plausibly good and we have enough money to throw spaghetti at the wall", please say that clearly and publicly.)

This would help in several ways:

  1. For sceptics of some recent spending, it would illuminate the thinking behind it. It would also let the community kick the tires on the assumptions and see how plausible they are. This could change some sceptics' minds, and potentially improve the BOTECs/thinking.
  2. It should help combat misinformation. I heard several people misrepresent (in good faith) some grants, because there is no clear public explanation of the grants' theory of change and expected value. A shared set of facts would be useful and would improve debate.
  3. It will set the stage for future evaluation of whether or not this thinking was accurate. Unless we make predictions about spending now, it'll be hard to see later whether our predictions were well calibrated.

Objection: this is time consuming, and this time is better spent making more grants/doing something else

Reply: possibly true, and maybe you could have a threshold below which you don't do this, but these things have a much higher than average chance of doing harm. Most mistaken grants will just fail. These grants carry reputational and epistemic risks to EA. The dominant theme of my discussions at EAG was some combination of anxiety and scorn about recent spending. If this is too time-consuming for the current FTX advisers, hire some staff (Open Phil has ~50 for a similar grant pot and believes it'll expand to ~100).

Objection: why drag CEA into this?

[EDIT: I missed an update on this last week and now the stakes seem much lower - but thanks to Jessica and Max for engaging with this productively anyway: https://forum.effectivealtruism.org/posts/xTWhXX9HJfKmvpQZi/cea-is-discontinuing-its-focus-university-programming]

Reply: anecdata, and I could be persuaded that this was a mistake. Several students, all of whom asked not to be named because of the risk of repercussions, expressed something between anxiety and scorn about the money their own student groups had been sent. One said they told CEA they didn't need any money and were sent $5k anyway and told to spend it on dinners. (Someone from CEA please jump in if this is just false, or extremely unlikely, or similar - I do realise I'm publishing anonymous hearsay.) It'd be good to know how CEA is thinking about spending wisely as they are very rapidly increasing their spending on EA Groups (potentially to ~$50m/year).

Sidenote: I think we have massively taken Open Phil for granted, who are exceptionally transparent and thoughtful about their grant process. Well done them.

Comments (110)

Hi Jack,

Just a quick response on CEA's groups team's end.

We are processing many small grants and other forms of support for community building (CB), and we do not have the capacity to publish BOTECs on all of them.

However, I can give some brief heuristics that we use in the decision-making.

Institutions like Facebook, McKinsey, and Goldman spend ~$1 million per school per year at the institutions they recruit from, trying to pull students into lucrative careers that probably have at best a neutral impact on the world. We would love for these students to instead focus on solving the world's biggest and most important problems.

Based on the current amount available in EA, its projected growth, and the value of getting people working in EA careers, we currently think that spending at least as much as McKinsey does on recruiting pencils out in expected-value terms over the course of a student's career. There are other factors to consider here (e.g. double-counting some expenses) that mean we actually spend significantly less than this. However, as Thomas said, even small chances that dinners could have an effect on career changes make them seem like effective uses of money. (We do have a fair a... (read more)

Hi Jessica, 

Thanks for outlining your reasoning here, and I'm really excited about the progress EA groups are making around the world. 

I could easily be missing something here, but why are we comparing the value of CEA's community building grants to the value of McKinsey etc.?

Isn't the relevant comparison CEA's community building grants vs other EA spending, for example GiveWell's marginally funded programs (around 5x the cost-effectiveness of cash transfers)? 

If CEA is getting funding from non-EA sources, however, this query would be irrelevant. 

Looking forward to hearing your thoughts :) 

I'm obviously not speaking for Jessica here, but I think the reason the comparison is relevant is that the high spend by Goldman etc. suggests that spending a lot on recruitment at unis is effective.

If this is the case - which I think is also supported by the success of well-funded groups with full- or part-time organisers - and EA is in an adversarial relationship with these large firms, which I think is largely true, then it makes sense for EA to spend similar amounts of money trying to attract students.

The relevant comparison is then the value of the marginal student recruited vs. malaria nets etc.

Lucas Lewit-Mendes (2y):
Thanks Nathan, that would make a lot of sense, and it motivates the conversation about whether CEA can realistically attract as many people through advertising as Goldman etc. I guess the question is then whether: a) Goldman's activities are actually effective at attracting students; and b) this is a relevant baseline prior for the types of activities that local EA groups undertake with CEA's funding (e.g. dinners for EA scholars students).

| Just a quick response on CEA's groups team's end.
| ...
| Institutions like Facebook, McKinsey, and Goldman spend ~$1 million per school per year at the institutions they recruit from, trying to pull students into lucrative careers that probably have at best a neutral impact on the world.

I'm surprised to see CEA making such a strong claim. I think we should have strong priors against this stance, and I don't think I've seen CEA publish conclusive evidence in the opposite direction.

Firstly, note that these three companies come from very different sectors of the economy and do very different things. 

Secondly, even if you assign high credence to the problems with these firms, it seems like there is a fair bit of uncertainty in each case, and you are proposing a quite harsh upper bound - 'probably at best neutral'.

Thirdly, each of these are (broadly) free market firms, who exist only because they are able to persuade people to continue using their services. It's always possible that they are systematically mistaken, and that CEA really does understand social network advertising, management consulting, trading and banking better than these customers... but I think our prior should be a... (read more)

nonn (2y):

Curious if you disagree with Jessica's key claim, which is "McKinsey << EA for impact"? I agree Jessica is overstating the case for "McKinsey <= 0", but seems like best-case for McKinsey is still order(s) of magnitude less impact than EA.

Subpoints:

  • Current market incentives don't address large risk-externalities well, or appropriately weight the well-being of very poor people, animals, or the entire future.
  • McKinsey for earn-to-learn/give could theoretically be justified, but that doesn't contradict Jessica's point of spending money to get EAs
  • Most students require a justification for a charity spending significant amounts of money on movement building, and 'competing with McKinsey' reads favorably

Agree we should usually avoid saying poorly-justified things when it's not a necessary feature of the argument, as it could turn off smart people who would otherwise agree.

Sorry, I was trying to get a quick response to this post and I made a stronger claim than I intended. I was trying to say that I think that EA careers are doing much more good than the ones mentioned on average and so spending money is a good bet here. I wasn’t intending to make a definitive judgment about the overall social impact of those other careers, though I know my wording suggests that. I also generally want to note that this element was a personal claim and not necessarily a CEA endorsed one. 

Charles He (2y):
This was a great comment and thoughtful reply and the top comment was great too. Looking at the other threads generated from the top comment, it looks like tiny turns of phrase in that top comment, produced (unreasonably) large amounts of discussion. I think we all learned a valuable lesson about the importance of clarity and precision when commenting on the EA forum.
Jeff Kaufman (2y):
FYI I would have upvoted this if not for the final paragraph

| Thirdly, each of these are (broadly) free market firms, who exist only because they are able to persuade people to continue using their services. It's always possible that they are systematically mistaken, and that CEA really does understand social network advertising, management consulting, trading and banking better than these customers... but I think our prior should be a little more modest than this. Usually when people want to buy something it is because they want that thing and think it will be useful for them.

I consider this to be a pretty weak argument, so it doesn't contribute much to my priors, which although weak (and so the particulars of a company matter much more), are probably centered near neutral on net welfare effects (in the short to medium term). I think a large share of goods people buy and things they do are harmful to themselves or others before even considering the loss of income/time as a result, or worse for them than the things they compete with. It's enough that I wouldn't have a prior strongly in favour of what profitable companies are doing being good for us. Here are reasons pushing towards neutral or negative impacts:

  1. A lot of goods are mostly for sig
... (read more)
Guy Raveh (2y):
There are also a lot of externalities that act at least equally on humans, like carbon emissions, promotion of ethnic violence, or erosion of privacy. Those are all examples off the top of my head for Facebook specifically. I upvoted Larks' comment, but like you I think this particular argument, "people buy from these firms", is weak.

Ok. Larks's response seems correct.

But surely, the spirit of the original comment is correct too.

No matter which worldview you have, the value of a top leader moving into EA is overwhelmingly larger than the social value of the same leader “rowing” in these companies.

Also, at the risk of getting into politics (and really your standard internet argument), gesturing at “free market” is really complicated. You don't need to take the view of Matt Stoller or something to notice that the benefits of these companies can be provided by other actors. The success of these companies, and the resources that allow recruitment with 7-figure campus centres, probably has a root source different than pure social value.

The implication that this statement requires CEA to have a strong model of these companies seems unfair. Several senior EAs, who we won’t consider activists or ideological, have deep experiences in these or similar companies. They have opinions that are consistent with the parent comment’s statement. (Being too explicit here has downsides.)

I think the main crux here is that even if Jessica/CEA agrees that the sign of the impact is positive, it still falls in the neutral bracket because on the CEA worldview the impact is roughly negligible relative to the programs that they are excited about. 

If you disagree with this, maybe you agree with the weaker claim of the impact being comparatively negligible weighted by the resources these companies consume? (There's some nuance to 'consuming resources' in profitable companies, but I guess this is more gesturing at a 'leaving value on the table' framing, as opposed to just asking whether the organisation is locally net negative or positive.)

Do you think people are better off overall than otherwise because of Facebook (and social media generally)? You may have made important connections on Facebook, but many people probably invest less in each connection and have shallower relationships because of social media, and my guess is that mental health is generally worse because of social media (I think there was an RCT on getting people to quit social media, and I wouldn't be surprised if there were multiple studies. I don't have them offhand). I'd guess social media is basically addictive for a lot of people, so people often aren't making well-informed decisions about how much to use, and it's easy for it to be net negative despite widespread use. People joining social media pressures others to join, too, making it more costly to not be on it, so FB creates a problem (induces fear of missing out) and offers a solution to it. Cancel culture, bubbles/echo chambers, the spread of misinformation, and polarization may also be aggravated by social media.

That being said, maybe FB was really important for the growth of the EA community. I mostly got into EA through FB initially, although it's not where I was first exposed to EA. If... (read more)

Linch (2y):
I don't think this is persuasive. I think most actions people take either increase or decrease x-risk, and you should start with a ~50% prior for which side of neutrality a specific action is on (though this is not clearly true; see discussion here). I agree there are some commonsensical notions that economic growth is good, including for the LT future, but I personally find arguments in the opposite direction to be slightly stronger. Your own comment on an earlier post is one interesting item on the list of arguments I'd muster in that direction.
Larks (2y):
Ahh, interesting argument! I wasn't thinking about the argument that these firms might (e.g.) slightly accelerate economic growth, which might then cause an increase in x-risk (if safety is not equivalently accelerated). In general I feel sufficiently unclear about such considerations - like maybe literally 50:50 equipoise is a reasonable prior - that I am loath to let them overwhelm a more concrete short-term impact story in our cost-benefit analysis, in the absence of a clear causal link to a long run impact in the opposite direction, as you suggest in the article. In this case I think my argument still goes through, because the claim I'm objecting to is so strong - that there is in some sense a >50% probability that every reasonable scenario has all three firms being negative.

Thanks Jessica, this is helpful, and I really appreciate the speed at which you replied.

A couple of things that might be quick to answer and also helpful:

  • is there an expected value of someone working in an EA career that CEA uses? The rationale above suggests something like 'we want to spend as much as top tier employers' but presumably this relates to an expected value of attracting top talent that would otherwise work at those firms?
  • I agree that it's not feasible to produce, let alone publish, a BOTEC on every payout. However, is there a bar that you're aiming to exceed for the manager of a group to agree to a spending request? Or a threshold where you'd want more consideration about granting funding? I'm sure there are examples of things you wouldn't fund, or would see as very expensive and would have some rule-of-thumb for agreeing to (off-site residential retreats might be one). Or is it more 'this seems within the range of things that might help, and we haven't spent >$1m on this school yet?'
  • is there any counterfactual discounting? Obviously a lot of very talented people work in EA and/or have left jobs at the employers you mention to work in EA. So what's the thinking on how this spending will improve the talent in EA?
  • Some non-CEA people have made estimates that we sometimes refer to. I'm not sure I have permission to share them, but they suggest significant value. Based in part on these figures, I think that the value of a counterfactual high-performing EA is in the tens of millions of dollars.
    • I think we should also expect higher willingness to pay than private firms because of the general money/people balance in the community, and because we care about their whole career (whereas BCG will in expectation only get about 4 years of their career (number made up)).
  • I'll let Jessica answer with more specifics if she wants to, but we're currently spending much less than $1m/school.
  • Yes, it's obviously important that figures are counterfactually discounted. But groups seem to have historically been counterfactually important to people (see OP's survey), and we think it's likely that they will be in the future too. Given the high value of additional top people, I think spending like this still looks pretty good.

Overall, CEA is planning to spend ~$1.5mil on uni group support in 2022 across ~75 campuses, which is a lot less than $1mil/campus. :)

Fwiw, I personally would be excited about CEA spending much more on this at their current level of certainty if there were ways to mitigate optics, community health, and tail risk issues.

Jack Lewars (2y):
Indeed :-) I had understood from this post (https://forum.effectivealtruism.org/posts/FjDpyJNnzK8teSu4J/) that this was the destination, though, so the current rate of spending would be less relevant than having good heuristics before we get to that scale. I see from Max below, though, that Open Phil is assuming a lot of this spending, so sorry for throwing a grenade at CEA if you're not actually going to be behind a really 'move the needle' amount of campus spending.

Just as a casual observation, I would much rather hire someone who had done a couple of years at McKinsey than someone coming straight out of undergrad with no work experience. So I'm not sure that diverting talented EAs from McKinsey (or similar) is necessarily best in the long run for expected impact. No EA organization can compete with the ability of McK to train up a new hire with a wide array of generally useful skills in a short amount of time. 

Nathan_Barnard (2y):
I think the key point here is that it is unusually easy to recruit EAs at uni compared to when they're at McKinsey. I think it's unclear a) whether going to McKinsey is among the best things for a student to do and b) how much less likely it is that an EA student goes to McKinsey. I think it's pretty unlikely that going to McKinsey is the best thing to do, but I also think that EA student groups have a relatively small effect on how often students go into elite corporate jobs (a bad thing from my perspective), at least in software engineering.

I'm not sure how clear it is that it's much better for people to hear about EA at university, especially given there is a lot more outreach and onboarding at the university level than for professionals.

Guy Raveh (2y):
Hi, thanks for your comment. While it's reasonable not to be able to provide an impact estimate for every specific small grant, I think there are some other things that could increase transparency and accountability, for example:
  • Publishing your general reasoning and heuristics explicitly on the CEA website.
  • Publishing a list of grants, updated with some frequency.
  • Giving some statistics on which sums went to what type of activities - again, updated once in a while.
MaxRa (2y):
That's really interesting to me because I'm currently thinking about potential recruitment efforts at CS departments for AI safety roles. I couldn't immediately find a source for the numbers you mention, do you remember where you got them from?
Andrea_Miotti (2y):
I also couldn't find much information on campus recruitment expenses for top firms. However, according to the US National Association of Colleges and Employers (NACE), in 2018 the average cost-per-hire from US universities was $6,110. FAANG and other top-tier employers are likely to spend much more than the average.
Charles He (2y):
For each of the companies, if you look at the publicly available website for the campus recruiting centre at one of the HYPS schools and just look at the roster of public-facing “ambassadors” (who have significant skills and earning counterfactuals, so the fully burdened cost may be over $200K per head), it's clear it's a 7-figure budget for them once you include operations, physical offices, management and other oversight (which won't appear on the P&L per se). $1 mil is the low end. I can't immediately pull up a link here as I am on mobile.
[comment deleted] (2y)

Good to see a post that loosely captures my own experience of EAG London and comes up with a concrete idea for something to do about the problem (if a little emotionally presented).

I don't have a strong view on the ideal level of transparency/communication here, but something I want to highlight is: Moving too slowly and cautiously is also a failure mode

In other words, I want to emphasise how important "this is time consuming, and this time is better spent making more grants/doing something else" can be. Moving fast and breaking things tends to lead to much more obvious, salient problems and so generally attracts a lot more criticism. On the other hand, "Ideally, they should have deployed faster" is not a headline. But if you're as consequentialist as the typical EA is, you should be ~equally worried about not spending money fast enough. Sometimes to help make this failure mode more salient, I imagine a group of chickens in a factory farm just sitting around in agony waiting for us all to get our act together (not the most relevant example in this case, but the idea is try to counteract the salience bias associated with the problems around moving fast). Maybe the best way fo... (read more)

Thanks so much for this comment. I find it incredibly hard not to be unwarrantedly risk averse. It feels really tempting to focus on avoiding doing any harm, rather than actually helping people as much as I can. This is such an eloquent articulation of the urgency we face, and why we need to keep pushing ourselves to move faster. 

I think this is going to be useful for me to read periodically in the future - I'm going to bookmark it for myself.

A related thought: If an org is willing to delay spending (say) $500M/year due to reputational/epistemic concerns, then it should easily be willing to pay $50M to hire top PR experts to figure out the reputational effects of spending at different rates.

(I think delays in spending by big orgs are mostly due to uncertainty about where to donate, not about PR. But off the cuff, I suspect that EA orgs spend less than the optimal amount on strategic PR (as opposed to "un-strategic PR", e.g., doing whatever the CEO's gut says is best for PR).)

Jack Lewars (2y):
I like this. I'm not sure I agree with you that I find it equally worrying as moving so fast that we break too many things, but it's a good point to raise.

On a practical level, I partly wrote this because FTX is likely to have a lull after their first grant round where they could invest in transparency.

I also think a concern is what seems to be such an enormous double standard. The argument above could easily be used to justify spending aggressively in global health or animal welfare (where, notably, we have already done a serious, serious amount of research and found amazing donation options; and, as you point out, the need is acute and immediate). Instead, it seems like it might be 'don't spend money on anything below 5x GiveDirectly' in one area, and the spaghetti-wall approach in another.

Out of interest, did you read the post as emotional? I was aiming for brevity and directness but didn't/don't feel emotional about it. Kind of the opposite, actually - I feel like this could help to make us more factually aligned and less driven by emotional reactions to things that might seem like 'boondoggles'.

Yeah personally speaking, I don't have very developed views on when to go with Spaghetti-wall vs RCT, so feel free to ignore the following which is more of a personal story. I'd guess there's a bunch of 'Giving Now vs Giving Later' content lying around that's much more relevant.

I think I used to be a lot more RCT because:

  1. I was first motivated to take cost-effectiveness research seriously after hearing the Giving What We Can framing of "this data already exists, it's just that it's aimed at the health departments of LMICs rather than philanthropists" - that's some mad low-hanging fruit right there (OTOH I seem to remember a bunch of friends wrestling with whether to fund Animal Charity Evaluators or ACE's current best guesses - was existing cost-effectiveness research enough to go on yet?)
  2. I was basically a student trying to change the world with a bunch of other students - surely the grown-ups mostly know what they're doing and I should only expect to have better heuristics if there's a ton of evidence behind them
  3. My personality is very risk-averse

Over time, however:

  1. I became more longtermist and there's no GiveWell for longtermism
  2. We grew up, and basically the more I saw of the rest o
... (read more)

| Out of interest, did you read the post as emotional? I was aiming for brevity and directness

Ah, that might be it. I was reading the demanding/requesting tone ("show us your numbers!", "could FTX and CEA please publish", and "If this is too time-consuming...hire some staff" vs "Here's an idea/proposal") as emotional, but I can see how you were just going for brevity/directness, which I generally endorse (and I have empathy for emotional writing FWIW, but generally don't feel like I should endorse it as such).

It's bugged me for a while that EA has ~13 years of community building efforts but (AFAIK) not much by way of "strong" evidence of the impact of various types of community building / outreach, in particular local/student groups. I'd like to see more by way of baking self-evaluation into the design of community building efforts, and think we'd be in a much better epistemic place if this had been at the forefront of efforts to professionalise community building 5+ years ago.

By "strong" I mean a serious attempt at causal evaluation using experimental or quasi-experimental methods - i.e. not necessarily RCTs where these aren't practical (though it would be great to see some of these where they are!), but some sort of "difference in difference" style analysis, or before-after comparisons. For example, how do groups' key performance stats (e.g. EA's 'produced', donors, money moved, people going on to EA jobs) compare in the year(s) before vs after getting a full/part time salaried group organiser? Possibly some of this already exists either privately or publicly and the relevant people know where to look (I haven't looked hard, sorry!). E.g. I remember GWWC putting together a fu... (read more)

I'd personally be pretty excited to see well-run analyses of this type, and would be excited for you or anyone who upvoted this to go for it. I think the reason why it hasn't happened is simply that it's always vastly easier to say that other people should do something than to actually do it yourself.

I completely agree that it is far easier to suggest an analysis than to execute one! I personally won't have the capacity to do this in the next 12-18 months, but would be happy to give feedback on a proposal and/or the research as it develops if someone else is willing and able to take up the mantle. 

I do think that this analysis is more likely to be done (and in a high quality way) if it was either done by, commissioned by, or executed with significant buy-in from CEA and other key stakeholders involved in community building and running local groups. This is partly a case of helping source data etc, but also gives important incentives for someone to do this research. If I had lots of free time over the next 6 months, I would only take this on if I was fairly confident that the people in charge of making decisions would value this research. One model would be for someone to write up a short proposal for the analysis and take it to the decision makers; another would be for the decision-makers to commission it (my guess is that this demand-driven approach is more likely to result in a well-funded, high quality study). 

To be clear, I massively appreciate the work ... (read more)

rossaokod (2y):
P.S. I've also just seen Joan's write-up of the Focus University groups in the comments below, which suggests that there is already some decent self-evaluation, experimentation and feedback loops happening as part of these programmes' designs. So it is very possible that there is a good amount of this going on that I (as a very casual observer) am just not aware of!
IanDavidMoss (2y):
Agreed! Note, however, that in the case of the FTX grants it will be pretty hard to do this analysis oneself without access to at the very least the list of funded projects, if not the full applications.

I also agree this would be extremely valuable. 

I think we would have had the capacity to do difference-in-difference analyses (or even simpler analyses of pre-post differences in groups with or without community building grants, full-time organisers etc.) if the outcome measures tracked in the EA Groups Survey were not changed across iterations and, especially, if we had run the EA Groups Survey more frequently (data has only been collected 3 times since 2017 and was not collected before we ran the first such survey in that year).

As a positive example, 80,000 Hours does relatively extensive impact evaluations. The most obvious limitation is that they have to guess whether any career changes are actually improvements, but I don't see how to fix that—determining the EV of even a single person's career is an extremely hard problem. IIRC they've done some quasi-experiments but I couldn't find them from quickly skimming their impact evaluations.

Jack Lewars (2y):
This would be great. It also closely aligns with what EA expects before and after giving large funding in most cause areas.

| Forgive the clickbait title, but EA is as prone to clickbait as anywhere else.

I mean, sometimes you have reason to make titles into a simple demand, but I wish there were a less weaksauce justification than “because our standards here are no better than anywhere else”.

To be clear I think this instance is a fairly okay request to make as a post title, but I don’t want the reasoning to imply anyone can do this for whatever reason they like.

Candidly, I'm a bit dismayed that the top voted comment on this post is about clickbait.

Ben Pace (2y):
Well, you don’t have to be any more, because now it’s Jessica McCurdy’s reply.
Jack Lewars (2y):
Indeed - and to be clear, I wasn't trying to suggest that you shouldn't have made the comment - just that it's very secondary to the substance of the post, and so I was hoping the meat of the discussion would provoke the most engagement.
Ben Pace (2y):
Yeah, pretty reasonable.
Jeff Kaufman (2y):
Voting is biased toward comments that are easy to evaluate as correct/helpful/positive/valuable. With that in mind, I don't especially find this individual instance dismaying?


| If this is too time-consuming for the current FTX advisers, hire some staff

Hiring is an extremely labour and time intensive process, especially if the position you're hiring for requires great judgement. I think responding to a concern about whether something is a good use of staff time with 'just hire more staff' is pretty poor form, and given the context of the rest of the post it wouldn't be unreasonable to respond to it with 'do you want to post a BOTEC comparing the cost of those extra hires you think we should make to the harms you're claiming?'

The top-voted suggestion in FTX's call for megaproject ideas was to evaluate the impacts of FTX's own (and other EA) grantmaking. It's hard to conduct such an evaluation without, at some point, doing the kind of analysis Jack is calling for. I don't have a strong opinion about whether it's better for FTX to hire in-house staff to do this analysis or have it be conducted externally (I think either is defensible), but either way, there's a strong demonstrated demand for it and it's hard to see how it happens without EA dollars being deployed to make it possible. So I don't think it's unreasonable at all for Jack to make this suggestion, even if it could have been worded a bit more politely.

That's right, and this was very casually phrased, so thanks for pulling me up on it. A better way of saying this would be: "if you're going to distribute billions of dollars in funding, in a way that is unusually capable of being harmful, but don't have the time to explain the reasoning behind that distribution, it's reasonable to ask you to hire people to do this for you (and hiring is almost certainly necessary for lots of other practical reasons)."

freedomandutility (2y):
I agree with you that it’s important to account for hiring being very expensive. My view on more transparency is that its main benefit (which I don’t think OP mentions) is as a long-term safeguard to reduce poor but well intentioned reasoning, mistakes and nepotism around grant processes, and is likely to be worth hiring costs even if we don’t expect to identify ongoing harms. In other words, I think the stronger case for EA grantmakers being more transparent is the potential for transparency to reduce future harms, rather than its potential to reveal possible ongoing harms.
Holly Morgan (2y):
Relevant comment from Sam Bankman-Fried in his recent 80,000 Hours podcast episode: "In terms of staffing, we try and run relatively lean. I think often people will try to hire their way out of a problem, and it doesn’t work as well as they’re hoping. I’m definitely nervous about that." (https://80000hours.org/podcast/episodes/sam-bankman-fried-high-risk-approach-to-crypto-and-doing-good/#ftx-foundation-002022)

One generic back-of-the-envelope calculation from me:

Assume that when you try to do EA outreach, you get the following funnel:

  • ~10% (90% CI[1] 3%-30%) of people you reach out to will be open to being influenced by EA

  • ~10% (90% CI 5%-20%) of people who are reached and are open to being influenced by EA will actually take the action of learning more about EA

  • ~20% (90% CI 5%-40%) of people who learn more about EA actually become EA in some meaningful way (e.g., take GWWC pledge or equivalent)

Thus we expect outreach to a particular person to produce ~0.002 EAs on average.

Now assume an EA has the same expected impact as a typical GWWC member, and assume a typical GWWC member donates ~$24K/yr for ~6 years, making the total value of an EA worth ~$126,000 in donations, discounting at 4%. I imagine the actual mean EA is likely more valuable than that given a long right tail of impact.
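
For concreteness, here is a minimal sketch of that funnel in Python - using only the point estimates quoted above and ignoring the 90% CIs, which a fuller version would want to propagate (e.g. via a Monte Carlo simulation):

```python
# Sketch of the outreach funnel above, using the comment's point estimates.
reach_to_open = 0.10   # open to being influenced by EA
open_to_learn = 0.10   # of those, actually learn more about EA
learn_to_join = 0.20   # of those, meaningfully become EA (e.g. GWWC pledge)

eas_per_person_reached = reach_to_open * open_to_learn * learn_to_join  # 0.002

# Value of one EA: ~$24K/yr donated for ~6 years, discounted at 4%.
donation, years, discount = 24_000, 6, 0.04
value_per_ea = sum(donation / (1 + discount) ** t for t in range(1, years + 1))

breakeven_spend = eas_per_person_reached * value_per_ea

print(f"EAs per person reached:  {eas_per_person_reached:.3f}")  # 0.002
print(f"Value per EA:            ${value_per_ea:,.0f}")          # ~$126,000
print(f"Break-even spend/person: ${breakeven_spend:,.0f}")       # ~$252
```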

Note that these numbers are pretty much made up[2] and each number ought to be refined with further research - something I'm working on and others should too. Also keep in mind that obviously these numbers will vary a lot based on the specific type of outreach being considered and so should be modifie... (read more)

| But basically from this you get it being worth ~$252 to market effective altruism to a particular person and break even.

I don't think that's how it works. Your reasoning here is basically the same as “I value having an Internet connection at $50,000/year, so it's worth it for me to pay that much for it.”

The flaw is that, taking the market price of a good/service as given, your willingness to pay for it only dictates whether you should get it, not how much you should pay for it. If you value people at a certain level of talent at $1M/career, that only means that, so long as it's not impossible to recruit such talent for less than $1M, you should recruit it. But if you can recruit it for $100,000, whether you value it at $100,001 or $1M or more does not matter: you should pay $100,000, and no more. Foregoing consumer surplus has opportunity costs.

To put it more explicitly: suppose you value 1 EA with talent X at $1M. Suppose it is possible to recruit, in expectation, one such EA for $100,000. If you pay $1M/EA instead, the opportunity cost of doing so is 10 EAs for each person you recruit, so the expected value of the action is -9 EAs per recruit, and you a... (read more)

I agree with what you are saying: yes, we ideally should rank-order all the possible ways to market EA and only take those that get the best (quality-adjusted) EAs per $ spent, regardless of our value of EAs - that is, we should maximize return on investment.

However, in practice, as we do not currently have enough EA marketing opportunities to saturate our billions of dollars in potential marketing budget, it would be an easier decision procedure to simply fund every opportunity that meets some target ROI threshold, and revise that threshold over time as we learn more about our opportunities and budget. We'd also ideally set ourselves up to learn by doing when engaging in this outreach work.

Jack Lewars (2y):
Absolutely. And so the questions are:
  • have we defined that ROI threshold?
  • what is it?
  • are we building ways to learn by doing into these programmes?
The discussions on this post suggest that it's at least plausible that the answers are 'no', 'anything that seems plausibly good' and 'no', which I think would be concerning for most people, irrespective of where you sit on the various debates/continuums within EA.
Peter Wildeford (2y):
This varies grantmaker-to-grantmaker but I personally try to get an ROI that is at least 10x better than donating the equivalent amount to AMF. I'd really like to help programs build more learning by doing. That seems like a large gap worth addressing. Right now I find myself without enough capacity to do it, so hopefully someone else will do it, or I'll eventually figure out how to get myself or someone at Rethink Priorities to work on it (especially given that we've been hiring a lot more).

| I imagine the actual mean EA is likely more valuable than that given a long right tail of impact.

This still sounds like a strong understatement to me – it seems that some people will have vastly more impact. Quick example that gestures in this direction: assuming that there are 5000 EAs, Sam Bankman-Fried is donating $20 billion, and all other 4999 EAs have no impact whatsoever, the mean impact of EAs is $4 million, not $126k. That's a factor of 30x, so a framing like "likely vastly more valuable" would seem more appropriate to me.

One reason to be lower than this per recruited EA is that you might think that the people who need to be recruited are systematically less valuable on average than the people who don't need to be. Possibly not a huge adjustment in any case, but worth considering. 

Jonas V (2y):
Yeah I fully agree with this; that's partly why I wrote "gestures". Probably should have flagged it more explicitly from the beginning.
Linch (2y):
Should be 4999
Jeff Kaufman (2y):
I know this isn't your main point, but that's ~1/10 what I would have guessed. 5k is only 3x the people who attended EAG London this year.
Jonas V (2y):
Personally I think going for something like 50k doesn't make sense, as I expect that the 5k (or even 500) most engaged people will have a much higher impact than the others. Also, my guess of how CEA/FTX are thinking about this is actually that they assume an even smaller number (perhaps 2k or so?) because they're aiming for highly engaged people, and don't pay as much attention to how many less engaged people they're causing.
Jeff Kaufman (2y):
Peter was using a bar of "actually become EA in some meaningful way (e.g., take GWWC pledge or equivalent)". GWWC is 8k on its own, though there's probably been substantial attrition. But yes, because we expect impact to be power-lawish if you order all plausible EAs by impact there will probably not be any especially compelling places to draw a line.

My guess is this would reduce grant output a lot relative to how much I think anyone would learn (maybe it would cut grantmaking in half?), so personally I'd rather see them just push ahead and make a lot of grants, then review or write about just a handful of them from time to time.

I also wish all the EA Funds and Open Phil would do this/make their numbers more accessible.

By the way, we are not planning to spend $50m on groups outreach in the near future. Our groups budget is $5.4m this year. 

Also note that our focus university program is passing to Open Philanthropy.

Jack Lewars (2y):
Hi Max - I took this from CEA's post here (https://forum.effectivealtruism.org/posts/FjDpyJNnzK8teSu4J/), which aims for campus centres at 17 schools controlling "a multi-million dollar budget within three years of starting", and which Alex HT suggested in the comments would top out at $3m/year. This suggested a range of $17m-$54m.
MaxDalton (2y):
Cool, I see where you got the figure from. But yeah, most of that work is passing to Open Philanthropy, so we don't plan to spend $50m/year.
Jack Lewars (2y):
Thanks - I missed that update, and wouldn't have written about CEA above if I had seen it, I think.

Just wanted to add that I did a rough cost-effectiveness estimate of the average of all past movement building efforts using the EA growth figures here. I found an average of 60:1 return for funding and 30:1 for labour. At equilibrium, anything above 1 is worth doing, so I expect that even if we 10x the level of investment, it would still be positive on average.

I've done informal BOTECs and it seems like the current funding amounts are roughly correct, though we need to be careful with deploying this funding due to concerns like optics and epistemics. Regarding the example, spending $5k on EA group dinners is really not that much if it has even a 2% chance to cause one additional career change. This seems like a failure of communication, because funding dinners is either clearly good and students weren't doing the BOTEC, or it's bad due to some optics or other concerns that the students didn't communicate to CEA.

Jack Lewars (2y):
In the spirit of this post, maybe you could share these informal BOTECs? 'Here is a BOTEC' is going to help more than 'I've done a BOTEC and it checks out'. (I appreciate the post isn't actually aimed at you)
Mau (2y):

That's fair - I'm not the earlier commenter but would suggest (as someone who's heard some of these conversations but isn't necessarily representative of others' thinking):

For dinners: Suppose offering to buy a $15 dinner for someone makes it 10% more likely that they'll go to a group dinner, and suppose that makes it 1% more likely that they'll have a very impactful career. Suppose that means counterfactually donating 10% of $100k for 40 years. Then on average the dinner costs $15 and yields $400.

For retreats: Suppose offering to subsidize a $400 flight makes someone 40% more likely to go to a retreat and that this makes them 5% more likely to have a very impactful career. Again suppose that means counterfactually donating 10% of $100k for 40 years. Then on average the flight costs $400 and yields $8,000.

(And expected returns are 100x higher than that under bolder assumptions about how much impact people will have. Although they're negative if optics costs are high enough.)
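
Spelled out as a quick sketch (same illustrative numbers as the two scenarios above - none of these come from CEA's or anyone's actual models):

```python
# Rough replication of the dinner and retreat scenarios above.
career_value = 0.10 * 100_000 * 40  # 10% of $100k donated for 40 years = $400k

# Dinner: $15 makes attendance 10% more likely, which makes a very
# impactful career 1% more likely.
dinner_cost = 15
dinner_ev = 0.10 * 0.01 * career_value   # $400

# Retreat: a $400 flight subsidy makes attendance 40% more likely,
# which makes a very impactful career 5% more likely.
flight_cost = 400
flight_ev = 0.40 * 0.05 * career_value   # $8,000

print(dinner_ev / dinner_cost, flight_ev / flight_cost)  # ~27x and 20x returns
```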

Thanks - this is exactly what I think is useful to have out there, and ideally to refine over time.

My immediate reaction is that the % changes you are assigning look very generous. I doubt a $15 dinner makes someone 1% more likely to pursue an impactful career; and especially that a subsidised flight produces a 5% swing. I think these are likely orders of magnitude too high, especially when you consider that other places will also offer free dinners/retreats.

If a $400 investment in anything made someone 5% more likely to pursue an impactful career, that would be amazing.

But I guess what I'm really hoping is that CEA and FTX have exactly this sort of reasoning internally, with some moderate research into the assumptions, and could share that externally.

Mau (2y):
Thanks! Agree it's good to refine these and that these are very optimistic - I suspect the optimism is justified by the track record of these events. Anecdotally, it seems nontrivially common for early positive interactions to motivate new community members to continue/deepen their (social and/or motivational) engagement, and that seems to often lead to impactful career plan changes. (I think there's steeply diminishing returns here - someone's first exposure to the community seems much more potentially impactful than later exposures. I tried to account for "how many participants will be having their first exposure" in the earlier estimate.) In other words, we could (say) break down the ~1% estimate (which is already conditioned on counterfactual dinner attendance) into the following (ignoring benefits for people who are early on but not totally new):
  • 30% chance that this is their first exposure
  • conditional on the above, 10% chance that the experience kickstarts long/deep engagement
  • conditional on the above, 50% chance of an impactful career switch (although early exposures that aren't quite the first one also seem valuable)
If 1% is far too generous, which of the above factors are too high? (Maybe the second one?) (Edited to add) And yup, I acknowledge this isn't the source you were looking for - hopefully still adds to the conversation.
lukasb (2y):
How much of the impact generated by the career change are you attributing to CEA spending here? I'm just wondering because counterfactuals run into the issue of double-counting (as discussed here). 

Unsure, but probably more than 20% if the person wouldn't be found through other means. I think it's reasonable to say there are 3 parties: CEA, the group organizers, and the person, and none is replaceable, so they get 33% Shapley each. At a 2% chance to get a career change this would be a cost of $750k per career, which is still clearly good at top unis. The bigger issue is whether the career change is actually counterfactual, because often it's just a speedup.
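
One way to reproduce the $750k figure - assuming, as the thread suggests, that it combines the $5k dinner spend mentioned upthread, the 2% chance of a career change, and the one-third Shapley credit:

```python
# Attribution arithmetic for the figure above (assumptions as stated in the thread).
spend = 5_000            # dinner spending at one group
p_career_change = 0.02   # chance of one additional career change
cea_credit = 1 / 3       # equal Shapley split: CEA, organizers, the person

raw_cost_per_career = spend / p_career_change                    # $250,000
cost_per_cea_credited_career = raw_cost_per_career / cea_credit  # ~$750,000

print(raw_cost_per_career, round(cost_per_cea_credited_career))  # 250000.0 750000
```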

Amalie Farestvedt (2y):
I do think you have to factor in the potential negative risk of spending too much in that estimate, as some potential members might be turned off by what seems like an inefficient use of money. I think this is especially crucial if you are in the process of explaining the EA principles or when relating to members who are not yet committed to the movement.
Fermi–Dirac Distribution (2y):
Is $750k the market price for 1 expected career change from someone at a top school, excluding compensation costs? Alternatively, is there no cheaper way to cause such a career change? IMO, this is the important question here: if there is a cheaper way, then paying $750k has an opportunity cost of >1 career changes. 
Thomas Kwa (2y):
edit: misinterpreted what the comment above meant by "market price"

I think the market price is a bit higher than that. The mean impact from someone at a top school is worth over $750k/year, which means we should fund all interventions that produce a career change for $750k (unless they have large non-financial costs), since those have a <3 year payback period even if the students take a couple of years to graduate or skill up. In practice, dinners typically produce way more than 2% of a career change for $5k of dinners (33 dinners for 10 people at $15/serving). The situation at universities has non-monetary bottlenecks, like information transmission fidelity, qualified organizers, operational capacity, university regulations, etc., and most things that get you better use of those other resources and aren't totally extravagant are worth funding, unless they have a hidden cost like optics or attracting greedy people.

| I think the market price is a bit higher than that.

Someone else in this thread found a report claiming that employers spend an average of ~$6,100 to hire someone at a US university. I also found this report saying that the average cost per hire in the United States is <$5,000, and $15k for an executive. At 1 career = 10 jobs, that's $150,000/career for executive-level talent, or $180,000/career adjusting for inflation since the report was released.

I'm not sure how well those numbers reflect reality (the $15k/executive number looks quite low), but it seems at least fairly plausible that the market price is substantially less than $750k/career. 

| The mean impact from someone at a top school is worth over $750k/year, which means we should fund all interventions that produce a career change for $750k (unless they have large non-financial costs) since those have a <2 year payback period.

This line of reasoning is precisely what I'm claiming to be misguided. Giving you a gallon of water to drink allows you to live at least two additional days (compared to you having no water), which at $750k of impact/year (~$2,000/day) means, by your reasoning, that EA should fund all int... (read more)

Thomas Kwa (2y):
It looks like I misunderstood a comment above. I meant "market price" as the rate at which CEA should currently trade between money and marginal careers, which is >$750k. I think you mean the average price at which other companies "in the market for talent" buy career changes, which is <$750k. I think there isn't really a single price at which we can buy infinite talent. We should do activities as cost-effective as other recruiters, but these can only be scaled up to a limited extent before we run into other bottlenecks. The existence of a cheaper intervention doesn't mean we shouldn't fund a more expensive intervention once the cheaper one is exhausted. And we basically want an infinite amount of talent, so in theory the activities that buy career changes at prices between $150k and $750k are also worth funding. I think we can agree that:
  • different activities have different cost-effectiveness, some of them substantially cheaper than $750k/career
  • we can use a basically infinite amount of talent, and the supply curve for career changes slopes upwards
  • we shouldn't pay more than the market price for any intervention, e.g. throw $100k at a university group for dinners when it produces the same effect as $5k spent on dinners
  • we should fund every activity that has a cost-effectiveness better than $750k per career change (or whatever the true number is), unless we saturate our demand for talent and lower the marginal benefit of talent, or deplete much of our money and increase the marginal benefit of money
  • we are unlikely to saturate our demand for talent by throwing more money at EA groups because there are other bottlenecks
  • because most of the interventions are much cheaper than $750k/career change, our average cost will be much less than $750k/career change

Just a list of projects and organisations FTX has funded would be beneficial and probably much less time-consuming to produce. Some of the things you mention could be deducted from that, and it would also help in evaluating current project ideas and how likely they are to get funding from FTX at some point.

Jack Lewars (2y):
True, and it seems like a necessary step on its own, but I'm wary of people 'deducing' too much. Right now, a lot of the anxiety seems to be coming from people trying to deduce what funders might be thinking; ideally, they'd tell people themselves.

I kind of like the general sentiment but I'm a bit annoyed that it's just assumed that your burden of proof is so strongly on the funders.

Maybe you want to share your BOTEC first, particularly given the framing of the post is "I want to see the numbers because I'm concerned" as opposed to just curiosity?

Jack Lewars (2y):
I'm not sure why the burden wouldn't fall on people making the distribution of funds? (Incidentally, I'm using this to mean that the funders could also hire external consultancies etc. to produce this.) But, more to the point, I wrote this really hoping that both organisations would say "sure, here it is" and we could go from there. That might really have helped bring people together. (NB: I realise FTX haven't engaged with this yet.) In many ways, if the outcome is that there isn't a clear/shared/approved expected value rationale being used internally to guide a given set of spending, that seems to validate some of the concerns that were expressed at EAG.

I think what I'm getting at is that burden of proof is generally an unhelpful framing, and an action that you could take that might be helpful is communicating your model that makes you sceptical of their spending.

Hiring consultancies to do this seems like it's not going to go well unless it's Rethink Priorities or they have a lot of context, and on the margin I think it's reasonable for CEA to say no, they have better things to do.

I feel confused about the following, but I think that as someone who runs an EA org you could easily have reached out directly to CEA/FTX to ask this question (maybe you did; if so, apologies), and this action seems more like outing them than being curious. I'm not necessarily against this (in fact I think this is helpful in lots of ways) but many forum users seem to not like these kinds of adversarial actions.

Jack Lewars (2y):
Like you, I'm fairly relaxed about asking people publicly to be transparent. Specifically in this context, though, someone from FTX said they would be open to doing this if the idea was popular, which prompted the post. As a sidenote, I also think that MEL (monitoring, evaluation and learning) consultancies are adept at understanding context quickly and would be a good option (or something that EA could found itself - see Rossa's comment). My wife is an MEL consultant, which informs my view of this. But that's not to say they are necessarily the best option.
calebp (2y):
I as an individual would endorse someone hiring an MEL consultant to do this for the information value, and would also bet $100 on this not providing much value due to the analysis being poor. Terms to be worked out, of course, but if someone was interested in hiring the low-context consultant, I'd be interested in working out the terms.
calebp (2y):
Oh right, I didn't pick up on the 'FTX said they'd like to see if this was popular' thing. This resolves part of this for me (at least on the FTX side, as opposed to the CEA side).
calebp (2y):
Broken into a different comment so people can vote more clearly. I think that there are likely different epistemic standards between cause areas, such that this is a pretty complicated question, and people underappreciate how much of a challenge this is for the EA movement.
freedomandutility (2y):
I think it makes sense to have the burden of proof mostly on the funders given that they presumably have more info about all their activities, plus having the burden set this way has instrumental benefits of encouraging transparency which could lead to useful critiques, and extra reputation-related incentives to use good reasoning and do a good job of judging what grants do and do not meet a cost-effectiveness bar.

Just noticed Sam Bankman-Fried's 80,000 Hours podcast episode where he sheds some light on his thinking in this regard.

I think the excerpt below is not far from the OP's request that "if there is no BOTEC and it's more 'this seems plausibly good and we have enough money to throw spaghetti at the wall', please say that clearly and publicly."

Sam:

I think that being really willing to give significant amounts is a real piece of this. Being willing to give 100 million and not needing anything like certainty for that. We’re not in a position where we’re like, “If

... (read more)
Jack Lewars (2y):
Very interesting, thanks. I read this as more saying 'we need to be prepared to back unlikely but potentially impactful things', and acknowledging the uncertainty in longtermism, rather than saying 'we don't think expected value is a good heuristic for giving out grants', but I'm not confident in that reading. Probably reflects my personal framing more than anything else.
Holly Morgan (2y):
Oh, I read it as more the former too! I read your post as:
  1. Asking if FTX have done something as explicit as a BOTEC for each grant or if it's more a case of "this seems plausibly good" (where both use expected value as a heuristic)
  2. If there are BOTECs, requesting they write them all up in a publicly shareable form
  3. Implying that the larger the pot, the more certain you should be ("these things have a much higher than average chance of doing harm. Most mistaken grants will just fail. These grants carry reputational and epistemic risks to EA.")
I thought Sam's comments served as partial responses to each of these points. You seem to be essentially challenging FTX to be a lot more certain about the impact of their grants (tell us your reasoning so we can test your assumptions and help you be more sure you're doing the right thing; hire more staff like Open Phil so you can put a lot more work into these evaluations; reduce the risk of potential downsides because they're pretty bad), and Sam here essentially seems to be responding "I don't think we need to be that certain." I can't see where the expected value heuristic was ever called into question? Sorry if you thought that's how I was reading this.

[Edit: Maybe when you say "plausibly good" you mean "negative in expectation but a decent chance of being good", whereas I read it as "good in expectation but not as the result of an explicit BOTEC"? That might be where the confusion lies. If so, with my top-level comment I was trying to say "This is why FTX might be using heuristics that are even rougher than BOTECs and why they have a much smaller team than Open Phil and why they may not take the time to publish all their reasoning" rather than "This is why they might not be that bothered about expected value and instead are just funding things that might be good". Hope that makes sense.]

I would prefer that they be less transparent so they don't have to waste their valuable time.

I strongly agree we need transparency. In lieu of democracy in funding, orgs need to be accountable to the movement in some way.

Also, what's a BOTEC?

Jack Lewars (2y):
I've updated this now: it's a Back Of The Envelope Calculation.

Back when LEAN was a thing, we had a model of the value of local groups based on the estimated number of counterfactual actively engaged EAs, GWWC pledges, and career changes, taking their value from 80,000 Hours' $ valuations of career changes of different levels.

The numbers would all be very out of date now though, and the EA Groups Surveys post-2017 didn't gather the data that would allow this to be estimated.

Good questions - I have ended up thinking about many of these topics often.

Something else where I would find improved transparency valuable would be the back-of-envelope calcs and statistics for denied funding. Reading EA Funds reports, for example, doesn't give a total view into where the current bar for interventions is, because we're only seeing the project distribution from above the cutoff point.

Downvoted because of the clickbait title and the terrible formatting

I know this isn't the central part of the post but I'm not sure the title is really clickbait.  It seems like an accurate headline to me? I understand clickbait to be "the intentional act of over-promising or otherwise misrepresenting — in a headline, on social media, in an image, or some combination — what you’re going to find when you read a story on the web."  Source.

A real clickbait title for this would be something like "The one secret fact FTX doesn't want you to know" or "Grantmakers hate him! One weird trick to make spending transparent" 

Personally, I don't have a problem with the title. It clearly states the central point of the post. 

Not long enough for the formatting to matter in my opinion. We can, and should, encourage people to post some low-effort posts, as long as they're an original thought.

One of the EA forum norms that I like to see is people explaining why they downvoted a post/comment, so I'm a bit annoyed that NegativeNuno's comment that supported this norm was fairly heavily downvoted (without explanation).

Jack R (2y):
I disagree with your reasons for downvoting the post, since I generally judge posts on their content, but I do appreciate your transparency here and found it interesting to see that you disliked a post for these reasons. I’m tempted to upvote your comment, though that feels weird since I disagree with it