All of Benjamin_Todd's Comments + Replies

Is effective altruism growing? An update on the stock of funding vs. people

I made a mistake in counting the number of committed community members.

I thought the Rethink estimate of ~7,000 'active' members was for people who answered 4 or 5 out of 5 on the engagement scale in the EA survey, but actually it was for people who answered 3, 4 or 5.

The number of people who answered 4 or 5 is only ~2,300.

I've now added both figures to the post.

Is effective altruism growing? An update on the stock of funding vs. people

Hi Aidan, the short answer is that global poverty seems the most funding-constrained of the EA causes. The skill bottlenecks are most severe in longtermism and meta, e.g. at the top of the 'implications' section I said:

The existence of a funding overhang within meta and longtermist causes created a bottleneck for the skills needed to deploy EA funds, especially in ways that are hard for people who don’t deeply identify with the mindset.

That said, I still think global poverty is 'talent constrained' in the sense that:

  • If you can design something that's sev
... (read more)
Aidan Alexander (4d): Thank you for your response! Makes sense. I'm not 100% convinced on the last point, but a few of your articles and 80k podcast appearances have definitely shifted me from thinking that E2G is unambiguously the best way for me to maximise the amount of near-term suffering I can abate, to thinking that direct work is a real contender. So thanks!!
How are resources in EA allocated across issues?

I agree that figure is really uncertain. Another issue is that the mean is driven by the tails.

For that reason, I mostly prefer to look at funding and the percentage of people separately, rather than the combined figure - though I thought I should provide the combined figure as well.

On the specifics:

I'd guess >20 people pursuing direct work could make >$10 million per year if they tried earning to give

That seems plausible, though jtbc the relevant reference class is the 7,000 most engaged EAs rather than the people currently doing (or about to start doing) direct work. I think that group might in expectation donate severalfold less than the narrower reference class.

Gifted $1 million. What to do? (Not hypothetical)

2) I agree you should consider your future income; the percentage should be calculated on current assets plus the NPV of future income.

 

1) I agree the approach of "work out if the community is above or below the optimum level of investing vs. saving, and then either donate everything, or save everything" makes a lot of sense for small donors. I'd feel pretty happy if someone wanted to try to do that. (Another factor is that it could be a good division of labour for some to specialise in giving soon and some to specialise in investing.)

But I f... (read more)

Linch (22d): Thanks so much for the fast and detailed response! Sorry, I should be clearer: Ben Wilder probably shouldn't donate all of his money this year. EA is a marathon, not a sprint, and it's good to take advantage of (e.g.) learning more and worldview shifts. I just don't think optimal philanthropic timing is the main consideration here. Agreed, I think there's a lot to recommend this approach, including but not limited to: if this search works out well, relevant information can be passed along to the rest of this community.
Is effective altruism growing? An update on the stock of funding vs. people

The estimates are aiming to take account of the counterfactual, i.e. when I say "that person generates value equivalent to extra donations of $1m per year to the movement", the $1m is accounting for the fact that the movement has the option to hire someone else.

In practice, most orgs are practising threshold hiring, where if someone is clearly above the bar, they'll create a new role for them (which is what we should expect if there's a funding overhang).

Gifted $1 million. What to do? (Not hypothetical)

The advice below is about where to donate it, but that's only one of the key questions.

It's also worth thinking hard about how much you want to give, and how to time your giving.

Even if you decide you want to use the entire amount for good, Phil Trammell's model of giving now vs. giving later suggests that you should donate x% of the capital per year, where x is mainly given by your discount rate. In general people think the community as a whole should donate 1-10% per year, so I'd suggest, as a starting point, you could pick a percentage in that r... (read more)
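As a minimal sketch of that "donate x% of capital per year" rule (the 5% rate, 7% return, and five-year horizon below are illustrative assumptions, not figures from the model):

```python
# Minimal sketch of the "donate x% of capital per year" rule.
# All parameters are illustrative assumptions, not figures from the model.

def donation_stream(capital, donate_rate, return_rate, years):
    """Yield (year, donation, remaining capital) under an x%-per-year rule."""
    for year in range(1, years + 1):
        donation = capital * donate_rate
        capital = (capital - donation) * (1 + return_rate)
        yield year, donation, capital

# A $1m gift, donating 5%/yr (mid-range of the 1-10% suggestion), 7% returns.
for year, donation, capital in donation_stream(1_000_000, 0.05, 0.07, 5):
    print(f"Year {year}: donate ${donation:,.0f}, capital left ${capital:,.0f}")
```

Under these assumptions the capital grows slightly faster than it is given away, so the donation stream rises over time rather than exhausting the gift.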

(I agree with many points in this answer. But for succinctness, and because this answer is highly upvoted, I will only point out my disagreements.)

In general people think the community as a whole should donate 1-10% per year, so I'd suggest, as a starting point, you could pick a percentage in that range to donate each year.


There are lots of complications [...]. But I think a good prior is to donate something close to the optimal percentage for the community as a whole

I don't share this intuition at all fwiw, because of two considerations:

1) From ... (read more)

I like that you suggest that people should give what they are comfortable giving. 

I think that's advice I'd want someone to give my friend and I think it's wiser in the long term.

How are resources in EA allocated across issues?

Seems reasonable.

Salaries are also lower than in AI.

You could make a similar argument about animal welfare, though, I think.

Is effective altruism growing? An update on the stock of funding vs. people

Does 80k actually advise people making >$1M to quit their jobs in favor of entry-level EA work?

 

It depends on what you mean by 'entry level' and on relative fit in each path, but the short answer is yes.

If someone was earning $1m per year and didn't think that might grow a lot further from there, I'd encourage them to seriously consider switching to direct work. 

I.e. I think it would be worth doing a round of speaking to people at the key orgs, making applications and exploring options for several months (esp insofar as that can be done w... (read more)

tylermaule (1mo): That all seems reasonable. Shouldn't the displacement value be a factor though? This might be wrong, but my thinking is (a) the replacement person in the $1M job will on average give little or nothing to effective charity; (b) the switcher has no prior experience or expertise in non-profit, so presumably the next-best hire there is only marginally worse?
Get 100s of EA books for your student group

I think I disagree with those fermis for engagement time.

My prior is that in general, people are happier to watch videos than to read online articles, and they're happier to read online articles than to read books. The total time per year spent reading books is pretty tiny. (E.g. I think all time spent reading DGB is about 100k hours, which is only ~1 year's worth of engagement with the 80k podcast or GiveWell's site.)

I expect that if you sign someone up to a newsletter and give them a book at the same time, they're much more likely to read a bunch of links from the newsletter t... (read more)
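For concreteness, here's a rough Fermi behind that comparison; every input is an assumed round number chosen to match the ~100k-hour figure, not a measured statistic:

```python
# Rough Fermi behind the comparison above; all inputs are assumptions.

dgb_readers = 10_000          # hypothetical number of cover-to-cover DGB readers
hours_per_read = 10           # assumed time to read the book
dgb_hours = dgb_readers * hours_per_read          # ~100k hours total

podcast_hours_per_year = 100_000  # assumed annual 80k podcast listening time

print(f"All DGB reading ≈ {dgb_hours:,} hours "
      f"≈ {dgb_hours / podcast_hours_per_year:.0f} year(s) of the 80k podcast")
```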

Habryka (1mo): This also matches my model. I think book completion rates are quite low, and I expect book distribution without follow-up to have very little effect. In my Fermis this can make book distribution still come out reasonably high, but it doesn't tend to come out competitive with the best other interventions I've thought of. I think there are ways to increase completion and follow-up rates, mostly by getting people to give books to their friends instead of doing broad distributions, but that also tends to be a bit harder to scale.
Get 100s of EA books for your student group

Seems like they've changed the form to be the 2017 career guide.

It might also be worth noting that some of the books on the list have a track record of getting great people into EA, while most of them don't. I expect the EV of getting someone to read DGB or the Precipice is over 10x the EV of many of the other books on the list.

Get 100s of EA books for your student group

It's great you're helping make this easier.

One quick thought: while handing out a book at a talk is probably net positive, I expect e.g. getting someone onto your mailing list will be significantly better, because you can then tell them about future events.

Getting someone into EA usually takes several years, so my guess is that we should use free books to get people to fill out feedback forms, sign up to fellowships, join mailing lists, make referrals, and things like that - more than just handing them out.

Asking for something ... (read more)

To me it sounds like you're underestimating the value of handing out books: I think books are great because you can get someone to engage with EA ideas for ~10 hours, without it taking up any of your precious time.

As you said, I think books can be combined with mailing lists. (If there was a tradeoff, I would estimate they're similarly good: You can either get a ~20% probability of getting someone to engage for ~10h via a book, or a ~5%(? most people don't read newsletters) probability of getting someone to engage for ~40h via a mailing list. And while I'd... (read more)
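A quick check of the expected-hours arithmetic in that comparison (the probabilities and durations are the commenter's rough guesses, not measurements):

```python
# Expected engagement hours implied by the commenter's rough numbers.
book_ev = 0.20 * 10        # ~20% chance someone reads the book for ~10h
newsletter_ev = 0.05 * 40  # ~5% chance someone engages via links for ~40h

print(f"Book: {book_ev:.1f}h expected; newsletter: {newsletter_ev:.1f}h expected")
```

Both come out at ~2 expected hours, which is what makes the two channels "similarly good" on these guesses.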

Economic policy in poor countries

We put this question to Alexander Berger in our recent podcast. Best to engage with his response directly, but the very short version is that they do expect to be able to find some opportunities within policy that are even more leveraged than AMF (AMF is 20x leveraged on cash transfers, but maybe 100x or more is possible). That's why they're currently hiring people to work on South East Asian air quality advocacy and on advocacy for more effective international aid, and also people to look for other areas like this. Though, my impres... (read more)

Halstead (1mo): Hello, yes this was in part a response to the arguments there, where he suggested that policy is in the same ballpark as GiveWell top charities, which I don't think can be true given other things he says. "Yeah, I think that’s totally right. And I think, again, if you had more of a dominance argument, where it’s like, look, the returns to the policy are just always going to outweigh the returns to evidence-based aid, then I think you would end up with more back and forth and debate between them. But when you see the arguments for cost effectiveness actually ending up in the same ballpark, the same universe, it’s just like, cool, we can all get along. People are going to sort into buckets that appeal to them, or styles that work for them, or interventions that they’re more personally excited about. I think that’s totally healthy. And then when some of them think, actually, the expected value-type argument seems to lead you to think one of these is going to just totally destroy the other, that’s where I think you get a little bit more friction and tension and debate sometimes." Because there is little public discussion about what happens in the near-termist area, it is difficult to know why certain decisions are taken. I think it would be better for decisions that affect millions of dollars to be made with more public discussion and scrutiny.
Is effective altruism growing? An update on the stock of funding vs. people

On b), for exactly that reason, our donors at least usually focus more on the opportunity costs of the labour input to 80k rather than our financial costs - looking mainly at 'labour out' (in terms of plan changes) vs. 'labour in'. I think our financial costs are a minority of our total costs.

On a), yes, you'd need to hope for a better return than the 'doubling leads to +10% labour' estimate I made.

If we suppose a 20% increase is sufficient for +10% labour, then the new situation would be:

Total costs: $1.32m

Impact: $11m

So, the excess value has increased from $... (read more)
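A quick check of those figures under the stated assumptions (10 staff at $100k baseline, $1m of value per staff member, and one extra hire from a 20% raise):

```python
# Checking the figures above: 10 staff at $100k baseline, each producing
# $1m/yr of value; a 20% raise is assumed to attract one extra hire.
def totals(staff, salary, value_per_staff=1_000_000):
    return staff * salary, staff * value_per_staff

base_cost, base_value = totals(10, 100_000)   # $1.0m costs, $10m impact
new_cost, new_value = totals(11, 120_000)     # $1.32m costs, $11m impact

print(f"Excess value: ${base_value - base_cost:,} -> ${new_value - new_cost:,}")
# $9,000,000 -> $9,680,000
```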

tylermaule (1mo): Good points, I agree it would be better to undershoot. Still, even with the pessimistic assumptions, the high end of that $0.4-4M range seems quite unlikely. Does 80k actually advise people making >$1M to quit their jobs in favor of entry-level EA work? If so, that would be a major update to my thinking.
Lessons from Running Stanford EA and SERI

That's great to hear!

I should have clarified my points weren't meant as disagreements - I think we're basically on the same page.

I do think aggressive 80/20ing often makes sense

Yes, I agree. One way to reconcile the two comments is that you need to focus on the 20% of most valuable activities within each aspect (marketing, ops, follow up), but you can't drop any aspect. I also agree that it's likely that 'really focusing on what drives impact' is more important than 'really caring', though I think simply caring and trying can go a fairly long way.

On living... (read more)

Lessons from Running Stanford EA and SERI

Thinking out loud / random comments:

  1. In my experience, running local group events was like an O-ring process. If you're running a talk, you need to get the marketing right, the operations right, and the follow-up right. If you miss any of these, you lose most of the value. This means that having an organiser who is really careful about each stage can dramatically increase the impact of the group. So, I'd highlight 'really caring' as one of the key traits to have.
     
  2. I think one-off talks can be powerful, but they have to be combined with one-on-one follow
... (read more)

Great points, thanks for commenting Ben!  Responding to each of the points: 

In my experience, running local group events was like an O-ring process. If you're running a talk, you need to get the marketing right, the operations right, and the follow-up right. If you miss any of these, you lose most of the value. This means that having an organiser who is really careful about each stage can dramatically increase the impact of the group. So, I'd highlight 'really caring' as one of the key traits to have.

I think I mostly agree with this (and strongly... (read more)

Lessons from Running Stanford EA and SERI

It seems like the impact of running a local group well is often underappreciated, but I think it's one of the highest-impact things you can do as a student or recent graduate, and also one of the highest-impact volunteer opportunities.

It's great to have this write up making a more detailed case. I recently released this stub profile on running a local group, and have added a link to this post.

Is effective altruism growing? An update on the stock of funding vs. people

Not sure I follow the maths.

 

If there are now 10 staff, each paid $100k, and each generating $1m of value p.a., then the net gain is $10m - $1m = $9m. The CBR is 1:9.

 

If we double salaries and get one extra staff member, we're now paying $2.2m to generate $11m of value. The excess is $8.8m. The average CBR has dropped to 1:4, and the CBR of the marginal $1.2m was actually below 1 (an extra ~$1m of value for $1.2m of spending).
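The same toy model, worked through (all numbers are from the comment above):

```python
# The toy model above: staff each generate $1m/yr of value.
base_cost, base_value = 10 * 100_000, 10 * 1_000_000   # $1m, $10m
new_cost, new_value = 11 * 200_000, 11 * 1_000_000     # $2.2m, $11m

print(f"Net CBR: 1:{(base_value - base_cost) / base_cost:.0f} "
      f"-> 1:{(new_value - new_cost) / new_cost:.0f}")          # 1:9 -> 1:4
marginal = (new_value - base_value) / (new_cost - base_cost)
print(f"Marginal value per marginal dollar: {marginal:.2f}")     # 0.83, below 1
```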

tylermaule (1mo): Agreed, just a function of how many salaries you assume will have to be doubled alongside to fill that one position. (a) Hopefully, doubling ten salaries to fill one is not a realistic model. Each incremental wage increase should expand the pool of available labor. If the EA movement is labor-constrained, I expect a more modest raise would cause supply to meet demand. (b) Otherwise, we should consider that the organization was paying only half of market salary, which perhaps inflated their ‘effectiveness’ in the first place. Taking half of your market pay is itself an altruistic act, which is not counted towards the org’s costs. Presumably if these folks chose that pay cut, they would also choose to donate much of their excess salary (whether a pay raise from this org, or taking a for-profit gig).
Is effective altruism growing? An update on the stock of funding vs. people

I'm just saying that when we think offering more salary will help us secure someone, we generally do it. This means that further salary raises seem to offer a low benefit:cost ratio, which is consistent with econ 101.

Likewise, it's possible to have a lot of capital, but for the cost-benefit of raising salaries to be below the community bar (which is something like 'invest the money for 20 years and spend on OP's last dollar' - a pretty high bar). Having more capital increases the willingness to pay for labour now to some extent, but tops out after a poi... (read more)

Is effective altruism growing? An update on the stock of funding vs. people

I definitely agree EAs are motivated somewhat by money in this range. 

My thought is more about how it compares to other factors.

My impression of hiring at 80k is that salary rarely seems like a key factor in choosing us vs. other orgs (probably under 20% of cases). If we doubled salaries, I expect existing staff would save more, donate more, and consume a bit more; but I don't think we'd see large increases in productivity or happiness.

My impression is that this is similar at other orgs who pay similarly to us. Some EA orgs still pay a lot less, and I... (read more)

Agree that we shouldn't expect large productivity/wellbeing changes. Perhaps a ~0.1SD improvement in wellbeing, and a single-digit improvement in productivity - small relative to effects on recruitment and retention.

I agree that it's been good overall for EA to appear extremely charitable. It's also had costs though: it sometimes encouraged self-neglect, portrayed EA as 'holier than thou', EA orgs as less productive, and EA roles as worse career moves than the private sector. Over time, as the movement has aged, professionalised, and solidified its funding... (read more)

Is effective altruism growing? An update on the stock of funding vs. people

This is a big topic, and there are lots of factors.

One is that paying very high salaries would be a huge PR risk.

That aside, the salaries at many orgs are already good, while the most aligned people are not especially motivated by money. My sense is that e.g. doubling salaries from here would only lead to a small increase in the talent pool (maybe +10%).

Doubling costs to get +10% labour doesn't seem like a great deal - that marginal spending would be about a tenth as cost-effective as our current average. (And that's ignoring the PR and cultural costs.)

Some orgs are probably underpaying, though, and I'd encourage them to raise salaries.

This kind of ambivalent view of salary-increases is quite mainstream within EA, but as far as I can tell, a more optimistic view is warranted.

If 90% of engaged EAs were wholly unmotivated by money in the range of $50k-200k/yr, you'd expect >90% of EA software engineers, industry researchers, and consultants to be giving >50%, but far fewer do. You'd expect EAs to be nearly indifferent to pay in job choice, but they're not. You'd expect that when you increase EAs' salaries, they'd just donate a large portion on to great tax-deductible charities, ... (read more)

tylermaule (1mo): I agree in principle, but in this case the alternative is eliminating $400k-4M of funding, which is much more expensive than doubling the salary of e.g. a research assistant. To be clear, I am more so skeptical of this valuation than I am actually suggesting doubling salaries. But conditional on the fact that one engaged donor entering the non-profit labor force is worth >$400k, it seems like the right call.
Denise_Melchin's Shortform

I was thinking of donating 10% vs. some part-time work / side projects.

I agree that someone altruistic enough to be willing to donate say 50% of their income, but who isn't able to get a top direct work job, could donate more like $10k-$100k per year (depending on their earning potential, which might be high if they're willing to do something like real estate, sales or management in a non-glamorous business).

Though I still feel like there's a good chance that someone that dedicated and able could find something that produces more impact than that, given the f... (read more)

Denise_Melchin (1mo): Thank you for providing more colour on your view, that's useful!
Is effective altruism growing? An update on the stock of funding vs. people

Thanks! I probably should have just used the 2020 figure rather than the 2017-2019 average.

My estimate was an $80m allocation by Open Phil to global health, but this would suggest $100m.

Denise_Melchin's Shortform

That makes sense, thanks for the comment. 

I think you're right that looking at ex post results doesn't tell us that much.

If I try to make ex ante estimates, then I'd put someone pledging 10% at a couple of thousand dollars per year to the EA Funds or equivalent. 

But I'd probably also put similar (or higher) figures on the value of the other ways of contributing above.

Denise_Melchin (1mo): I am still confused whether you are talking about full-time work. I'd very much hope a full-time community builder produces more value than a donation of a couple of thousand dollars to the EA Funds. But if you are not discussing full-time work, and instead part-time activities like occasionally hosting dinners on EA-related themes, it makes sense to compare this to 10% donations (though I also don't know why you are evaluating 10% donations at ~$2000; median salary in most rich countries is more than 10 times that). But then it doesn't make sense to compare the 10% donations and part-time activities to the very demanding direct work paths (e.g. AI safety research). Donating $2000 (or generally 10%, unless they are poor) requires way less dedication than fully focussing your career on a top priority path. Someone who would be dedicated enough to pursue a priority path but is unable to should in many cases be able to donate way more than $2000. Let's say they are "only" in the 90th percentile for ability in a rich country and will draw a 90th percentile salary, which is above £50,000 in the UK (source [https://www.gov.uk/government/statistics/percentile-points-from-1-to-99-for-total-income-before-and-after-tax]). If they have the same dedication level as someone in a top priority path they should be able to donate ~£15,000 of that. That is 10 times as much as $2000!
Denise_Melchin's Shortform

Very quick comment: I think I feel this intuition, but when I step back, I'm not sure why potential to contribute via donations should reduce more slowly with 'ability' than potential to contribute in other ways. 

If anything, income seems to be unusually heavy-tailed compared to direct work (the top two donors in EA account for the majority of the capital, but I don't think the top 2 direct workers account for the majority of the value of the labour).

I wonder if people who can't do the top direct work jobs wouldn't be able to have more impact by worki... (read more)

Gregory_Lewis (1mo): Although I think this stylized fact remains interesting, I wonder if there's an ex-ante/ex-post issue lurking here. You get to see the endpoint with money a lot earlier than direct work contributions, and there's probably a lot of lottery-esque dynamics. I'd guess these as corollaries: First, the ex ante 'expected $ raised' from folks aiming at E2G (e.g. at a similar early career stage) is much more even than the ex post distribution. Algo-trader Alice and Entrepreneur Edward may have similar expected lifetime income, but Edward has much higher variance - ditto, one of entrepreneurs Edward and Edith may swamp the other if one (but not the other) hits the jackpot. Second, part of the reason direct work contributions look more even is this is largely an ex ante estimate - a clairvoyant ex post assessment would likely be much more starkly skewed. E.g. if work on AI paradigm X alone was sufficient to avert existential catastrophe (which turned out to be the only such danger), the impact of the lead researcher(s) re. X is astronomically larger than everything else everyone else is doing. Third, I also wonder that raw $ value may mislead in credit assignment for donation impact. The entrepreneur who makes a billion-$ company hasn't done all the work themselves, and it's facially plausible some Shapley/whatever credit sharing between these founders and (e.g.) current junior staff would not be as disproportionate as the money which ends up in their respective bank accounts. Maybe not: perhaps the rewards in terms of 'getting things off the ground', taking lots of risk, etc. do mean the tech founder megadonor bucks should be attributed ~wholly to them. But similar reasoning could be applied to direct work as well. Perhaps the lion's share of all contributions for global health work up to now should be accorded to (e.g.) Peter Singer, as all subsequent work is essentially 'footnotes to Famine, Affluence, and Morality'; or AI work to those who toiled in the vineyards over a
Denise_Melchin (1mo): The first thing that comes to mind here is that replaceability is a concern for direct work, but not for donations. Previously, the argument has been that replaceability does not matter as much for the very high impact roles as they are likely heavy-tailed and therefore the gap between the first and second applicant large. But that is not true anymore once you leave the tails: you get the full impact from donations but less impact from direct work due to replaceability concerns. This also makes me a bit confused about your statement that income is unusually heavy-tailed compared to direct work - possibly, but I am specifically not talking about the tails, but about everyone who isn't in the top ~3% for "ability". Or looking at this differently: for the top few percent we think they should try to have their impact via direct work first. But it seems pretty clear (at least I think so?) that a person in the bottom 20th percentile in a rich country should try to maximise income to donate instead of direct work. The crossover point where one should switch from focusing on direct work instead of donations therefore needs to be somewhere between the 20% and 97%. It is entirely possible that it is pretty low on that curve, and admittedly most people interested in EA are above average in ability, but the crossover point has to be somewhere and then we need to figure out where. For working in government policy I also expect only the top ~3% in ability have a shot at highly impactful roles or are able to shape their role in an impactful way outside of their job description. When you talk about advocacy I am not sure whether you still mean full-time roles. If so, I find it plausible that you do not need to be in the top ~3% for community building roles, but that is mostly because we have plenty of geographical areas where no one is working on EA community building full-time, which lowers the bar for having an impact.
How are resources in EA allocated across issues?

Yes, sorry I was using 'global health' as a shorthand to include 'and development'.

For 'other near term', that category was taken from the EA survey, and I'm also unsure exactly what's in there. As David says, it seems like it's mostly mental health and climate change though.

How are resources in EA allocated across issues?

Yes, I agree. Different worldviews will want to spend a different fraction of their capital each year, so the ideal allocation of capital could be pretty different from the ideal allocation of spending. This is happening to some degree: GiveWell's neartermist team is spending a larger fraction than the longtermist one.

How are resources in EA allocated across issues?

If lots of the people working on 'other GCRs' are working on great power conflict, then the resources on broad longtermism could be higher than the 1% I suggest, but I'd expect it's still under 3%.

Most research/advocacy charities are not scalable

I should probably have just said that OP seem very interested in the last dollar problem (and that's ~60% of grantmaking capacity).

Agree with your comments on meta.

With cause pri research, I'd be trying to think about how much more effectively it lets us spend the portfolio, e.g. a 1% improvement to $420 million per year of spending is worth about $4.2m per year.

How are resources in EA allocated across issues?

Though, to be clear, I think this is only a moderate reason (among many other factors) in favour of donating to global health vs. say biosecurity.

Overall, my guess is that if someone is interested in donating to biosecurity but worried about the smaller existing workforce, then it would be better to:

  1. Fund movement building efforts to build the workforce
  2. Invest the money and donate later when the workforce is bigger
Towards a Weaker Longtermism

Sure, though I still think it's misleading to say that the survey respondents think "EA should focus entirely on longtermism".

Seems more accurate to say something like "everyone agrees EA should focus on a range of issues, though people put different weight on different reasons for supporting them, including long & near term effects, indirect effects, coordination, treatment of moral uncertainty, and different epistemologies."

RyanCarey (2mo): Agree it's more accurate. How I see it: > Longtermists overwhelmingly place some moral weight on non-longtermist views and support the EA community carrying out some non-longtermist projects. Most of them, but not all, diversify their own time and other resources across longtermist and non-longtermist projects. Some would prefer to partake in a new movement that focused purely on longtermism, rather than EA.

To be clear, my primary reason for why EA shouldn't entirely focus on longtermism is because that would to some degree violate some implicit promises that the EA community has made to the external world. If that wasn't the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.

To some degree my response to this situation is "let's create a separate longtermist community, so that I can indeed invest in that in a way that doesn't get diluted with all the other things that seem relatively unimportant to me". If we ha... (read more)

Towards a Weaker Longtermism

I agree it's not entailed by that, but both Will and Toby were also in the Leaders Forum Survey I linked to. From knowing them, I'm also confident that they wouldn't agree with "EA should focus entirely on longtermism".

Towards a Weaker Longtermism

It would indeed be ironic - the fact that Toby and Will are major proponents of moral uncertainty seems like more evidence in favour of the view in my top level comment.

jackmalde (2mo): I don't think it's necessarily clear that incorporating moral uncertainty means you have to support hedging across different plausible views. If one maximises expected choiceworthiness (MEC) [https://onlinelibrary.wiley.com/doi/abs/10.1111/nous.12264], for example, one can be fanatically driven by a single view that posits an extreme payoff (e.g. strong longtermism!). Indeed MacAskill and Greaves have argued that strong longtermism seems robust to variations in population axiology and decision theory, whilst Ord has argued reducing x-risk is robust to normative variations (deontology, virtue ethics, consequentialism). If an action is robust to axiological variations this can also help it dominate other actions, even under moral uncertainty.
Towards a Weaker Longtermism

I was talking about the EA Leaders Forum results, where people were asked to compare dollars to the different EA Funds, and most were unwilling to say that one fund was even 100x higher-impact than another; maybe 1000x at the high end. That's rather a long way from 10^23 times more impactful.

Davidmanheim (2mo): Good points, but if I understand what you're saying, that survey was asking about specific interventions funded by those funds, given our epistemic uncertainties, not the balance of actual value in the near term versus the long term, or what the ideal focus should be if we found the optimal investments for each.

Cool. Yeah, EA Funds != cause areas, because people may think that work done by the EA Funds in a cause area is net positive, whereas the total of work done in that area is negative. Or they may think that work done on some cause is 1/100th as useful as another cause, but only because it might recruit talent to the other, which is the sort of hard-line view that one might want to mention.

Most research/advocacy charities are not scalable

I'd be happy to see more going to meta at the margin, though I'd want to caution against inferring much from how much the EA Infrastructure Fund has available right now.

The key question is something like "can they identify above-the-bar projects that are not getting funded otherwise?"

I believe the Infrastructure team has said they could fund a couple of million dollars' worth of extra projects, and if so, I hope that gets funded.

Though even that also doesn't tell us much about the overall situation. Even in a world with a big funding overhang, we should expect there to be some gaps.

How are resources in EA allocated across issues?

Good point, I agree that's a factor. 

We should want funding to go into areas where there is more existing infrastructure / it's easier to measure results / there are people who already care about the issue.

Then aligned people should focus on areas that don't have those features.

It's good to see this seems to be happening to some degree!

How are resources in EA allocated across issues?

My hope is that someone with more time to do it carefully will be able to do this in the future.

Having ongoing Metaculus forecasts sounds great too.

Towards a Weaker Longtermism

No-one is proposing we go 100% on strong longtermism, and ignore all other worldviews, uncertainty and moral considerations.

You say:

the "strong longtermism" camp, typified by Toby Ord and Will MacAskill, who seem to imply that Effective Altruism should focus entirely on longtermism. 

They wrote a paper about strong longtermism, but this paper is about clearly laying out a philosophical position, and is not intended as an all-things-considered assessment of what we should do. (Edit: And even the paper is only making a claim about what's best at the margin; the... (read more)

Just to second this because it seems to be a really common mistake - Greaves and MacAskill stress in the strong longtermism paper that the aim is to advance an argument about what someone should do with their impartial altruistic budget (of time or resources), not to tell anyone how large that budget should be in the first place.

Also- I think the author would be able to avoid what they see as a "non-rigorous" decision to weight the short-term and long-term the same by reconceptualising the uneasiness around longtermism dominating their actions as an u... (read more)

Darius_Meissner (2mo): I'd like to point to the essay Multiplicative Factors in Games and Cause Prioritization [https://measuringshadowsblog.blogspot.com/2015/08/multiplicative-factors-in-games-and.html] as a relevant resource for the question of how we should apportion the community's resources across (longtermist and neartermist) causes.

I do think it is important to distinguish these moral uncertainty reasons from moral trade and cooperation and strategic considerations for hedging. My argument for putting some focus on near-termist causes would be of this latter kind; the putative moral uncertainty/worldview diversification arguments for hedging carry little weight with me. 

As an example, Greaves and Ord argue that under the expected choiceworthiness approach, our metanormative ought is practically the same as the total utilitarian ought.

It's tricky because the paper on strong longt... (read more)

Jack R (2mo): I don't think your point about Toby's GDP recommendation is inconsistent with David's claim that Toby/Will seem to imply "Effective Altruism should focus entirely on longtermism", since EA is not in control of all of the world's GDP. It's consistent to recommend that EA focus entirely on longtermism and that the world spend 0.1% of GDP on x-risk (or longtermism).

No-one says longtermist causes are astronomically more impactful.

Not that it undermines your main point (which I agree with), but a fair minority of longtermists certainly do say and believe this.

Towards a Weaker Longtermism

Are there two different proposals?

  1. Construct a value function = 0.5* (near term value) + 0.5* (far future value), and do what seems best according to that function.
  2. Spend 50% of your energy on the best longtermist thing and 50% on the best neartermist thing. (Or as a community, half of people do each.)
     

I think Eliezer is proposing (2), but David is proposing (1). Worldview diversification seems more like (2).

I have an intuition these lead different places – would be interested in thoughts.

Edit: Maybe if 'energy' is understood as 'votes from your parts' then (2) ends up the same as (1).

elliottthornley (2mo): I remember Toby Ord gave a talk at GPI where he pointed out the following: Let L be long-term value per unit of resources and N be near-term value per unit of resources. Then spending 50% of resources on the best long-term intervention and 50% of resources on the best near-term intervention will lead you to split resources equally between A and C. But the best thing to do on a 0.5*(near-term value) + 0.5*(long-term value) value function is to devote 100% of resources to B. Diagram [https://drive.google.com/file/d/1NWjIwdv1zgGz6Bh1u8jq6AXzbYbb0UdO/view?usp=sharing]
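A numerical version of that point; the per-unit payoffs for A, B and C below are invented for illustration, since the original example lives in the linked diagram:

```python
# Invented payoffs per unit of resources: (near-term N, long-term L).
interventions = {"A": (10, 0), "B": (7, 7), "C": (0, 10)}

def combined(nl):
    n, l = nl
    return 0.5 * n + 0.5 * l

# Proposal 1: everything into the intervention maximising 0.5*N + 0.5*L.
best = max(interventions, key=lambda k: combined(interventions[k]))
print(best, combined(interventions[best]))       # B 7.0

# Proposal 2: split resources 50/50 between best near-term and best long-term.
best_near = max(interventions, key=lambda k: interventions[k][0])  # A
best_long = max(interventions, key=lambda k: interventions[k][1])  # C
split = 0.5 * combined(interventions[best_near]) + 0.5 * combined(interventions[best_long])
print(split)                                     # 5.0 -- strictly worse here
```

On these assumed numbers the two proposals come apart: the 50/50 split scores 5, while putting everything into B scores 7.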
Davidmanheim (2mo): Ahh - thanks. Yes, if that is what Eliezer is proposing, my above response misunderstood him - but either I misunderstood something, or it would be inconsistent with how I understood his viewpoint elsewhere about why we want to be coherent decision makers.
Most research/advocacy charities are not scalable

Yes - part of the reason the funding overhang dynamic is happening in the first place is that it's really hard to think of a project that has a clearly net positive return from a longtermist perspective, and even harder to put one into practice.

Most research/advocacy charities are not scalable

Yes, I wouldn't say CSET is a mega project, though more CSET-like things would also be amazing.

Most research/advocacy charities are not scalable

Yes, basically - if you're starting a new project, then all else equal, go for the one with highest potential total impact.

Instead, people often focus on setting up the most cost-effective project, which is a pretty different thing.

This isn't a complete model by any means, though :) Agree with what Lukas is saying below.

Most research/advocacy charities are not scalable

I agree this is a big issue, and my impression is many grantmakers agree.

In longtermism, I think the relevant benchmark is indeed something like OP's last dollar in the longtermism worldview bucket. Ideally, you'd also include the investment returns you'll earn between now and when that's spent. This is extremely uncertain.

Another benchmark would be something like offsetting CO2, which is most likely positive for existential risk and could be done at a huge scale. Personally, I hope we can find things that are a lot better than this, so I don't think it's ... (read more)

Linch (2mo): Hmm, I'd love to see some survey results or a more representative sample. I often have trouble telling whether my opinions are contrarian or boringly mainstream! I wonder if this is better or worse than buying up fractions of AI companies? I think I agree, but I'm not confident about this, because this feels maybe too high-level? "1 unit" seems much more heterogeneous and less fungible when the resources we're thinking of are "people" or (worse) "conceptual breakthroughs" (as might be the case for cause prio work), and there are lots of ways that things are in practice pretty hard to compare, including but not limited to sign flips.
Most research/advocacy charities are not scalable

Agree with this. I just want to be super clear that I think entrepreneurs should optimise for something like cost-effectiveness x scale.

I think research & advocacy orgs can often be 10x more cost-effective than big physical projects, so a $10m research org might be as impactful as a $100m physical org, meaning it's sometimes going to be the right call.

But I think the EA mindset probably focuses a bit too much on cost-effectiveness rather than scale (since we approach it from the marginal donor perspective rather than the entrepreneur one). If we're also lea... (read more)

The reason most EA founders (and aspiring founders) act as if money is scarce is that the lived experience of most EA founders is that money is hard to get. As far as I know, this is true in all cause areas, including longtermism.

Epistemic status: Moderate opinion, held weakly.

I think one thing that people, both in and outside of EA orgs, find confusing is that we don't have a sense of how high the standards of marginal cost-effectiveness ought to be before it's worth scaling at all. Related concepts include "Open Phil's last dollar" and "quality standards".

In global health I think there's a clear minimal benchmark (something like "$s given to GiveDirectly at >10B/year scales"), but I think it's not clear whether people should bother creating scalable charities that are sl... (read more)

cost-effectiveness x scale

So just total impact?

Impact Certificates on a Blockchain

Glad you're thinking about this!

I've never had much luck myself trying to fundraise just by posting to the forum. Just in case you're not already, I'd suggest trying to approach some potential purchasers in the $1-$10m range directly via email.

RowanBDonovan (2mo): Thanks! Yeah, and charities whose buy-/sell-in would be important. I’ll start tracking my leads more systematically.
Is effective altruism growing? An update on the stock of funding vs. people

I agree there are lots of forms of useful research that could feed into this, and in general better ideas feel like a key bottleneck for EA. I'm excited to see more 'foundational' work and disentanglement as well. Though I do feel like, at least right now, there's an especially big bottleneck for ideas for specific shovel-ready projects that could absorb a lot of funding.

Is effective altruism growing? An update on the stock of funding vs. people

Ah good point. I only found the metaculus questions recently and haven't thought about them as much.

Is effective altruism growing? An update on the stock of funding vs. people

One extra thought is that there was a longtermist incubator project for a while, but they decided to close it down. I think one reason was they thought there weren't enough potential entrepreneurs in the first place, so the bigger bottleneck was movement growth rather than mentoring. I think another bottleneck was having an entrepreneur who could run the incubator itself, and also a lack of ideas that could easily be taken forward without a lot more thinking. (Though I could be misremembering.)

tamgent (2mo): I think they were pretty low profile, and the types of things that Jan-WillemvanPutten is suggesting are about being more present/visible in EA in order to attract a subculture to develop more. I think this example supports his main point more actually, because movement growth is quite driven by culture and attractors for different subcultures. (As an aside, I was engaged with the longtermist incubator and found it helpful/useful.) (Another aside, I can think of a few downsides of Jan-WillemvanPutten's specific suggestion, but I think the important part is the visibility and culture building aspect.)
Is effective altruism growing? An update on the stock of funding vs. people

An extra thought is that this seems like a positive update on the cost-effectiveness of past meta work.

Here's a rough and probably overoptimistic back of the envelope to illustrate the idea:

  • I'd guess that maybe $50m was spent on formal movement building efforts in 2020. This is intended to include things like OP & GiveWell's spending on staff, most of FHI and MIRI, plus all of the explicit movement building orgs like CEA and 80k. If that started at 0 in 2010, then it might add up to $250m over the decade (assuming straight line growth).

  • If the ave

... (read more)
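A one-line check of that back-of-the-envelope (using the comment's assumed $50m 2020 figure and straight-line growth from zero in 2010):

```python
# Straight-line growth from $0 in 2010 to an assumed $50m in 2020:
# cumulative spend is roughly average annual spend times duration.
start, end, years = 0, 50e6, 10
cumulative = (start + end) / 2 * years
print(f"Cumulative 2010-2020 meta spending ≈ ${cumulative / 1e6:.0f}m")  # ~$250m
```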