Benjamin_Todd

Is effective altruism growing? An update on the stock of funding vs. people
  1. Yes - I wasn't trying to distinguish between the two.

  2. Probably best to think of it as the estimate for 2020 (specifically, it's based on the number of EA survey respondents in the 2019 survey vs. the 2020 survey).

This estimate is based on just one method, and other methods could yield pretty different numbers. Probably best to think of the range as something like -5% to 30%.

Is effective altruism growing? An update on the stock of funding vs. people

That toy model is similar to Phil's, so I'd start by reading his stuff. IIRC, with log utility the interest rate factors out. With other utility functions, it can go either way.
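To unpack the log-utility claim, here's a minimal sketch of the kind of result I mean, in a simplified deterministic setting (my simplification; Phil's actual model is richer):

```latex
% Minimal sketch in a simplified deterministic setting (not Phil's full model):
% choose a spending path c_t out of wealth W_t earning interest rate r.
\max_{c_t} \int_0^\infty e^{-\rho t}\, u(c_t)\, dt
\quad \text{s.t.} \quad \dot{W}_t = r W_t - c_t
% For CRRA utility u(c) = c^{1-\gamma}/(1-\gamma), the optimal rule is
\frac{c_t}{W_t} = \frac{\rho - (1-\gamma)\, r}{\gamma}
% With log utility (\gamma = 1) this collapses to c_t / W_t = \rho,
% so the interest rate r factors out of the optimal spending fraction.
```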

However, if your model is more like impact = log(all time longtermist spending before the hinge of history), which also has some truth to it, then I think higher interest rates will generally make you want to give later, since they mean you get more total resources (so long as you can spend it quickly enough as you get close to the hinge).
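As a toy numerical illustration of that second model (my own sketch with made-up numbers, not anything from Phil's work): investing until the hinge multiplies total spending by e^(rT), so the log-impact bonus from waiting is rT, which grows with the interest rate.

```python
import math

# Toy sketch (made-up numbers): impact = log(total longtermist spending
# before the hinge). Waiting multiplies spendable capital by e^(r*T),
# so the impact bonus from waiting is r*T, which increases in r.
capital = 100.0   # arbitrary units of present capital
T = 30            # hypothetical years until the hinge of history

for r in [0.02, 0.05, 0.08]:
    impact_now = math.log(capital)
    impact_later = math.log(capital * math.exp(r * T))  # = log(capital) + r*T
    print(f"r={r:.0%}: give now {impact_now:.2f} vs give at hinge {impact_later:.2f}")
```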

I think the discount rate for the things you talk about is probably under 1% per year, so it doesn't have a huge effect either way. (Whereas if you think EA capital will double again in the next 10 years, that would double the ideal percentage to distribute.)

Is effective altruism growing? An update on the stock of funding vs. people

It's a very difficult question. 3% was just the median. IIRC the upper quartile was more like 7%, and some went for 10%.

The people who gave higher figures usually either (i) had short AI timelines, like you suggest, or (ii) believed there will be lots of future EA donors, so current donors should give more now and hope future donors can fill in for them.

For the counterargument, I'd suggest our podcast with Phil Trammell and Will on whether we're at the hinge of history. Skepticism about the importance of AI safety and about short AI timelines could also be an important part of the case (e.g. see our podcast with Ben Garfinkel).

One quick thing is that I think high interest rates are overall an argument for giving later rather than sooner!

The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020)

Thanks for the write-up!

I'd also mention 'fit with audience' as an even bigger factor.

Sam's audience are people who are into big technical and intellectual topics like philosophy, physics, and consciousness, as well as their impact on society. They're also up for considering weird or unpopular ideas. And the demographics seem pretty similar to EA's. So it's hard to imagine a following with a better potential fit.

Some thoughts on EA outreach to high schoolers

Ultimately I care about impact, but the engagement measures in the EA survey seem like the best proxy we have within that dataset.

(E.g. there is also donation data but I don't think it's very useful for assessing the potential impact of people who are too young to have donated much yet.)

A better analysis of this question would also look at things like the rate of valuable career changes by age, which seems more closely related to impact.

Some thoughts on EA outreach to high schoolers

I'm going to leave it to David Moss or Eli to answer questions about the data, since they've been doing the analysis.

Seeking explanations of comparative rankings in 80k priorities list

Hey OmariZi,

Partly the ranking is based on an overall judgement call. We list some of the main inputs into it here.

That said, I think for the 'ratings in a nutshell' section, you need to look at the more quantitative version.

Here's the summary for AI:

Scale: We think work on positively shaping AI has the potential for a very large positive impact, because the risks AI poses are so serious. We estimate that the risk of a severe, even existential catastrophe caused by machine intelligence within the next 100 years is something like 10%.

Neglectedness: The problem of potential damage from AI is somewhat neglected, though it is getting more attention with time. Funding seems to be on the order of $100 million per year. This includes work on both technical and policy approaches to shaping the long-run influence of AI by dedicated organisations and teams.

Solvability: Making progress on positively shaping the development of artificial intelligence seems moderately tractable, though we’re highly uncertain. We expect that doubling the effort on this issue would reduce the most serious risks by around 1%.

Here's the summary for factory farming:

Scale: We think work to reduce the suffering of present and future nonhuman animals has the potential for a large positive impact. We estimate that ending factory farming would increase the expected value of the future by between 0.01% and 0.1%.

Neglectedness: This issue is moderately neglected. Current spending is between $10 million and $100 million per year.

Solvability: Making progress on reducing the suffering of present and future nonhuman animals seems moderately tractable. There are some plausible ways to make progress, though these likely require technological and expert support.

You can see that we rate them similarly for neglectedness and solvability, but think the scale of AI alignment is 100-1000x larger. This is mainly due to the potential of AI to contribute to existential risk, or to other very long-term effects.
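To make the implied arithmetic explicit (this is my reading of the two summaries above, not an official 80k calculation): if scale is proxied by the share of the expected value of the future at stake, the comparison is roughly 10% against 0.01%-0.1%.

```python
# Rough arithmetic behind the 100-1000x comparison (my reading of the
# summaries above, not an official 80k model).
ai_share = 0.10                            # ~10% risk of severe AI catastrophe
farming_low, farming_high = 0.0001, 0.001  # 0.01%-0.1% of the future's expected value

print(ai_share / farming_high)  # 100.0  -> lower bound of "100-1000x"
print(ai_share / farming_low)   # 1000.0 -> upper bound
```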

Some thoughts on EA outreach to high schoolers

Eli Rose helpfully looked into the data more carefully, and found a mistake in what I said above. It looks like people who got involved in EA at age ~18 are substantially more engaged than those who got involved at 40. People who got involved at 15-17 are also more engaged than those who got involved at 40. So, this is an update in favour of outreach to young people.

Should 80,000 hours run a tiktok account?

Yeah, I think YouTube is the higher priority. (And then we can cross-post short video & podcast clips & quotes to Instagram as well.)

My current impressions on career choice for longtermists

Hi Michael,

Just some very quick reactions from 80k:

  • I think Holden’s framework is useful and I’m really glad he wrote the post.

  • I agree with Holden about the value of seeking out several different sources of advice using multiple frameworks and I hope 80k’s readers spend time engaging with his aptitude-based framing. I haven’t had a chance to think about exactly how to prioritise it relative to specific pieces of our content.

  • It’s a little hard to say to what extent differences between our advice and Holden’s are concrete disagreements v. different emphases. From our perspective, it’s definitely possible that we have some underlying differences of opinion (e.g. I think all else equal Holden puts more weight on personal fit) but, overall, I agree with the vast majority of what Holden says about what types of talent seem most useful to develop. Holden might have his own take on the extent to which we disagree.

  • The approach we take in the new planning process overlaps a bit more with Holden's approach than some of our past content does. For example, we encourage people to think about which broad “role” is the best fit for them in the long term, where that could be something like “communicator”, as well as something narrower like “journalist”, depending on what level of abstraction you find most useful.

  • I think one weakness with 80k’s advice right now is that our “five categories” are too high-level and often get overshadowed by the priority paths. Aptitudes are a different framework from our five categories conceptually, but seem to overlap a fair amount in practice (e.g. government & policy = political & bureaucratic aptitude). However, I like that Holden’s list is more specific (and he has lots of practical advice on how to assess your fit), and I could see us adapting some of this content and integrating it into our advice.
