Bella

Director of Growth @ 80,000 Hours
2224 karma · Working (0-5 years) · Bethnal Green, London, UK

Bio

Hello, my name's Bella Forristal. I work at 80,000 Hours as the Director of Growth.

I'm interested in AI safety, animal advocacy, and ethics / metaethics. 

Previously, I worked in community building with the Global Challenges Project and EA Oxford, and interned at Charity Entrepreneurship.

Please feel free to email me to connect at bellaforristal@gmail.com, or leave anonymous feedback at https://www.admonymous.co/bellaforristal :)

Comments
154

I strongly agree with this part:

[T]he specifics of factory farming feel particularly clarifying here. Even strong-identity vegans push the horrors of factory farming out of their heads most of the time for lack of ability to bear it. It strikes me as good epistemic practice for someone claiming that their project most helps the world to periodically stare these real-and-certain horrors in the face and explain why their project matters more – I suspect it cuts away a lot of the more speculative arguments and clarifies various fuzzy assumptions underlying AI safety work to have to weigh it up against something so visceral. It also forces you to be less ambiguous about how your AI project cashes out in reduced existential risk or something equivalently important.

I think it's quite hard to watch slaughterhouse footage and then feel happy doing something where you haven't, like, tried hard to make sure it's among the most morally important things you could be doing.

I'm not saying everyone should have to do this — vegan circles have litigated this debate a billion times — but if you feel like you might be in the position Matt describes, watch Earthlings or Dominion or Land of Hope and Glory.

I think this is just Matt's style (I like it, but it might not be everyone's taste!). I think the SummaryBot comment does a pretty great job here, so maybe read that if you'd like to get the TL;DR of the post.

More anonymous questions!

How much weight is given to location? It seems that UK/US-based organisations within EA often claim to be open to remote candidates around the world but seldom actually make offers to these candidates (at least from what I’ve seen/heard over the years)

I think never being able to visit the office would count quite heavily against a candidate. But if someone lived overseas and could, say, spend a couple of weeks here every 3-6 months, it wouldn't be a big downside.

I'm not sure which organisations specifically you're talking about, but speaking about 80k here:

  • Until 2023, our policy was that "primary staff" hires must be in-person. Then we changed it so that only managers/team leads needed to be in person, and later we dropped that too, so we're relatively new to being fully open to remote staff.
  • That said, a lot of our staff are remote.
  • Scanning through our org chart, 13 primary staff are "fully remote", and a further 3 are "mostly remote" (visit the office 1-2 days a week). That's out of 32 total primary staff.
  • So, my overall impression is 80k is "genuinely open" to remote staff :)

If a remote candidate did make it to the trial round, would it be a remote or in-person trial?

In-person. We can pay for (and book, if you like) flights and accommodation. Unfortunately, we can't pay for your time unless you have the right to work in the UK (but if you do, we will!)

How much quantitative work is involved in this role – e.g. calculating cost-effectiveness, etc?

A fair amount!

I'd say the person in this role needs to have the quantitative skills to answer moderately complex data-related questions, but they do not need to have a quantitative degree (though that could be helpful). I think "was reasonably good at high school maths," plus the willingness to learn a few key concepts (such as cost-effectiveness, and diminishing marginal returns) would be sufficient :)

The application form contains a quantitative question for this reason. I think if you get this question right without too much trouble, you'll be fine :)
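To make the two concepts mentioned above concrete, here's a toy sketch (not 80k's actual model; all numbers and function names are invented for illustration) of how cost-effectiveness and diminishing marginal returns interact in a back-of-the-envelope outreach calculation:

```python
import math

def readers_reached(spend: float) -> float:
    """Toy model: reach grows with spend, but with diminishing
    marginal returns (square-root curve). Numbers are invented."""
    return 100 * math.sqrt(spend)

def avg_cost_per_reader(spend: float) -> float:
    """Average cost-effectiveness: total dollars per reader reached."""
    return spend / readers_reached(spend)

def marginal_cost_per_reader(spend: float, step: float = 1.0) -> float:
    """Marginal cost-effectiveness: cost of reaching the *next* reader
    once `spend` has already been committed."""
    extra_readers = readers_reached(spend + step) - readers_reached(spend)
    return step / extra_readers

for budget in (100, 1_000, 10_000):
    print(f"${budget}: avg ${avg_cost_per_reader(budget):.2f}/reader, "
          f"marginal ${marginal_cost_per_reader(budget):.2f}/reader")
```

The key takeaway the sketch shows: under diminishing returns, the marginal cost per reader is always higher than the average, and both rise as the budget grows, which is why marginal (not average) cost-effectiveness is the right quantity for deciding where the next dollar goes.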

I agree with the substance but not the valence of this post.

I think it's true that EAs have made many mistakes, including me, some of which I've discussed with you :)

But I think that this post is an example of "counting down" when we should also remember the frame of "counting up."

That is: EAs are doing badly in the areas you mentioned because humans are very bad at those areas. I don't know of any group that has actually-correct incentives, reliably drives at truth, and gets big, complicated, messy questions like cross-cause prioritisation right. Like, holy heck, that is a tall order!

So you're right in substance, but I think your post has a valence of "EAs should feel embarrassed by their failures on this front", which I strongly disagree with. I think EAs should feel damn proud that they're trying.


Strongly agree with this well-articulated point.

Sometimes friends ask me why I work so hard, and I don't know how to get them to understand that it's because I believe that it matters — and the fact that they don't believe that about their work is maybe a sign they should do something else.

I got another anonymous question! :)

In the post about 80K’s pivot to AGI, you discuss active headhunting for specific roles relevant to AGI. To what extent do you expect a candidate in this role (and 80K’s outreach more broadly) to focus on your historic audience (ambitious, altruistic young people) vs active outreach to those with relevant skills for making AGI go well (e.g. ML professionals, lawyers)?

The kind-of-annoying but true answer is "some of both!"

I expect that a reasonably high proportion of our new outreach efforts will be focused on trying to find people who are particularly well-suited to contributing to making AGI go well. But:

  • I think we'll continue with a lot of the kinds of outreach that's worked well for us in the past (since we can continue to execute on it efficiently)
  • I think we should still take the lowest-hanging fruit of outreach to our historical audiences

I also put quite a lot of weight on the argument that 80k as a product has historically been really valuable to a certain kind of person; we have hypotheses about how and why, but ultimately, if we make big changes we should expect some regression to the mean. So I'm keen for us not to entirely stop using our previous strategy.

But if e.g. the website changes so much that it doesn't make sense to reach people without a prior interest in AI, then that might change (though, FWIW, I think this is pretty unlikely, at least in the near future, or without the web team's views changing).

I got the following anonymous question:

Heya Bella! When is the preferred start date for engagement specialist role? And, how late a start would you be willing to accept?

The preferred start date is basically as soon as possible after we conclude the evaluation process!

But we understand folks will have notice periods and other obligations that might mean they need to wait a while.

I think needing to wait e.g. several months is a (significant-ish) downside, but we'd be willing to do so for the right applicant!

Ah — thanks so much David for adding the more recent link!! I'll add that into the job ad on our site too :)


I loved your telling of de Sousa Mendes' story — thanks for sharing it. The moral courage he showed is really beautiful to me :)

Just speaking for myself — I'm not a college student, but I'm totally happy to get meeting requests where the only point is to hang out / meet the person! Sometimes these kinds of meetings are awesome :) But I'd prefer the person to send a connection request saying that rather than not have any message attached.
