February 2 - 8
The Scaling Series

Read Toby Ord's series here, and discuss it here, all week. 

Quick takes

A bit sad to find out that Open Philanthropy’s (now Coefficient Giving) GCR Cause Prioritization team is no more. I heard it was removed/restructured mid-2025, and it seems like most of the people were distributed to other parts of the org. I imagine there must have been a bunch of other major changes around Coefficient Giving that aren't yet well understood externally. This caught me a bit off guard. There don't seem to be many active online artifacts about this team, but I found this hiring post from early last year, and this previous AMA.
Is the EA Animal Welfare Fund almost as big as Coefficient Giving's farmed animal welfare (FAW) program now? This job ad says the fund raised >$10M in 2025 and is targeting $20M in 2026, while CG's public Farmed Animal Welfare grants for 2025 total ~$35M. Is this right? Cool to see the fund grow so much either way.
Lots of “entry-level” jobs require applicants to have significant prior experience. This seems like a catch-22: if entry-level positions require experience, how are you supposed to get that experience in the first place? Needless to say, this can be frustrating. But we don’t think it is (quite) as paradoxical as it sounds, for two main reasons.

1: Listed requirements usually aren't as rigid as they seem. Employers generally expect that candidates won’t meet all of the “essential” criteria; these are often more of a wish list than a set of strict requirements. Because of this, you shouldn’t necessarily count yourself out just because you fall a little short on the listed experience. Orgs within EA are much better at communicating this explicitly, but it's a reasonable rule of thumb outside of EA as well. You should still think strategically about which roles you apply for, but this is something to factor in.

2: You can develop experience outside of conventional jobs. For a hiring manager, length of experience is a useful heuristic: it suggests you’ve probably picked up the skills needed for the role. But if you can demonstrate those skills through other means, the exact amount of experience you have becomes far less important. A few of the best ways to do this:

* Internships and fellowships. These are designed for people entering new fields and signal to employers that someone has already vetted you. They’re often competitive, but usually don’t require previous experience.
* Volunteering. Organizations usually have lower bars for volunteers than for paid positions, making this a more accessible option. Look for advertised volunteering opportunities at orgs you’re interested in, or reach out to them directly.
* Independent projects. Use your spare time to make something tangible you can show potential employers, like an app, a portfolio, a research paper, a blog, or an event you run. Obviously the most useful projects will v
@Ryan Greenblatt and I are going to record another podcast together (see the previous one here). We'd love to hear topics that you'd like us to discuss. (The questions people proposed last time are here, for reference.) We're most likely to discuss issues related to AI, but a broad range of topics other than "preventing AI takeover" is fair game. For example, last time we talked about the cost to the far future of humans making bad decisions about what to do with AI, and the risk of galactic-scale wild animal suffering.
Thanks to everyone who voted for our next debate week topic! Final votes were locked in at 9am this morning.

We can’t announce a winner immediately, because the highest-karma topic (and perhaps some of the others) touches on issues covered by our politics on the EA Forum policy. Once we’ve clarified which topics we would be able to run, we’ll announce a winner.

Once we have, I’ll work on honing the exact wording. I’ll write a post with a few options, so that you can have input into the exact version we end up discussing.

PS: Apologies for the delay here. In retrospect, I should have checked adherence to our policy before opening voting. In the now very likely event that we cannot have the highest-karma discussion on the EA Forum, I’d remind you that this is not the only place for EA-related discussions on the internet: Substack and Twitter do not have our politics policy.