We offer scholarship support to master's students via our early-career funding program.
We leave it up to our scholars to decide where to apply, but in our experience they're generally aware that applying to Harvard, MIT, etc. is a good idea.
We're open to offering support for admissions counseling to some applicants who don't quite meet the bar for a tuition-and-fees scholarship. (All applicants who do meet the bar are offered counseling.)
We produced our own ranking of US universities based on factors like school selectivity, academic quality of the student body, and revealed preference data (such as cross-admit yields), plus some other "soft" factors. We sanity-checked this against existing school rankings and anecdotal impressions of people who have a lot of context on the US admissions process. We primarily used national rankings to determine the UK universities in our list, along with some anecdotal evidence about Imperial being the next-best choice (after Oxbridge) for a lot of STEM students.
(I also work at Open Phil and am involved in running this program.)
We just posted an announcement here that we’ll be running the program again this year.
Hmm, I’m skeptical of this model. It seems like it would be increasingly difficult to achieve a constant 1.1x multiplier as you add more people.
For example, it would be much harder for Apple's 300th employee to increase the share price by 10% than it was for its 5th employee.
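To make the intuition concrete, here's a toy sketch (with made-up numbers and a hypothetical `absolute_value_added` helper): if each of the first n hires really did multiply firm value by a constant 1.1x, the *absolute* value the nth hire must create grows exponentially with n.

```python
def absolute_value_added(n, base=1.0, multiplier=1.1):
    """Absolute value the nth hire must add for a constant per-hire multiplier.

    If every hire multiplies firm value by `multiplier`, the firm is worth
    base * multiplier**(n-1) before the nth hire, so that hire must add
    (multiplier - 1) times that amount just to keep the multiplier constant.
    """
    value_before_nth_hire = base * multiplier ** (n - 1)
    return value_before_nth_hire * (multiplier - 1)

# The bar for hire #300 is over a trillion times the bar for hire #5
# (1.1**295 ~ 1.6e12), which is why a constant multiplier seems implausible.
print(absolute_value_added(5))
print(absolute_value_added(300))
```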
I don't think "overconfident" and "underconfident" are crisp terms to describe this. With binary outcomes, you can invert a prediction and it means the same thing (a 20% chance of X == an 80% chance of not-X). So being below the calibration line in the 90% bucket and above the line in the 10% bucket are functionally the same thing.
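A toy sketch of the symmetry (illustrative data only): take some 90%-confidence predictions that resolve true only 75% of the time (below the line), restate each as its complement, and the same claims now sit in the 10% bucket resolving 25% of the time (above the line).

```python
# Each entry is (stated probability of X, whether X happened).
# At 90% confidence, only 3 of 4 resolved true: below the calibration line.
predictions = [(0.9, True), (0.9, False), (0.9, True), (0.9, True)]

# Restating "90% chance of X" as "10% chance of not-X" changes nothing
# about the underlying claims, only which bucket they land in.
inverted = [(1 - p, not outcome) for p, outcome in predictions]

hit_rate = sum(outcome for _, outcome in inverted) / len(inverted)
print(hit_rate)  # 0.25: now *above* the line in the 10% bucket
```

Same forecasts, same miscalibration; whether it reads as "over-" or "underconfident" is just an artifact of which side of the claim you wrote down.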
I’m not affiliated with Canopy Retreats, but I agree that’d be useful.
EA Retreats recently rebranded as Canopy Retreats (canopyretreats.org).
The new effectivealtruism.org homepage looks fantastic.
Alice, a highly experienced ML researcher, thinks crunch time for AI will come in 20-30 years. She spends quite a bit of her time community-building for AI safety, i.e. maximizing her impact if crunch time is in 20-30 years rather than if it is now.
Bob, a newer researcher with less experience, thinks we’re in crunch time now. He might try to take a role at a current AI org that maximizes his immediate impact but isn’t great for developing career capital.
It seems like if Alice and Bob could coordinate properly, Alice would operate under Bob’s timelines, Bob under Alice’s, and both would be better off.
Has anyone written more about this idea?