Jamie is the Courses Project Lead at the Centre for Effective Altruism, leading a team running online programmes that inspire and empower talented people to explore the best ways that they can help others. These courses and fellowships provide structured guidance, information, and support to help people take tailored next steps that set them up for high impact.
He has very light-touch involvement as a Fund Manager at the Effective Altruism Infrastructure Fund, which aims to increase the impact of projects that use the principles of effective altruism by increasing their access to talent, capital, and knowledge.
Lastly, Jamie is President of the board at Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history. (Most of the hard work is being done by the wonderful Jonah Boucher though!)
Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, as co-founder and researcher at Animal Advocacy Careers (which helps people to maximise their positive impact for animals), and as a Program Associate at Macroscopic Ventures (grantmaking focused on s-risks).
I think this is pretty cool. Good to see some relevant benchmarks collected in the same place, and I can see how this is handy as a communication tool.
From a quick skim I wasn't really sure how to interpret the main graph, and there didn't seem to be an explanation. In particular, the Y axis is a percentage, but a percentage of what? Some of the benchmarks are projected to reach 100% within a year; does that mean you project AI takeover within a year, etc.?
(Sharing less as 'please answer my question' and more as 'user feedback' -- if I'm confused by this, I imagine lots of people who know (even) less than me about AI (safety) will also be confused; though maybe they're not your target audience)
Interesting question. Sorry not to answer directly, but here are some questions that would help clarify yours:
What is "the geopolitical situation of the world", and how does one improve it? Why is that plausibly one of the top most useful/cost-effective things one can do with their time/money?
(And who is this EA of which you speak? I haven't met them. EA is a community and a question, not a single hierarchical organisation. Although if you think there is something that is important and neglected, the Forum is a handy platform to make the case!)
This is a very cool opportunity! And a nice, clear, writeup.
I want to add that I think there's a much more "maximal" version that could be a great option too: working full-time on trying to make this super impactful. I agree that some of the opportunities you mention seem very promising, and it's likely worth someone really investing in trying to take advantage of them!
See also this "playbook for new groups".
I just tried it out and had a similar impression. I think this is a cool idea and am excited Ozzie created it, but suspect it needs active development (and/or improvements in the underlying models over time) before it's useful to me, due to similarish issues to what you found. I'll likely try a couple more times with drafts though!
Thanks so much to everyone who took the time to play through this and provide such thoughtful feedback! I really appreciate it, and apologies for the delay in implementing these changes.
Here's what I've updated based on your suggestions:
Bug Fixes:
UX Improvements:
Accuracy: Pasture-raised labeling (@david_reinstein): Added "Certified Humane" specification throughout and included a disclaimer that "pasture-raised" isn't USDA-regulated for eggs, so uncertified products may vary significantly.
I do agree with several of you (Sanjay, Ben) that full gamification would make it a lot better. This just seems like a change too far for my meagre vibe-coding capabilities and the limited time I have for a spare-time/fun project. If someone wanted to take this on, run with it further, and make it actually good, I'd be excited about that though! I'd be happy to hand over the code etc.
The updated version is live at the same link. Thanks again for helping make this better! I also just posted on LinkedIn if anyone wants to share etc from there.
I think this is an interesting question! I think you're right to point out some of the factors that influence it, including cause area and role type (and the payment norms for them). I also think organisational cultural norms affect it quite heavily.
My guess is that if you had a large enough dataset and controlled for enough factors, salary would predict 'role leverage' quite well. But I don't expect it to be very useful when choosing between roles to apply for, because the correlation will be weak, the dataset you're actually comparing across is small, etc. Basically, there are too many predictors and too much noise for it to be very informative. I think you're better off just reading the descriptions, or using other heuristics like cause area, job title, etc., if you're trying to filter quickly.
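To illustrate the noise point, here's a quick hypothetical sketch (all numbers made up, not based on any real salary data): even if salary does track 'role leverage' on average, the correlation you'd observe across the handful of roles you're actually comparing is very unstable, so ranking roles by salary wouldn't tell you much.

```python
import numpy as np

# Hypothetical illustration with made-up numbers: suppose "role leverage" is
# driven by many factors and salary tracks it only weakly (signal 0.3, noise 1.0).
rng = np.random.default_rng(0)
n_roles = 30  # roughly the number of roles someone might realistically compare

correlations = []
for _ in range(1000):
    leverage = rng.normal(size=n_roles)                 # unobserved "true" leverage
    salary = 0.3 * leverage + rng.normal(size=n_roles)  # weak signal plus lots of noise
    correlations.append(np.corrcoef(salary, leverage)[0, 1])

# With only ~30 roles, the observed salary-leverage correlation varies widely
# from sample to sample, so salary gives only a very noisy ranking of roles.
print(f"Median correlation: {np.median(correlations):.2f}")
print(f"5th-95th percentile range: {np.percentile(correlations, 5):.2f} "
      f"to {np.percentile(correlations, 95):.2f}")
```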