Outreach and Community Building Associate @ Legal Priorities Project
Pursuing a graduate degree (e.g. Master's)
Working (0-5 years experience)
824 · Arlington, VA, USA · Joined Sep 2018


Master in Public Policy student at Georgetown University. Previously worked in operations at Rethink Charity et al. and co-founded EA Anywhere.


Topic Contributions

Update: We've extended the deadline to apply for LPSI to June 24. If you think you might be a good fit, we'd love to see your application!

Very valid! I guess I'm thinking of this as "approaches EA values" [verb] rather than "values" [noun]. I think most if not all of the most abstract values EA holds are still in place, but the distinction between core and secondary values is important.

I made this so I could easily link all the posts on this, but then I realized they're almost all here. Feel free to delete!

99% Invisible had a podcast episode on this that I found really interesting. The scale of the problem had gone completely over my head. Great write-up!

That makes sense, though I feel like this still applies. It's still not great optics to pay lots of money to people working on global poverty, but it's far from unheard of, and if there's concrete evidence that those people are having an impact, then I think a lot of people would consider it justified.

I think the reason it's acceptable for AI researchers to bring in large sums of money has more to do with the market rate for their skillset and less with the cause directly. If someone were paid a high salary to build complex software that solved poverty (if such a thing existed), I would guess that it would be viewed roughly the same way. On the other hand, if you pay longtermist and/or global poverty community-builders lots of money, that looks much worse.

Maybe I'm misunderstanding this, but I disagree. I think the average person thinks spending tons of money on global health and poverty is good, particularly because it has concrete, visible outcomes that show whether or not the work is worthwhile (and these quick feedback loops mean the money can usually be spent on projects we have stronger confidence in).

But I think that spending lots of money on people who might have a .000001% chance of saving the world (in ways that often seem absurd to the average person) is pretty bad optics. A lot of non-EAs don't think we can realistically make traction on existential risk because they haven't seen any evidence of traction. Plus, longtermists/x-risk people can come across as having an unfounded sense of grandiosity, because there are a whole bunch of people out there who think their various projects will drastically transform the world, and most people won't assume that the longtermist approach is the only one that'll actually work.

Is the source of this graphic public? It affected my perspective a lot, and it'd be great to have a clean copy. :)

This is fantastic, thanks for writing it up! I've been hearing a lot about federal consulting (it seems to be one of the most common careers people pursue after an MPP), so it's helpful to see an analysis from an EA perspective. :)
