This is a special post for quick takes by Aryeh Englander.

Thought: In what ways do EA orgs / funds go about things differently from the rest of the non-profit (or even for-profit) world? If they do things differently: Why? How much has that been analyzed? How much have they looked into the literature / existing alternative approaches / talked to domain experts?

Naively, if the thing they do differently is not related to the core differences between EA / that org and the rest of the world, then I'd expect it to be a bit like re-inventing the wheel: not a good use of resources unless you have a good reason to think you can do better.

Here's a perspective I mentioned recently to someone:

Many people in EA seem to think that very few people outside the "self-identifies as an EA" crowd really care about EA concerns. Similarly, many seem to think that very few researchers outside of a handful of EA-affiliated AI safety researchers really care about existential risks from AI.

My perspective, by contrast, is that the basic claims of EA are actually pretty uncontroversial. I've mentioned the basic ideas to many people, and I can recall getting pushback only once - and that was from a self-professed Kantian who already knew about EA and rejected it because they associated it with utilitarianism. Similarly, I've presented some of the basic ideas behind AI risk to engineers many times and have only very rarely gotten any pushback. Mostly people agree that it's an important set of issues to work on; they just add that there are other issues we also need to focus on (maybe even to a greater degree), that they can't work on it themselves because they have a regular job, etc. Moreover, I'm pretty sure that for a lot of such people, if you compensated them sufficiently and removed the barriers preventing them from, e.g., working on AGI safety, they'd be highly motivated to work on it. I mean, sure, if I can get paid my regular salary or even more and maybe also help save the world, then that's fantastic!

I'm not saying that it's always worth removing all those barriers. In many cases it may be better to hire someone who is so motivated to do the job that they'd be willing to sacrifice for it. But in other cases it might be worth considering that someone who isn't "part of EA" may nonetheless agree that EA is great, and that all you have to do is remove the barriers for that person (financial / career / reputational / etc.) for them to make some really great contributions to the causes that EA cares about.

Questions:

  1. Am I correct that the perspective I described in my first paragraph is common in EA?
  2. Do you agree with the perspective I'm suggesting?
  3. What caveats and nuances am I missing or glossing over?

[Note: This is a bit long for a shortform. I'm still thinking about this - I may move it to a regular post once I've thought about it a bit more and maybe gotten some feedback from others.]
