Aidan O'Gara

Aidan O'Gara's Comments

New data suggests the ‘leaders’’ priorities represent the core of the community

I really love the new section in Key Ideas, "Other paths that may turn out to be very promising". I've been concerned that 80K's messaging is too narrow and focuses too heavily on just your priority paths, almost to the point of denigrating other EA careers. I think this section does a great job of contextualizing your recommendations and encouraging more diverse exploration, so thanks!

Somewhat related: I'm guessing 80K focuses on a narrower range of priority paths partly because, as an organization, specializing is valuable for its own sake. If there are a dozen equally impactful areas you could work in, you won't put equal effort into each one - you're better off picking a few and specializing, even if the decision is arbitrary, so you can reap returns to scale within those fields by learning, making connections, hiring specialists, and building other field-specific capabilities.

If you actually think about things this way, I would suggest saying so more explicitly in your Key Ideas, because I didn't realize it for a long time and it really changes how I think about your recommendations.

(Unrelated to this post, but hopefully helpful)

CEA's Plans for 2020

Why are you moving to Oxford?

If you value future people, why do you consider near term effects?

Provably successful near-term work could drive the growth of the EA movement, benefitting the long term. I'd guess that more people join EA because of GiveWell and AMF than because of AI safety and biorisk. That's because (a) near-term work is more popular in the mainstream, and (b) near-term work can better prove its success. More obvious successes will probably drive more EA growth. On the other hand, if EA makes a big bet on AI safety and 30 years from now we're no closer to AGI or to seeing the effects of AI risk, the EA movement could sputter. It's hard to imagine demonstrably failing like that in near-term work. Maybe the best gift we can give the future isn't direct longtermist work, but rather enabling the EA movement of the future.

I'm not actually sure I buy this argument. If we're at the Hinge of History and have more leverage over the expected value of the future than anyone in the future will have, maybe some longtermist direct work now is more important than enabling more longtermist direct work in the future. Also, maybe EA's best sales pitch is that we don't do sales pitches - we follow the evidence even to less popular conclusions like longtermism.

If you value future people, why do you consider near term effects?

If it's extremely difficult to figure out the direct effects of near-term interventions, then maybe it's proportionally harder to figure out long-term effects - even to the point of complex cluelessness becoming de facto simple cluelessness.

Some people argue from a “skeptical prior”: simply put, most efforts to do good fail. The international development community certainly seems like a “broad coalition of trustworthy people”, but their best guesses are almost useless without hard evidence.

If you're GiveWell-level pessimistic about charities having their intended impact even with real-time monitoring and evaluation of measurable impacts, you might be utterly and simply clueless about all long-term effects. In that case, long-term EV is symmetrical and short-term effects dominate.

What posts do you want someone to write?

Makes a lot of sense - I'm sure Vox and the New York Times are interested in very different kinds of submissions, and writing with a particular publication's style in mind probably dramatically increases the odds of publication.

I still wonder what the success rate here is - closer to 1% or to 10%? If the latter, I could see this being pretty impactful and possibly scalable.

What posts do you want someone to write?

Similarly, an AMA from someone working at an EA org who otherwise isn’t personally very engaged with EA. Maybe they really disagree with EA, or more likely, they’re new to EA ideas and haven’t identified with EA in the past.

They’ll be deeply engaged on the substantive issues but will bring different identities and biases, maybe offering important new perspectives.

What posts do you want someone to write?

That’s a super cool idea.

  • What writing currently exists like this? Vox’s Future Perfect, maybe a few one-off articles in other major publications?
  • Where’s best to publish this? Feels like a lot of work for a blogpost, but I doubt the NYT is looking for unsolicited submissions - are there publishing platforms that would be interested in this?

Ben Cottier's Shortform

I'd love to see this post and, more generally, more discussion of which kinds of x-risks and s-risks matter most. 80K's views seem predicated on deeply held, nuanced, and perhaps unconventional positions on longtermism, and it can be hard to learn all the context needed to catch up on those discussions.

One distinction I like is OpenPhil's discussion of Level 1 and Level 2 GCRs: https://www.openphilanthropy.org/blog/long-term-significance-reducing-global-catastrophic-risks

Quantifying lives saved by individual actions against COVID-19

Yeah, I'd love to see it copied over here - it looks like an interesting analysis.
