HowieL

1557 · Joined Dec 2015

Bio

I'm CEO at 80,000 Hours. Before that, I was in various roles at 80k, most recently Chief of Staff.

I was also the initial program officer for global catastrophic risk at Open Philanthropy. Comments here are my own views only, not my present or past employers', unless otherwise specified.

Comments (163)

Interesting vs. Important Work - A Place EA is Prioritizing Poorly

I agree with Caleb that theoretical AIS, infinite ethics, and rationality techniques don't currently seem to be overprioritized. I don't think there are all that many people working full-time on theoretical AIS (I would have guessed fewer than 20). I'd guess less than 1 FTE on infinite ethics. And not a ton on rationality, either.

Maybe your point is more about academic or theoretical research in general? I think FHI and MIRI have both gotten smaller over the last couple of years and CSER's work seems less theoretical. But you might still think there's too much overall?

My impression is that there's a much larger supply of empirical AI safety research and, maybe, of theoretical AI safety research written by part-time researchers on LessWrong. That doesn't seem to be the kind of thing you're talking about, though.

There's a nearby claim I agree with, which is that object-level work on specific cause areas seems undervalued relative to "meta" work.

"Academic-like research into interesting areas of AI risk is far easier to get funded by many funders than direct research into, say, vaccine production pipelines."

My guess is that this has less to do with valuing theory or interestingness over practical work, and more to do with funders prioritizing AI over bio. Curious if you disagree.

Leaning into EA Disillusionment

Know that other people have gone through the disillusionment pipeline, including (especially!) very smart, dedicated, caring, independent-minded people who felt a strong affinity for EA, including people you may have seen give talks at EA Global or who have held prestigious jobs at EA orgs.

Also, I think even people like this who haven't gone through the disillusionment pipeline are often a lot more uncertain about many (though not all) things than most newcomers would guess. 

Leaning into EA Disillusionment

Thanks for writing this post. I think it improved my understanding of this phenomenon and I've recommended reading it to others.

Hopefully this doesn't feel nitpicky, but if you'd be up for sharing, I'd be pretty interested in roughly how many people you're thinking of:

"I know at least a handful of people who have experienced this (and I’m sure there are many more I don’t know)—people who I think are incredibly smart, thoughtful, caring, and hard-working, as well as being independent thinkers. In other words, exactly the kind of people EA needs. Typically, they throw themselves into EA, invest years of their life and tons of their energy into the movement, but gradually become disillusioned and then fade away without having the energy or motivation to articulate why."

I'm just wondering whether I should update toward this being much more prevalent than I already thought it was.

On Deference and Yudkowsky's AI Risk Estimates

"My best guess is that I don't think we would have a strong connection to Hanson without Eliezer"

Fwiw, I found Eliezer through Robin Hanson.

Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins

Agree they have a bunch of very obnoxious business practices. Just FYI, you can change a setting so nobody can see whose pages you look at.

Idea: Pay experts to provide detailed critiques / analyses of EA ideas

I think Open Philanthropy has done some of this. For example:

"The Open Philanthropy technical reports I've relied on have had significant external expert review. Machine learning researchers reviewed Bio Anchors; neuroscientists reviewed Brain Computation; economists reviewed Explosive Growth; academics focused on relevant topics in uncertainty and/or probability reviewed Semi-informative Priors. (Some of these reviews had significant points of disagreement, but none of these points seemed to be cases where the reports contradicted a clear consensus of experts or literature.)"

https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand

Response to Recent Criticisms of Longtermism

Was this in the deleted tweet? The tweet I see is just him tagging someone with an exclamation point. I don't really think it would be accurate to characterise that as "Torres supports the 'voluntary human extinction' movement".

Deferring

Yeah, that does sell me a bit more on "delegating choice".

Deferring

I think that's an improvement, though "delegating" sounds a bit formal, and it's usually the authority doing the delegating. Would "deferring on views" vs. "deferring on decisions" get what you want?
