HowieL

1711 karma · Joined Dec 2015

Bio

I'm the CEO of 80,000 Hours. Before that, I held various roles at 80k, most recently Chief of Staff.

I was also the initial program officer for global catastrophic risk at Open Philanthropy. Comments here are my own views only, not my present or past employers', unless otherwise specified.

Comments (182)

My guess is that Part II, on trajectory changes, will have a bunch of relevant stuff. Maybe also a bit of Part 5. But unfortunately I don't remember too clearly.

It's been a while since I read it, but Joe Carlsmith's series on expected utility might help some.

[My impression. I haven't worked on grantmaking for a long time.] I think this depends on the topic, the size of the grant, how technical the grant is, etc. Some grantmakers are themselves experts. Some grantmakers have experts in house. For technical or complicated grants, I think non-expert grantmakers will usually talk to at least some experts before pulling the trigger, but it depends on how clear-cut the case for the grant is, how big the grant is, etc.

I think parts of What We Owe the Future by Will MacAskill discuss this approach a bit.

Others, most of which I haven't fully read and which aren't always fully on topic:

A much narrower recommendation for nearby problems is Overcoming Perfectionism (roughly a CBT workbook).

I'd recommend it to some EAs who are already struggling with these feelings (and I know some who've really benefited from it). (It's not precisely aimed at this, but I think it can be repurposed for a subset of people.)

I wouldn't recommend it to students recently exposed to EA who are worried about these feelings in the future.

If you haven't come across it, a lot of EAs have found Nate Soares' Replacing Guilt series useful for this. (I personally didn't click with it, but I have lots of friends who did.)

I like the way some of Joe Carlsmith's essays touch on this. 

FYI: subsamples of that survey were asked about this in other ways, which gave some evidence that "extremely bad outcome" was roughly equivalent to extinction.


Explicit P(doom) = 5-10%

The levels of badness involved in that last question seemed ambiguous in retrospect, so I added two new questions about human extinction explicitly. The median respondent’s probability of x-risk from humans failing to control AI[1] was 10%, weirdly more than median chance of human extinction from AI in general,[2] at 5%. This might just be because different people got these questions and the median is quite near the divide between 5% and 10%. The most interesting thing here is probably that these are both very high: it seems the ‘extremely bad outcome’ numbers in the old question were not just catastrophizing merely disastrous AI outcomes.

1. Or, ‘human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species’

2. That is, ‘future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species’

Thanks for this! It was really useful and will save 80,000 Hours a lot of time.
