HowieL

I'm the chief of staff and also work on strategy at 80,000 Hours. Before that, I was the initial program officer for global catastrophic risk at Open Philanthropy. Comments here are my own views only, not my present or past employers', unless otherwise specified.

Comments

Some longtermist fiction

+1 to not reading Consider Phlebas. I've been reading it because I wanted to check out the Culture series and was compulsive about starting with the first book, even though I'd heard the later ones were better.

I haven't gotten much out of it and think it was a mistake. 

Open Thread: July 2021

Welcome! Glad you found us.

RyanCarey's Shortform

Yep - agree with all that, especially that it would be cool for somebody to look into the general question.

RyanCarey's Shortform

My impression is that a lot of her quick success came from her antitrust work tapping into progressive anti-Big Tech sentiment. It's possible EAs could somehow fit into the biorisk zeitgeist, but otherwise I think it would take a lot of thought to figure out how an EA could replicate this.

Intervention options for improving the EA-aligned research pipeline

Fair enough. I guess it just depends on exactly how broad/narrow a category Linch was gesturing at.

Intervention options for improving the EA-aligned research pipeline

I don't think Allan's really an example of this.

 

I think I’ve always been interested in computers and artificial intelligence. I followed Kasparov and Deep Blue, and it was actually Ray Kurzweil’s Age of Spiritual Machines, which is an old book, 2001 … It had this really compelling graph. It’s sort of cheesy, and it involves a lot of simplifications, but in short, it shows basically Moore’s Law at work and extrapolated ruthlessly into the future. Then, on the second y-axis, it shows the biological equivalent of computing capacity of the machine. It shows a dragonfly and then, I don’t know, a primate, and then a human, and then all humans.

Now, that correspondence is hugely problematic. There's lots we could say about why that's not a sensible thing to do, but what I think it did communicate was that the likely extrapolation of trends is such that you are going to have very powerful computers within a hundred years. Who knows exactly what that means and whether, and in what sense, it's human level or whatnot, but the fact that this trend is coming on the timescale it was, was very compelling to me. But at the time, I thought Kurzweil's projection of the social dynamics of how extremely advanced AI would play out was unlikely. It's very optimistic and utopian. I actually looked for a way to study this all through my undergrad. I took courses. I taught courses on technology and society, and I thought about going into science writing.

And I started a PhD program in science and technology studies at Cornell University, which sounded vague and general enough that I could study AI and humanity, but it turns out that science and technology studies, especially at Cornell, means more of a social constructivist approach to science and technology.

. . . 

Okay. Anyhow, I went into political science because … Actually, I initially wanted to study AI in something, and I was going to look at the labor implications of AI. Then I became distracted, as it were, by great power politics and great power peace and war. It touched on the existential risk dimensions that I didn't have the words for yet, but which were sort of a driving interest of mine. It's strategic, which is interesting. Anyhow, that's what I did my PhD on, and topics related to that, and then my early career at Yale.

I should say, during all this time I was still fascinated by AI. At social events or while having a chat with a friend, I would often turn to AI and the future of humanity, and often conclude a conversation by saying, "But don't worry, we still have time because machines are still worse than humans at Go." Right? Here is a game that's well-defined. It's perfect information, two players, zero-sum. The fact that a machine can't beat us at Go means we have some time before they're writing better poems than us, before they're making better investments than us, before they're leading countries.

Well, in 2016, DeepMind revealed AlphaGo, and it was almost as if this canary in the coal mine that Go was to me, sitting deep in my subconscious, keeled over and died. That sort of activated me. I realized that for a long time I'd said that post-tenure I would start working on AI, but with that, I realized we couldn't wait. I actually reached out to Nick Bostrom at the Future of Humanity Institute and began conversations and collaboration with them. It's been exciting, and there's been lots of work to do that we've been busy with ever since.

https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/

A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it

Fwiw, for mental health I'm not sure whether therapy is more likely to treat the 'root causes' than medication is. You could have a model where some 'chemical thingie' that can be treated by meds is the root cause of mental illness, and the actual cognitive thoughts treated by therapy are the symptoms.

In reality, I'm not sure the distinction is even meaningful given all the feedback loops involved. 

How well did EA-funded biorisk organisations do on Covid?

"I don't think most people would consider prevention a type of preparation. EA-funded biorisk efforts presumably did not consider it that way. And more to the point, I do not want to lump prevention together with preparation because I am making an argument about preparation that is separate from prevention. So it's not about just semantics, but precision on which efforts did well or poorly."

I think it actually is common to include prevention under the umbrella of pandemic preparedness. For example, here's the Council on Foreign Relations' Independent Task Force on Improving Pandemic Preparedness: "Based on the painful lessons of the current pandemic, the Task Force makes recommendations for improving U.S. and global capacities to deliver each of the three fundamentals of pandemic preparedness: prevention, detection, and response." Another example: https://www.path.org/articles/building-epidemic-preparedness-worldwide/

So it might be helpful to specify what you're referring to by preparation.

How well did EA-funded biorisk organisations do on Covid?

I think research into novel vaccine platforms like mRNA is a top priority. It's neglected in the sense that way more resources should be going into it, but my impression[1] is also that the USG makes up a decent proportion of funding for early-stage research into that kind of thing. So that's a sense in which the U.S.'s preparedness was probably good relative to other countries, though not in an absolute sense.

Here's an article I skimmed about the importance of govt (mostly NIH) funding for the development of mRNA vaccines. https://www.scientificamerican.com/article/for-billion-dollar-covid-vaccines-basic-government-funded-science-laid-the-groundwork/

Fwiw, I think it's probably not the case that the mRNA stuff was that much of a surprise. This 2018 CHS report had self-amplifying mRNA vaccines as one of ~15 technologies to address GCBRs. https://jhsphcenterforhealthsecurity.s3.amazonaws.com/181009-gcbr-tech-report.pdf

 

[1] Though I'm rusty, since I haven't worked directly on biorisk in five years and was never an expert.
