hb574

Comments

How Could AI Governance Go Wrong?

Good post! I'm curious whether you have any thoughts on the potential conflicts or contradictions between the "AI ethics" community, which focuses on narrow AI and harms from current AI systems (members include Gebru and Whittaker), and the AI governance community that has sprung out of the AI safety/alignment community (e.g. GovAI)? In my view, these two groups are quite opposed in their priorities and ways of thinking about AI (take a look at Timnit Gebru's Twitter feed for a very stark example), and trying to put them under one banner doesn't really make sense. This contradiction seems to encourage some strange tactics, such as AI governance people proposing various regulations of narrow AI purely to slow down timelines rather than for any of the usual reasons given by the AI ethics community, which could lead to a significant backlash.

Transcripts of interviews with AI researchers

This is great work; I think it's really valuable to get a better sense of what AI researchers think of AI safety.

Often when I ask people in AI safety what they think AI researchers make of AGI and alignment arguments, they don't have a clear idea and just default to some variation on "I'm not sure they've thought about it much". Yet as these transcripts show, many AI researchers are well aware of AI risk arguments (in my anecdotal experience, many have read at least part of Superintelligence) and hold more nuanced views. So I'm worried that the AI safety community is insular with respect to mainstream AI researchers' thinking on AGI. These are people who in many cases have spent their working lives thinking about AGI, so their views are highly valuable, and this work goes some way towards reversing that insularity.

A nice follow-up direction would be to compile a list of common arguments AI researchers give for being less worried about AI safety (or for working on capabilities, which is a separate question), along with counterarguments and possible counter-counterarguments. Do you plan to touch on this kind of thing in your further work with the 86 researchers?

The AI Messiah

Just my anecdotal experience, but when I ask a lot of EAs working in or interested in AGI risk why they think it's a hugely important x-risk, one of the first arguments that comes to people's minds is some variation on "a lot of smart people [working on AGI risk] are very worried about it". My model of many people in EA interested in AI safety is that they use this heuristic as a dominant factor in their reasoning — which is perfectly understandable! After all, formulating a view of the magnitude of risk from transformative AI without relying on any such heuristics is extremely hard. But I think this post is a valuable reminder that it's not particularly good epistemics for lots of people to think like this.

The title of this post makes a general claim about the long-term future, and yet nowhere in the post do you mention any x-risks other than AI. Why should we not expect other x-risks to outweigh these AGI considerations, given that they may not fit into this framework of extinction, an OK outcome, or a utopian outcome? I am not necessarily convinced that pulling the utopia handle on actions related to AGI (like the four you suggest) has a greater effect on P(utopia) than some set of non-AGI-related interventions.

Replicating and extending the grabby aliens model

Looks like great work! Do you plan to publish this in a similar venue to previous papers on this topic, such as in an astrophysics journal? I would be very happy to see more EA work published in mainstream academic venues.

FLI launches Worldbuilding Contest with $100,000 in prizes

Isn't "Technology is advancing rapidly and AI is transforming the world sector by sector" perfectly consistent with a singularity? Perhaps it would be a rather large understatement, but still basically true.

A case for the effectiveness of protest

There's a lot of good work here and I don't have time to analyse it in detail, but I had a look at some of your estimates, and I think they depend a bit too heavily on subjective guesses about the counterfactual impact of XR to be all that useful. I can imagine that if you vary the parameters for how much XR might have brought forward net zero, or for the chance that it directly caused net zero pledges to be made, you end up with very large bounds on your ultimate effectiveness numbers. Personally, I don't think it's all that reasonable to suggest that, for example, making a net zero pledge one or two years earlier means a corresponding one- or two-year difference in the time to actually hit net zero (the relationship seems highly non-linear, and there could reasonably be no difference at all). To illustrate the kind of sensitivity I mean, here's a very rough Monte Carlo sketch below, using entirely made-up parameter ranges rather than your actual numbers.
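
```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical parameter ranges, purely for illustration (not the post's numbers):
# probability that XR counterfactually caused the net zero pledges at all
p_caused = rng.uniform(0.0, 0.5, n)
# years by which the pledges were brought forward, if XR did cause them
years_earlier = rng.uniform(0.0, 2.0, n)
# fraction of the pledge-timing shift that translates into an actual shift
# in when net zero is reached (0 = no effect, 1 = fully linear pass-through)
pass_through = rng.uniform(0.0, 1.0, n)
# illustrative benefit per year of earlier net zero (arbitrary units)
benefit_per_year = 1.0

impact = p_caused * years_earlier * pass_through * benefit_per_year

print("median impact:", np.median(impact))
print("90% interval:", np.percentile(impact, [5, 95]))
```

Even this toy version gives a 90% interval spanning roughly two orders of magnitude, with the low end close to zero, which is why I'm wary of the point estimates.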

In addition, I think you underweight "zeitgeist effects": XR gained traction at precisely the time when many other climate groups, and climate awareness in general, were also gaining traction, which makes attributing specific outcomes to XR very difficult, although of course XR is part of that zeitgeist. It therefore seems possible to model SMOs as "riding the wave" of public sentiment: contributing to popular awareness of their cause to some extent, but acting more as a manifestation of popular awareness than as a cause of it.