charlieh943
I’d guess less than 1/4 of the people had engaged with AIS (e.g. read some books/articles). Perhaps 1/5 had heard of EA before. Most were interested in AI, though.

Ah nice! I had forgotten about this Anscombe article, which is where this point had come from. Thanks for pointing that out.

Interesting! Makes sense that this is common advice. I’ve heard similar things from CBT therapists, as you mention.

That point was fairly anecdotal, and I don’t think it contributes much to the argument in this section. I place more weight on the Stanford article and Chao-Hwei’s responses.

I don’t think the quote you mention is exactly what Singer believes. He’s setting up the problem for Chao-Hwei to respond to. His own view is that “suffering is bad” is a self-evident perception. Perhaps this is subtly different from Singer disliking suffering, or wanting others to alleviate it. Perhaps it is self-evident in the same way colour is; I think moral realists sometimes lean on this analogy.

Thank you for writing this piece, Sarah! I think the difference stated above between: A) counterfactual impact of an action, or a person; B) moral praise-worthiness is important. 

You might say that individual actions, or lives, have large differences in impact, but remain sceptical of the idea of (intrinsic) moral desert/merit – because individuals' actions are conditioned by prior causes. Your post reminded me a lot of Michael Sandel's book, The Tyranny of Merit. Sandel takes issue with the attitude of "winners" within contemporary meritocracy who see themselves as deserving of their success. This seems similar to your concerns about hubris amongst "high-impact individuals".

I'm so sorry it's taken me so long to respond, Mikhail!

<I would like to note that none of that had been met with corporations willing to spend potentially dozens of billions of dollars on lobbying>

I don't think this is true for GMOs, fossil fuels, or nuclear power. It's important to distinguish total lobbying capacity/potential from the actual amount spent on lobbying. Total annual technology lobbying is on the order of hundreds of millions; the amount allocated to AI lobbying is, by definition, less. This is similar to (or, I suspect, lower than) total annual biotechnology lobbying spending for GMOs. Annual climate lobbying is over £150 million per year, as I mentioned in my piece. The stakes are also high for nuclear power: as mentioned in my piece, legislation in Germany to extend plant lifetimes in 2010 offered around €73 billion in extra profits for energy companies, and some firms sued for billions of euros after Germany's reversal. (Though I couldn't find an exact figure for nuclear lobbying.)

< none of these clearly stand out to policymakers as something uniquely important from the competitiveness perspective > 

I also feel this is too strong. Reagan's national security advisors were reluctant about his arms control efforts in the 1980s because of national security concerns. Some politicians in Sweden believed nuclear weapons were uniquely important for national security. If your point is that AI is more strategically important than these other examples, then I agree with you, though your phrasing is overly strong.

< AI is more like railroads > 

I don't know if this is true... I wonder how strategically important railroads were? I also wonder how profitable they were? There seems to have been much more state involvement in railroads than in AI... Though, this could be an interesting case study project!

< AI is more like CFCs in the eyes of policymakers, but for that, you need a clear scientific consensus on the existential threat from AI > 

I agree you need scientific input, but CFCs also saw widespread public mobilisation (as described in the piece). 

< incentivising them to address the public's concerns won’t lead to the change we need >

This seems quite confusing. Surely, this depends on what the public's concerns are? 

< the loudest voices are likely to make claims that the policymakers will know to be incorrect >

This also seems confusing to me. If you believe that policymakers reliably sort the "loudest voices" from real scientists, why do you think that regulations with "substantial net-negative impact" passed with respect to GMOs/nuclear?

< Also, I’m not sure there’s an actual moratorium on GM crops in Europe > 

Yes, by "moratorium" I'm referring to the de-facto moratorium on new GMO approvals from 1999 to 2002. In general, though, Europe grows far fewer GM crops than other countries: 0.1 million hectares annually versus >70 million hectares in the US. I wasn't aware that Europe imports GMOs from abroad.

Sorry that this is still confusing. 5-15 is the confidence interval/range for the counterfactual impact of protests, i.e. p(event occurs with protests) - p(event occurs without protests) = somewhere between 5 and 15. It is not that p(event occurs with protests) = 5 and p(event occurs without protests) = 15, which wouldn't make sense.
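To spell out the arithmetic (a minimal sketch; the two probabilities below are hypothetical placeholders, in percentage points, not figures from my piece):

```python
# Hypothetical numbers, in percentage points: the 5-15 range applies to
# the *difference* between the two probabilities, not to either one alone.
p_with_protests = 40     # hypothetical p(event occurs with protests)
p_without_protests = 30  # hypothetical p(event occurs without protests)

counterfactual_impact = p_with_protests - p_without_protests
print(counterfactual_impact)             # 10
print(5 <= counterfactual_impact <= 15)  # True: inside the 5-15 range
```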

I agree restraining AGI requires "saying no" prior to deployment. In this sense, it is more similar to geo-engineering than fossil fuels: there might be no 'fire alarm'/'warning shot' for either. 

Though, the net present value of AGI (as perceived by AI labs) still seems very high, as evidenced by the high investment in AGI firms. So, in this sense, it carries similar commercial incentives for continued development as the continued deployment of GMOs/fossil fuels/nuclear power. I think the GMO example might be the best, as it had both strong profit incentives and no 'warning shots'.

Thank you! 

I think your point about hindsight bias is a good one. I think it is true of technological restraint in general: "Often, in cases where a state decided against pursuing a strategically pivotal technology for reasons of risk, or cost, or (moral or risk) concerns, this can be mis-interpreted as a case where the technology probably was never viable." 

I haven't discounted protests which were small – GMO campaigns and SAI advocacy were both small scale. The fact that unsuccessful protests are more prolonged might make them more psychologically available: e.g. Just Stop Oil campaigns. I'm slightly unsure what your point is here?

I also agree that other examples of restraint are also relevant – particularly if public pressure was involved (like for Operation Popeye, and Boeing 2707).

Hi Vaipan, I appreciate that! 

I agree that the political climate is definitely important. The presence of elite allies (Swedish Democrats, President Nazarbayev), and their responsiveness to changes in public opinion, was likely important. I am confident the same is true for GM protests in 1990s Europe: decision-making rested with national governments (which were more responsive to public perceptions than the FDA in the USA), and there were sympathetic Green Parties in coalition governments in France/Germany.

I agree that understanding these political dynamics for AI is vitally important – and I try to do so in the GM piece. One key reason to be pessimistic about AI protests is that there aren't many elite political allies for a pause. I think the most plausible TOCs for AI protests, for now, are about raising public awareness/shifting the Overton Window/etc., rather than actually achieving a pause.
