davekasten
I think if your approach leads you to conclude that Tristan is "threatening not to care" about AI risk, then you're really missing the mark, Holly.

Tristan has demonstrably made significant personal sacrifices to work on AI; he literally worked with Felix on your team on an AI Safety Camp project arguing that grassroots Congressional outreach is good (I was also on that team), and he is continuing to look for opportunities to work on AI risk reduction during and after grad school.

Tristan is, in short, exactly the kind of person I'd recommend you consider if you were looking to hire someone else in DC. He is very much aligned with your core strategy! If I had to guess, I'd guess that he considers himself a supporter of PauseAI US's approach!

Given how you're engaging on this thread, I'll bet that you'll reply to this post by saying something like, "see, his response proves how pernicious EA culture is, that it can corrupt even people who should be on board." I would politely ask you to consider the possibility instead that, at least sometimes, you're shooting at the wrong targets.

I think the average person who is likely to come across this would benefit from reaching the "influence" and "participate" content more quickly, perhaps at the cost of a shorter "learn" section.

People often feel a lack of agency when they learn about AI risk. Giving them that agency back can be a really good thing.

Note that, to have a full analysis here, you should also understand a) how the US government sees China and why, and b) how China sees the US and why.