

I suspect that it wouldn't be that hard to train models at datacenters outside of CA (my guess is this is already done to a decent extent today: only 1/12 of Google's US datacenters are in CA, according to Wikipedia). Models are therefore a pretty elastic regulatory target. 

Data as a regulatory target is interesting, in particular if it transfers ownership or power over the data to data subjects in the relevant jurisdiction. That might e.g. make it possible for CA citizens to lodge complaints about potentially risky models being trained on data they've produced. I think the whole domain of data as a potential lever for AI governance is worthy of more attention. Would be keen to see someone delve into it. 

I like the thought that CA regulating AI might be seen as a particularly credible signal that AI regulation makes sense, and that it might therefore be more likely to produce a de jure effect. I don't know how seriously to take this mechanism, though. E.g., to what extent is it overshadowed by CA being heavily Democratic? The most promising way to figure this out in more detail seems to me to be talking to other state legislators and looking at the extent to which previous CA AI-relevant regulation or policy narratives have seen any diffusion. Data privacy and facial recognition stand out as the most promising areas to look into, but maybe there's also relevant material on autonomous vehicles. 


That sounds like really interesting work. Would love to learn more about it. 

"but also because a disproportionate amount of cutting-edge AI work (Google, Meta, OpenAI, etc) is happening in California." Do you have a take on the mechanism by which this leads to CA regulation being more important? I ask because I expect most regulation in the next few years to focus on what AI systems can be used in what jurisdictions, rather than what kinds of systems can be produced. Is the idea that you could start putting in place regulation that applies to systems being produced in CA? Or that CA regulation is particularly likely to affect the norms of frontier AI companies because they're more likely to be aware of the regulation? 

We've already started to do more of this. Since May, we've responded to 3 RFIs and similar requests (you can find them here): the NIST AI Risk Management Framework; the US National AI Research Resource interim report; and the UK Compute Review. We're likely to respond to the AI regulation policy paper as well, though we've already provided input to this process via Jonas Schuett and me being on loan to the Brexit Opportunities Unit to think about these topics for a few months this spring. 

I think we'll struggle to build expertise in all of these areas, but we're likely to add more of it over time and build networks that allow us to input in these other areas should we find doing so promising. 

"I'd suggest being discerning with this list"

Definitely agree with this! 

One thing you can do is collect some demographic variables on non-respondents and see whether there is self-selection bias on those. You could then check whether the variables showing self-selection correlate with certain answers. Baobao Zhang and Noemi Dreksler did some of this work for the 2019 survey (found in D1, page 32, here). 

Really excited to see this! 

I noticed the survey featured the MIRI logo fairly prominently. Is there a way to tell whether that caused some self-selection bias? 

In the post, you say "Zhang et al ran a followup survey in 2019 (published in 2022), however they reworded or altered many questions, including the definitions of HLMI, so much of their data is not directly comparable to that of the 2016 or 2022 surveys, especially in light of large potential for framing effects observed." Just to make sure you haven't missed this: we had the 2016 respondents who also responded to the 2019 survey receive the exact same questions they were asked in 2016, including those regarding HLMI and milestones. (I was part of the Zhang et al team.)

Hi Lexley, Good question. Kirsten's suggestions are all great. To that, I'd add: 

  • Try to work as a research assistant to someone who you think is doing interesting work. More so than other roles, RA roles are quite often not advertised and are set up on a more ad hoc basis. Perhaps the best route in is to read someone's work and then reach out to them directly.
  • Another thing you could do is to try to take a stab independently on some important-seeming question. You could e.g. pick a research question hinted at in a paper/piece (some have a section specifically with suggestions for further work), mentioned in a research agenda (e.g. Dafoe 2018), or in lists of research ideas (GovAI collated one here and Michael Aird, I think, sporadically updates this collection of lists of EA-relevant research questions).
  • My impression is that you can join the AGI Safety Fundamentals as an undergrad.
  • You could also look into the various "ERIs": SERI, CHERI, CERI, and so on. 

As for GovAI, we have in the past engaged undergrads as research assistants, and I could imagine us taking on particularly promising undergrads for the GovAI Fellowship. However, overall, I expect our comparative advantage will be working with folks who either have significant context on AI governance or who have relevant experience from some other domain. It may also lie in producing writing that can help people navigate the field. 

Thanks Jeffrey! I hope we're a community where it doesn't matter so much whether you think we suck. If you think the EA community should engage more with nuclear security issues and should do so in different ways, I'm sure people would love to hear it. I would! Especially if you'd help answer questions like: How much can work on nuclear security reduce existential risk? What kind of nuclear security work is most important from an x-risk perspective?

I'd love to hear more about what your concerns and criticisms are. For example, I'd love to know: Is the Scoblic post the main thing that's informing your impression? Do you have views on this set of posts from Luisa Rodriguez about the severity of a US-Russia nuclear exchange? Is there effective altruist funding or activity in the nuclear security space that you think has been misguided?
