
Naoya Okamoto

8 karma · Joined Aug 2022

Comments (3)

Zach writes in an email: “Much/most of my concern about China isn't China has worse values than US or even Chinese labs are less safe than Western labs but rather it's better for leading labs to be friendly with each other (mostly to better coordinate and avoid racing near the end), so (a) it's better for there to be fewer leading labs and (b) given that there will be Western leading labs it's better for all leading labs to be in the West, and ideally in the US […]”

I'm curious why Zach thinks it would be ideal for leading AI labs to be in the US. I tried to consider this through the lens of regulation. I haven't read extensively about how AI regulation compares across countries, but my impression is that the US federal government has been sitting on its hands with respect to regulating AI (although state and municipal governments provide a somewhat different picture), whereas the EU and the UK, whatever their differing intentions, have been moving much more swiftly than the US federal government.

My opinion would change if regulation doesn't play a large role in how successful an AI pause is, e.g. if industry players could voluntarily practice restraint. There are likely other factors I'm not considering as well.

I am an undergraduate majoring in applied math, trying to pivot towards alignment research, and I am finishing up a course on the mathematics of machine learning offered by UIUC. To help others in a similar position, I've been working on an article about who might be well-suited for the course, the concepts it covers, and ultimately how helpful it is for becoming a better alignment researcher compared to other avenues. I am still writing the post, and I am not very confident in even its core arguments, so I hope to get feedback from others. If the article contains misconceptions, feedback would help me fix them before I publish it. (A potential counterpoint is that if many people find fault with the post, the voting mechanism would organically reduce its prominence.)

Send me a message or leave a comment if you're interested. I'd appreciate anyone willing to provide feedback on this.

How do you think people should weigh their own intelligence when considering careers in AI safety? And how do you think about this from a field-building perspective?

I think there are a lot of people who are "smart" but not super-geniuses like the next von Neumann or Einstein who might be interested in pursuing AI safety work, yet are uncertain about how much impact they can really have. In particular, I can imagine cases where someone enjoys thinking about thought experiments, reading research on the AI Alignment Forum, writing up their own arguments, and so on, but might not produce valuable output for a year or more. (At the same time, I know of cases where someone took a year or more to reach that point and then became very productive.) What advice would you give this kind of person when thinking about career choice? I am also curious how you think about outreach strategies for getting people into AI safety work, for example the balance between spreading the word as widely as possible versus keeping outreach small-scale so that only highly capable people are likely to learn about careers in AI safety.