Logan Riggs

83 karma · Joined Aug 2021

Comments (5)

I've heard that AI Safety Support is planning to expand its operations a lot. Are there other operations roles already available? 
 

I really like the window of opportunity idea.

I am currently talking to Vael thanks to a recommendation from someone else. If there are other people you know, or sources on failed attempts in the past, I'd appreciate those too!

I also agree that a set of really good arguments is great to have but not always sufficient.

Although convincing the top few researchers is important, convincing the broader tens of thousands also matters for movement building. The counterargument that "we can't handle that many people switching careers" is answered by scaling our programs.

Another approach is just trusting them to figure it out themselves (I want to compare this with COVID research, but I'm not sure how well that research went, or what incentives made it better or worse); this isn't my argument, though, but someone else's intuition. I think an additional structure of "we can give quick feedback on your alignment proposal" would help with this.

Thanks for the in-depth response!

The part of this project I'm personally most interested in is a document with the best arguments for alignment and guidance on how to have these conversations effectively (i.e. finding cruxes).

You made a claim about logarithmic returns to capabilities progress, but my model is that 80% of progress comes from a few companies and top universities. Fewer than 1,000 people are pushing general capabilities forward, so convincing these people to pivot (or the people who set their research direction) is high impact.

You linked the debate between AI researchers, and I remember being extremely disappointed in the way it was handled (e.g. why is Stuart using metaphors? Though I did appreciate Yoshua's responses). The ideal product I'm thinking of says obvious things like "don't use metaphors as arguments", "don't have a 10-person debate", and "be kind", along with the actual arguments to present and the most common counterarguments.

This could have negative effects if done wrong, so the next step is to practice on lower-stakes people while building the argument doc. Then higher-stakes people can be approached.

Additionally, a list of why certain "obvious solutions to alignment" fail is useful for pointing out dead ends in research. For example, any project that relies on the orthogonality thesis being wrong is doomed to fail, imo.

This is a tangent: the links for scaling alignment are very inadequate (though I'm very glad they exist!). MLAB accepted what, 30 of 500 applicants? AISC accepted 40 of 200 (and I talked to one rejected applicant who was very high quality!). Richard's course is scaling much faster, though, and I'm excited about that. I do believe none of the courses teach "how to do great research" unless you do a mentorship, but I think we can work around that.

You can still have a conference for AI safety specifically and present at both conferences, with a caveat. From NeurIPS:
> Can I submit work that is in submission to, has been accepted to, or has been published in a non-archival venue (e.g. arXiv or a workshop without any official proceedings)? Answer: Yes, as long as this does not violate the other venue's policy on dual submissions (if it has one).

The AI safety conference couldn't have official proceedings, but it would still be great for networking and disseminating ideas, which is definitely worth it.

My brother has written several books and currently coaches people on how to publish and market them on Amazon. He would be open to being paid for advice in this area (just DM me).

I think dissemination and prestige are the best arguments so far.