
Hello everyone,

I'm a software engineer at a FAANG tech company and have recently become interested in (and worried about) the existential risk posed by advanced artificial intelligence. I've looked at the roles in AI alignment research suggested by 80,000 Hours, but I don't want to relocate outside the Seattle area and am not interested in leaving industry to pursue a Ph.D.

I don't do any machine learning work in my current role, but I'm spending the next year self-studying. However, an opportunity has come up to join a new team researching adversarial machine learning attacks and developing security products to counteract them. It seems plausible that work making today's ML systems harder to hack could be directly useful, or at least help build a foundation for securing AGI systems in the future. Even if we succeed in aligning an AGI's values with our intentions, there would still be risks from an extremely powerful intelligence being hacked or tricked in one way or another.

What are your thoughts? Is this a promising way to have an impact on reducing AI x-risk, or is working in an AI research lab the only promising path?

Thanks!

3 Answers

I think this is a promising way to have an impact in reducing AI x-risk. The work itself would be useful, and it would also be great training to learn more about the area. It's unrealistic to expect everyone to work at a top AI lab, and much can be done outside these labs.

First, check out this post: https://www.lesswrong.com/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers


Are you talking about adversarial ML/adversarial examples? If so, that is certainly an area that's relevant to long-term AI safety; e.g. many proposals for aligning AGI include some adversarial training. In general, I'd say many areas of ML have some relevance to safety, and it mostly depends on how you pick your research project within an area.

In their recent career profile, 80,000 Hours suggests working as a SWE at an AI safety research institution. These institutions need good SWEs (and pay well), no ML required. I'd definitely consider that as well.

https://forum.effectivealtruism.org/posts/gbPthwLw3NovHAJdp/new-80-000-hours-career-review-software-engineering

Comments (1)
Mau

Thanks for the question! In case you haven't seen it yet, I think this post is relevant.

Also, speculatively and maybe unsurprisingly, I imagine you might be able to boost your impact significantly by being in contact with relevant labs, even if you don't relocate (e.g., making them aware of any neat new security tools you develop).
