Next week for The 80,000 Hours Podcast I'll be interviewing Nova DasSarma.

She works to improve computer and information security at Anthropic, a recently founded AI safety and research company.

She's also helping to find ways to provide more compute for AI alignment work in general.

Here are her (outdated) LinkedIn, her in-progress personal website, and an old EA Forum post from Claire Zabel and Luke Muehlhauser about the potential EA relevance of information security.

What should I ask her?

13 Answers

Jaime Sevilla · Mar 25, 2022 · 90

Which key companies would Nova most like to help strengthen their computer security?

Jaime Sevilla · Mar 25, 2022 · 50

Any hot takes on the recent NVIDIA hack? Was it preventable? Was it expected? Any AI Safety implications?

Jaime Sevilla · Mar 25, 2022 · 50

Why is Anthropic working on computer security? What are the key computer security problems she thinks it is most important to solve?

Jaime Sevilla · Mar 25, 2022 · 50

How worried is she about dual use of https://hofvarpnir.ai/ for capability development?

Jaime Sevilla · Mar 25, 2022 · 50

What AI Safety research lines are most bottlenecked on compute?

Jaime Sevilla · Mar 26, 2022 · 40

Is there any work on historical studies of leaks in the ML field? 

Would you like such a project to exist? What sources of information are there?

Erich_Grunewald · Mar 25, 2022 · 40

How much infosec risk is due to software/hardware issues, and how much to social engineering?

JulianHazell · Mar 25, 2022 · 30

How important is compute for AI development relative to other inputs? How certain are you of this?

JulianHazell · Mar 25, 2022 · 30

There have been estimates that there are around 100 AI researchers & engineers focused on AI alignment. This seems quite small given the scale of the problem. What are some of the bottlenecks to scaling up, and what is being done to alleviate them?

Jide · Mar 26, 2022 · 10

I've heard that there could be a trade-off between robust info security measures and hiring top talent for AI research. (I think the reasoning was something like: If state of the art AI research is a seller’s market and improving info security is inconvenient, some employees may be unwilling to comply with these measures and just take their talent elsewhere.) How accurate is this in your experience?

Erich_Grunewald · Mar 25, 2022 · 10

To what extent, if any, is centralisation or decentralisation useful in improving infosec?

Erich_Grunewald · Mar 25, 2022 · 10

The obvious way to reduce infosec risk is to beef up security. Another is to disincentivise actors from attacking in the first place. Are there any good ways of doing that (other than maybe criminal justice)?

JulianHazell · Mar 25, 2022 · 10

What opportunities, if any, do individual donors (or people who might not have suitable backgrounds for safety/governance careers) have to positively shape the development of AI?

Comments

Thank you for asking this question on the forum! 

It has been somewhat frustrating to follow you on Facebook and see all these great people you were about to interview without being able to contribute anything.
