
Next week for The 80,000 Hours Podcast I'll be interviewing Nova DasSarma.

She works to improve computer and information security at Anthropic, a recently founded AI safety and research company.

She's also helping to find ways to provide more compute for AI alignment work in general.

Here's her (outdated) LinkedIn, her in-progress personal website, and an old EA Forum post from Claire Zabel and Luke Muehlhauser about the potential EA relevance of information security.

What should I ask her?

Answers

Which key companies would Nova most like to help strengthen their computer security?

Any hot takes on the recent NVIDIA hack? Was it preventable? Was it expected? Any AI Safety implications?

Why is Anthropic working on computer security? Which key computer security problems does she think are the highest priority to solve?

How worried is she about dual use of https://hofvarpnir.ai/ for capability development?

What AI Safety research lines are most bottlenecked on compute?

Is there any work on historical studies of leaks in the ML field? 

Would you like such a project to exist? What sources of information are there?

How much infosec risk is due to software/hardware issues, and how much to social engineering?

How important is compute for AI development relative to other inputs? How certain are you of this?

There have been estimates that there are around 100 AI researchers & engineers focused on AI alignment. This seems quite small given the scale of the problem. What are some of the bottlenecks for scaling up, and what is being done to alleviate this?

I've heard that there could be a trade-off between robust info security measures and hiring top talent for AI research. (I think the reasoning was something like: If state of the art AI research is a seller’s market and improving info security is inconvenient, some employees may be unwilling to comply with these measures and just take their talent elsewhere.) How accurate is this in your experience?

To what extent, if any, is centralisation or decentralisation useful for improving infosec?

The obvious way to reduce infosec risk is to beef up security. Another is to disincentivise actors from attacking in the first place. Are there any good ways of doing that (other than maybe criminal justice)?

What opportunities, if any, do individual donors (or people without suitable backgrounds for safety/governance careers) have to positively shape the development of AI?

Comments

Thank you for asking this question on the forum! 

It has been somewhat frustrating to follow you on Facebook and see all these great people you were about to interview without being able to contribute anything.
