
jia

Ops @ Anthropic
175 karma · Joined May 2018 · Working (0-5 years)

Bio

Ops generalist at Anthropic.

Comments (11)

Answer by jia · May 27, 2022

Anthropic is hiring for 10+ roles, including several operations roles: biz ops, executive assistant, ops generalist, and recruiting coordinator.

+1 to Michael's comment. Thanks for doing what you're doing, David (:

Hi Ben,

I'll be starting as an operations and research assistant at the Centre on Long-Term Risk in a few months, where I'll probably help out with AI governance topics related to multi-agent RL.

But I'm open to doing courses that are generally useful and engaging!

jia · 3y

Do people have online courses to recommend?

I have ~2 months off and am considering intro courses in stats and probability, game theory, or data science. I'm open to other recommendations, of course!

Answer by jia · Nov 26, 2020

In his recent interview with FLI, Andrew Critch talks about the overlaps between AI safety and current issues, and the difference between AI safety and existential safety/risk. Many (but not all) AI safety issues are relevant to current systems, so people who care about x-risks could focus on the safety issues that are novel to advanced systems.

If you take a random excerpt of any page from [Aligning Superintelligence with Human Interests] and pretend that it’s about the Netflix challenge or building really good personal assistants or domestic robots, you can succeed. That’s not a critique. That’s just a good property of integrating with research trends. But it’s not about the concept of existential risk. Same thing with Concrete Problems in AI Safety.

In fact, it’s a fun exercise to do. Take that paper. Pretend you think existential risk is ridiculous and read Concrete Problems in AI Safety. It reads perfectly as you don’t need to think about that crazy stuff, let’s talk about tipping over vases or whatever. And that’s a sign that it’s an approach to safety that it’s going to be agreeable to people, whether they care about x-risk or not...

...So here’s a problem we have. And when I say we, I mean people who care about AI existential safety. Around 2015 and 2016, we had this coming out of AI safety as a concept. Thanks to Amodei and the Robust and Beneficial AI Agenda from Stuart Russell, talking about safety became normal. Which was hard to accomplish before 2018. That was a huge accomplishment.

And so what we had happen is people who cared about extinction risk from artificial intelligence would use AI safety as a euphemism for preventing human extinction risk. Now, I’m not sure that was a mistake, because as I said, prior to 2018, it was hard to talk about negative outcomes at all. But it’s at this time in 2020 a real problem that you have people … When they’re thinking existential safety, they’re saying safety, they’re saying AI safety. And that leads to sentences like, “Well, self driving car navigation is not really AI safety.” I’ve heard that uttered many times by different people.

Lucas Perry: And that’s really confusing.

Andrew Critch: Right. And it’s like, “Well, what is AI safety, exactly, if cars driven by AI, not crashing, doesn’t count as AI safety?” I think that as described, the concept of safety usually means minimizing acute risks. Acute meaning in space and time. Like there’s a thing that happens in a place that causes a bad thing. And you’re trying to stop that. And the Concrete Problems in AI Safety agenda really nailed that concept.

A few other resources about bridging the long-term and near-term divide:

Thanks for writing this post! It's cool to see people thinking about less direct, but potentially more neglected and tractable paths to affecting influential governments.

Do you have thoughts on the difference between intentional and unintentional diffusion?

  • Intentional. Country A actively tries to export its policies. Perhaps it is trying to establish itself as a leader on a specific issue, or shape global standards in a way that benefits local companies. The literature on "niche diplomacy" might be somewhat relevant here.
  • Unintentional. A is "doing its own thing", and isn't going out of its way to export its ideas. Possible implications:
    • The path to impact from A to influential country B is fuzzier.
    • It might be harder for someone working in A to tell who is learning from their policies. So we'd need people working in B to tell people in A that they're having an impact.

Also, getting people to pursue this path might be challenging because of things like status effects and people preferring to live in EA hubs.

Thanks for doing this!

People should stop using “operations” to mean “not-research”. I’m guilty of this myself, but it clumps together many different skills and traits, probably leading to people undervaluing them.

Could you say more about the different skills and traits relevant to research project management?

yup! I tried to make this point in the section on trajectory: "Hypersonic missiles fly lower than ballistic missiles, which delays detection time by ground-based radar." I'm trying to include the following photo to illustrate the point, but I can't seem to figure out how ):

https://imgur.com/a/Ai7Ny7q
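A rough back-of-the-envelope sketch of why lower flight paths delay detection by ground-based radar: detection range is limited by the line-of-sight horizon to a target at a given altitude. The spherical-Earth model and the example altitudes below are my own illustrative assumptions, not figures from the post.

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius

def radar_horizon_km(altitude_km: float) -> float:
    """Line-of-sight distance from a ground-level radar to a target at the
    given altitude over a smooth spherical Earth (ignoring refraction)."""
    return math.sqrt(2 * R_EARTH_KM * altitude_km + altitude_km ** 2)

# Illustrative altitudes (assumptions): a ballistic warhead near a ~1,000 km
# apogee vs. a hypersonic glide vehicle cruising around ~30 km.
for label, alt_km in [("ballistic, ~1,000 km apogee", 1000.0),
                      ("hypersonic glide, ~30 km", 30.0)]:
    print(f"{label}: line of sight out to ~{radar_horizon_km(alt_km):,.0f} km")
```

Under these assumptions, the ballistic trajectory is above a ground radar's horizon at roughly 3,700 km, while the low-altitude glider only becomes visible at roughly 600 km, which is the delayed-detection effect the quoted sentence describes.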
