Vael Gates


Arkose is seeking an AI Safety Call Specialist who will be speaking with and supporting professors, PhD students, and industry professionals who are interested in AI safety research or engineering.

Salary: $75,000 - $95,000, depending on prior experience and location. This is currently a 9-month fixed-term contract.

Location: Remote (but we strongly prefer candidates who can work in roughly US time zones).

Deadline: 30 March 2024, with applications reviewed on a rolling basis (early applications encouraged).

Learn more on our website, and apply here if you’re interested!

FAQ

This is cool! Why haven't I heard of this?
Arkose has been in soft-launch for a while, and we've been focused on email outreach more than public comms. But we're increasingly public, and are in communication with other AI safety fieldbuilding organizations! 

How big is the team?

3 people: Zach Thomas and Audra Zook are doing an excellent job in operations, and I'm the founder.

How do you pronounce "Arkose"? Where did the name come from?

Any pronunciation is fine, and it's the name of a rock. We have an SEO goal for arkose.org to surpass the rock's Wikipedia page.

Where does your funding come from?
The Survival and Flourishing Fund.


Are you kind of like the 80,000 Hours 1-1 team?
Yes, in that we also do 1-1 support calls, and that there are many people for whom it'd make sense to do a call with both 80,000 Hours and Arkose! One key difference is that Arkose is aiming to specifically support mid-career people interested in getting more involved in technical AI safety. 

I'm not a mid-career person, but I'd still be interested in a call with you. Should I request a call?
Regretfully, no: we're currently focusing on professors, PhD students, and industry researchers or engineers who have AI / ML experience. This may expand in the future, but we'll probably still be pretty focused on mid-career folks.

Is Arkose's Resource page special in any way?
Generally, our resources are selected to be most helpful to professors, PhD students, and industry professionals, which gives the list a different focus than most other resource lists. We also think arkose.org/papers is pretty cool: it's a list of AI safety papers that you can filter by topic area. It's still in development and we'll be updating it over time (and if you'd like to help, please contact Vael!).

How can I help?
• If you know someone who might be a good fit for a call with Arkose, please pass along arkose.org to them! Or fill out our referral form.
• If you have machine learning expertise and would like to help us review our resources (for free or for pay), please contact vael@arkose.org.


Thanks everyone!

Neat! As someone who's not on the ground and doesn't know much about either initiative, I'm curious what Arcadia's relationship is to the London Initiative for Safe AI (LISA)? Mostly in the spirit of "if I know someone in AI safety in London, in what cases should I recommend them to each?"

This is sideways to the main point of the post, but I'd be interested in a ticket type that's just "Swapcard / unsupported virtual attendee": accepted people get access to Swapcard, which lets them schedule 1-1 online videoconferencing, and that's it.

I find a lot of the value of EAG is in 1-1s, and I'd hope that this would be an option where virtual attendees can get potentially lots of networking value for very little cost.

(Asking because I don't want to pay a lot of money to attend an EAG where I'd mostly be taking on a mentor role, but I would potentially be happy to do some online 1-1s with people during a Schelling time.)

Update: Just learned about EAGxVirtual, which seems very relevant!

"For those applying for grants, asking for less money might make you more likely to be funded" 

My guess is that it's still good to apply for the full amount, and you just may not be funded in full? You can also say what you would do with more or less money granted, so that the grantmakers can take that into account in their decision.

I didn't give a disagreement vote, but I do disagree on aisafety.training being the "single most useful link to give anyone who wants to join the effort of AI Safety research", just because there's a lot of different resources out there and I think "most useful" depends on the audience. I do think it's a useful link, but most useful is a hard bar to meet!

Not directly relevant to the OP, but another post covering research taste: An Opinionated Guide to ML Research. Also see Rohin Shah's advice about PhD programs (search "Q. What skills will I learn from a PhD?") for some commentary.

Two authors gave me permission to publish their transcripts non-anonymously! Thus:

- Interview with Michael L. Littman

- Interview with David Duvenaud
