Conor Barnes

Job Board Developer @ 80,000 Hours
167 karma · Joined Sep 2021

Bio

Substack shill @ parhelia.substack.com

Posts
7


Comments
25

Hi Remmelt,

Just following up on this — I agree with Benjamin’s message above, but I want to add that we actually did add links to the “working at an AI lab” article in the org descriptions for leading AI companies after we published that article last June.

It turns out the links were accidentally removed a few weeks ago while we were making some related changes in Airtable, and we didn't notice they were missing, so thanks for bringing this to our attention. We've added them back in and think they give good context for job board users, and we're certainly happy for more people to read our articles.

We also decided to remove the prompt engineer / librarian role from the job board, since we concluded it isn't above our current bar for inclusion. I don't expect everyone will always agree with the judgement calls we make, but we take them seriously, and we think it's important for people to think critically about their career choices.

I think this is a joke, but for those who have less-explicit feelings in this direction:

I strongly encourage you to not join a totalizing community. Totalizing communities are often quite harmful to members and being in one makes it hard to reason well. Insofar as an EA org is a hardcore totalizing community, it is doing something wrong.

I really appreciated reading this, thank you.

Rereading your post, I'd also strongly recommend finding ways not to spend all of your free time on this. Not only is that level of fixation one of the worst things people can do to make themselves suffer, it also makes it very hard to think straight and figure things out!

One thing I've seen suggested is dedicating a set block of time each day to researching your questions. This compromise frees up the rest of your day for things that don't hurt your head. And hang out with friends who are good at distracting you!

I'm really sorry you're experiencing this. I think it's something more and more people are contending with, so you aren't alone, and I'm glad you wrote this. As somebody who's had bouts of existential dread myself, I'd like to suggest a few things:

  1. With AI, we fundamentally do not know what is to come. We're all making our best guesses -- as you can tell by finding 30 different diagnoses! This is probably a hint that we are deeply confused, and that we should not be too confident that we are doomed (or, to be fair, too confident that we are safe).
  2. For this reason, it can be useful to practice thinking through the models on your own. Start making your own guesses! I also often find the technical and philosophical details beyond me, but that doesn't mean we can't think through the broad strokes. "How confident am I that instrumental convergence is real?" "Do I think evals for new models will become legally mandated?" "Do I think they will be effective at detecting deception?" At the least, this might help focus your content consumption instead of letting it become an amorphous blob of dread. I say this because the invasion of Ukraine similarly sent me reading as much as I could. Developing a model by focusing on specific, concrete questions (e.g. "What events would presage a nuclear strike?") helped me transform my anxiety from "everything about this worries me" into something closer to "events X and Y are probably bad, but event Z is probably good".
  3. I find it very empowering to work on the problems that worry me, even though my work is quite indirect. AI safety labs have content writing positions on occasion. I work on the 80,000 Hours job board and we list roles in AI safety. Though these are often research and engineering jobs, it's worth keeping an eye out. It's possible that proximity to the problem would accentuate your stress, to be fair, but I do think it trades against the feeling of helplessness!
  4. C. S. Lewis has a take on dealing with the dread of nuclear extinction that I'm very fond of and think is applicable: ‘How are we to live in an atomic age?’ I am tempted to reply: ‘Why, as you would have lived in the sixteenth century when the plague visited London almost every year...’ 

I hope this helps!

I hadn't seen the previous dashboard, but I think the new one is excellent!

Thanks for the Possible Worlds Tree shout-out!

I haven't had capacity to improve it (and won't for a long time), but I agree that a dashboard would be excellent. I think it could be quite valuable even if the number choice isn't perfect.

"Give a man money for a boat, he already knows how to fish" would play off of the original formation!

It's pretty common in values-driven organisations to ask for some amount of value-alignment. The other day I helped a friend with a resume for an organisation that asked applicants to care about its feminist mission.

In my opinion this is a reasonable thing to ask for and expect. Sharing (overarching) values improves decision-making, and requiring it can help prevent value drift in an org.
