
Conor Barnes

Software Engineer @ 80,000 Hours
139 karma · Joined Sep 2021

Bio

Substack shill @ parhelia.substack.com

Posts
7


Comments
22

Rereading your post, I'd also strongly recommend prioritizing finding ways to not spend all your free time on it. Not only is that level of fixating one of the worst things people can do to make themselves suffer, it also makes it very hard to think straight and figure things out!

One thing I've seen suggested is dedicating time each day as research time for your questions. This is a compromise that frees up the rest of your time for things that don't hurt your head. And hang out with friends who are good at distracting you!

I'm really sorry you're experiencing this. I think it's something more and more people are contending with, so you aren't alone, and I'm glad you wrote this. As somebody who's had bouts of existential dread myself, there are a few things I'd like to suggest:

  1. With AI, we fundamentally do not know what is to come. We're all making our best guesses -- as you can tell by finding 30 different diagnoses! This is probably a hint that we are deeply confused, and that we should not be too confident that we are doomed (or, to be fair, too confident that we are safe).
  2. For this reason, it can be useful to practice thinking through the models on your own. Start making your own guesses! I also often find the technical and philosophical details beyond me -- but that doesn't mean we can't think through the broad strokes. "How confident am I that instrumental convergence is real?" "Do I think evals for new models will become legally mandated?" "Do I think they will be effective at detecting deception?" At the least, this might help focus your content consumption instead of leaving it an amorphous blob of dread -- I call it that because the invasion of Ukraine similarly sent me reading as much as I could. Developing a model by focusing on specific, concrete questions (e.g. "What events would presage a nuclear strike?") helped me transform my anxiety from "Everything about this worries me" into something closer to "Events X and Y are probably bad, but event Z is probably good".
  3. I find it very empowering to work on the problems that worry me, even though my work is quite indirect. AI safety labs have content writing positions on occasion. I work on the 80,000 Hours job board and we list roles in AI safety. Though these are often research and engineering jobs, it's worth keeping an eye out. It's possible that proximity to the problem would accentuate your stress, to be fair, but I do think it trades against the feeling of helplessness!
  4. C. S. Lewis has a take on dealing with the dread of nuclear extinction that I'm very fond of and think is applicable: ‘How are we to live in an atomic age?’ I am tempted to reply: ‘Why, as you would have lived in the sixteenth century when the plague visited London almost every year...’ 


I hope this helps!

I hadn't seen the previous dashboard, but I think the new one is excellent!

Thanks for the Possible Worlds Tree shout-out!

I haven't had capacity to improve it (and won't for a long time), but I agree that a dashboard would be excellent. I think it could be quite valuable even if the number choice isn't perfect.

"Give a man money for a boat, he already knows how to fish" would play off of the original formation!

It's pretty common in values-driven organisations to ask for a degree of value-alignment. The other day I helped a friend with a resume for an organisation that asked applicants to care about its feminist mission.

In my opinion this is a reasonable thing to ask for and expect. Sharing (overarching) values improves decision-making, and requiring it can help prevent value drift in an org.

I'm really glad to hear it! Polishing is ongoing. Replied on GH too!

  1. The probability of any one story being "successful" is very low, and basically up to luck, though connections to people with the power to move stories (e.g. publishers, directors) would significantly help.
  2. Most x-risk scenarios are perfect material for compelling and entertaining stories. They tap into common tropes (the hubris of humans and scientists), are near-future disaster scenarios, and can have opposed hawk and dove characters. I imagine that a successful x-risk movie could have a narrative shaped like Jurassic Park or The Day After Tomorrow.
  3. My actionable advice is that EA writers and potential EA writers should write EA fiction alongside their other fiction and we should explore connections with publishers.

    As a side-note, I wrote an AI-escapes-the-box story the other week, and have since used Midjourney to illustrate it, as is fitting: https://twitter.com/Ideopunk/status/1553003805091979265. If anybody would like to read the first draft, message me! 