All of Conor Barnes's Comments + Replies

Hi Remmelt,

Just following up on this — I agree with Benjamin’s message above, but I want to add that we actually did add links to the “working at an AI lab” article in the org descriptions for leading AI companies after we published that article last June.

It turns out that a few weeks ago the links to these were accidentally removed during some related changes in Airtable, and we didn't notice they were missing. Thanks for bringing this to our attention! We've added them back in and think they give good context for job board users, and we're certain... (read more)

0
Remmelt
2mo
Hi Conor,

Thank you. I'm glad to see that you already linked to clarifications before, and that you gracefully took the feedback and removed the prompt engineer role. I feel grateful for your openness here. It makes me feel less like I'm hitting a brick wall. We can have more of a conversation.

~ ~ ~

The rest is addressed to people on the team, and not to you in particular:

There are grounded reasons why 80k's approach of recommending work at AGI labs – with the hope of steering their trajectory – has supported AI corporations in scaling, while disabling efforts that may actually prevent AI-induced extinction. This concerns work on your listed #1 most pressing problem. It is a crucial consideration that can flip your perceived total impact from positive to negative.

I noticed that 80k staff responses so far started by stating disagreement (with my view) or agreement (with a colleague's view). This doesn't do the discussion justice. It's like responding to someone's explicit reasons for concern by saying they must be "less optimistic about alignment". That ends reasoned conversations rather than opening them up. Something I would like to see more of is individual 80k staff engaging with the reasoning.

I think this is a joke, but for those who have less-explicit feelings in this direction:

I strongly encourage you not to join a totalizing community. Totalizing communities are often quite harmful to members, and being in one makes it hard to reason well. Insofar as an EA org is a hardcore totalizing community, it is doing something wrong.

6
Peter Berggren
4mo
This was semi-serious, and maybe "totalizing" was the wrong word for what I was trying to say. The word I meant was closer to "intense" or "serious." CLARIFICATION: My broader sentiment was serious, but my phrasing was somewhat exaggerated to get my point across.

I really appreciated reading this, thank you.

Rereading your post, I'd also strongly recommend prioritizing finding ways not to spend all your free time on it. Not only do I think that level of fixating is one of the worst things people can do to make themselves suffer, it also makes it very hard to think straight and figure things out!

One thing I've seen suggested is dedicating a set amount of time each day to researching your questions. This compromise frees up the rest of your time for things that don't hurt your head. And hang out with friends who are good at distracting you!

I'm really sorry you're experiencing this. I think it's something more and more people are contending with, so you aren't alone, and I'm glad you wrote this. As somebody who's had bouts of existential dread myself, there are a few things I'd like to suggest:

  1. With AI, we fundamentally do not know what is to come. We're all making our best guesses -- as you can tell by finding 30 different diagnoses! This is probably a hint that we are deeply confused, and that we should not be too confident that we are doomed (or, to be fair, too confident that we are safe).
... (read more)

I hadn't seen the previous dashboard, but I think the new one is excellent!

8
Ben_West
9mo
Thanks! @Angelina Li deserves the credit :)

Thanks for the Possible Worlds Tree shout-out!

I haven't had the capacity to improve it (and won't for a long time), but I agree that a dashboard would be excellent. I think it could be quite valuable even if the number choice isn't perfect.

"Give a man money for a boat, he already knows how to fish" would play off of the original formation!

2
christian.r
1y
Thanks, Conor!

It's pretty common for values-driven organisations to ask for a degree of value alignment. The other day I helped a friend with a resume for an organisation that asked applicants to care about its feminist mission.

In my opinion this is a reasonable thing to ask for and expect. Sharing (overarching) values improves decision-making, and requiring it can help prevent value drift in an org.

4
Arepo
1y
What qualifies as 'a (sufficient) amount of value alignment'? I worked with many people who agreed with the premise of moving money to the worst off, and found the actual practices of many self-identifying EAs hard to fathom. Also, 'it's pretty common' strikes me as an insufficient argument - many practices are common and bad. More data seems needed.

I'm really glad to hear it! Polishing is ongoing. Replied on GH too!

2
Paolo Bova
2y
Thanks for pushing the fix for Windows. The share buttons work on my device now.
  1. The probability of any one story being "successful" is very low, and basically up to luck, though connections to people with the power to move stories (e.g. publishers, directors) would significantly help.
  2. Most x-risk scenarios are perfect material for compelling and entertaining stories. They tap into common tropes (the hubris of humans and scientists), are near-future disaster scenarios, and can have opposed hawk and dove characters. I imagine that a successful x-risk movie could have a narrative shaped like Jurassic Park or The Day After Tomorrow.
  3. My a
... (read more)

I love Possible Worlds Tree! It's aligned with the optimistic outlook, conveys the content better, and has a mythology pun. I couldn't be happier. Messaging re: bounty!

1
peterhartree
2y
Not sure about this one. Main concerns:

1. Too long.
2. Most people don't know the phrase "possible worlds" in the philosophy/logic sense. The more natural interpretation may be fine.

Overall my take is that "Possibility Tree" is better.

Thanks for all the feedback! I think the buffs to interactivity are all great ideas. They should mostly be implemented this week. 

4
Paolo Bova
2y
Great to see the Predict feature. I might have missed this when you first added it, but I've seen it now. It looks great and the tool is easy to use! I also like the additional changes you've made to make the site more polished. A friend and I had some issues when clicking the 'share' button, which I'll post as an issue on GitHub later.

A positive title would definitely help! I'll think on this.

Agreed. I think it needs a 'name' as a symbol, but the current one is a little fudged. My placeholder for a while was 'the tree of forking paths' as a Borges reference, but that was a bit too general...

This isn't exactly what I'm looking for (though I do think that concept needs a word).

The way I'm conceptualizing it right now is that there are three non-existential outcomes:

1. Catastrophe
2. Sustenance / Survival
3. Flourishing

If you look at Toby Ord's prediction, he includes a number for flourishing, which is great. There isn't a matching prediction in the Ragnarok series, so I've squeezed 2 and 3 together as a "non-catastrophe" category.
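
To make that squeeze concrete, here's a minimal sketch (hypothetical names, not the site's actual code):

```typescript
// Hypothetical sketch: the three non-existential outcomes, and the squeeze of
// survival + flourishing into a single "non-catastrophe" display category when
// no matching Ragnarok prediction exists for flourishing on its own.
type Outcome = "catastrophe" | "survival" | "flourishing";
type DisplayCategory = "catastrophe" | "non-catastrophe";

function toDisplayCategory(outcome: Outcome): DisplayCategory {
  return outcome === "catastrophe" ? "catastrophe" : "non-catastrophe";
}

console.log(toDisplayCategory("flourishing")); // "non-catastrophe"
```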

Thanks! I hadn't thought of user interviews, that's a great idea!

Thank you! And yeah, this is an artifact of the green nodes being filled in from the implicit inverse percentage of the Ragnarok prediction rather than having their own predictions. I could link to somewhere else, but it would need to be worth breaking the consistency of the links (all Metaculus Ragnarok links).
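
In other words, each green node is just the complement of the corresponding catastrophe prediction. A minimal sketch of that derivation (hypothetical function name and input, not the site's actual implementation):

```typescript
// Hypothetical sketch: derive a green "non-catastrophe" node value as the
// inverse of a Ragnarok-style catastrophe prediction, expressed in [0, 1].
function nonCatastropheProbability(catastropheProbability: number): number {
  if (catastropheProbability < 0 || catastropheProbability > 1) {
    throw new RangeError("catastropheProbability must be in [0, 1]");
  }
  return 1 - catastropheProbability;
}

// Example: a 5% catastrophe prediction implies a 95% green node.
console.log(nonCatastropheProbability(0.05)); // 0.95
```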

There's good discussion happening in the Discord if you want to hop in there!

Nice!! This is pretty similar to a project Nuño Sempere and I are working on, inspired by this proposal:

https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=vi7zALLALF39R6exF

I'm currently building the website for it while Nuño works on the data. I suspect these are compatible projects and there's an effective way to link up!

4
Nathan Young
2y
Also happy to give support on this if I can. 
4
Elliot_Olds
2y
Awesome! (Ideopunk and I are chatting on Discord and likely having a call tomorrow.)

Location: Halifax, Canada

Remote: Yes

Willing to relocate: No

Skills:
- Tech: JavaScript/TypeScript, CSS, React, React Native, Node, Go, Rust
- Writing: e.g. https://www.lesswrong.com/posts/7hFeMWC6Y5eaSixbD/100-tips-for-a-better-life. Also see https://conorbarnes.com/blog

Resume: Portfolio with resume link! https://conorbarnes.com/work

Email: conorbarnes93@gmail.com

Notes:
- Preferably full-time.

- Cause neutral.

- Availability: Anytime!

- Role: Web dev / software engineering

- EA Background: 
-- Following since 2015.
-- Giving What We Can pledge since 2019.
-- 1Day ... (read more)

I second the interest in a private submission / private forum option! I intend to submit my entry to a few places soon, but that won't be possible if it's "published" by submitting it here. If there isn't a private option, I probably won't submit here.

4
Aaron Gertler
3y
Here's our new private submission form!