All of Conor Barnes 🔶's Comments + Replies

The claim that "biosecurity often selects for people who already have a wealth of experience in their respective fields" doesn't seem that obvious to me. Looking at the number of biosecurity roles we've posted on the 80,000 Hours job board in 2025, broken down by experience tag, I see:

  • Entry-level: 32
  • Entry-level + Junior (1-4 yrs): 12
  • Junior (1-4 yrs): 69
  • Junior (1-4 yrs) + Mid (5-9 yrs): 31
  • Mid (5-9 yrs): 73
  • Mid (5-9 yrs) + Senior (10+ yrs): 20
  • Senior (10+ yrs): 40
  • Multiple experience levels: 10

I agree with you that there aren't a ton of opportunities... (read more)

6
rjain 🔹
Hi Conor, thanks for your comment! Appreciate all the work you do on the 80k job board.

I will caveat by saying 1) my perspective is based on a job search in the US/Western biosecurity landscape, not a global one, and 2) I drew on my own personal experiences and those of my friends in attempting to find a credible full-time position (e.g. summer internship) in biosecurity.

During this job search, many of the entry-level and junior opportunities I scouted on biosecurity job boards tended to (and still do) fall into one of two categories:

  1. Part-time projects / 'test your fit' / Expression of Interest (EOI) forms; or
  2. Full-time roles that are tagged as open to Junior (1-4 yrs) professionals, but are also open to senior-level candidates, and/or state a PhD or master's degree requirement.

Obviously, there are some exceptions, and I would love to see someone do an analysis of entry-level biosecurity job postings over time. However, I would point out that it is still difficult to find jobs when moving from category #1 to #2, and if you go down the list of entry-level positions today, it would be hard to find more than 5-10 positions that a young person without a PhD could be competitive for. I also think EA jobs are often competitive and agree with Peter's comment that this higher competitiveness lends itself to selecting more senior, experienced candidates (even when a junior position is available). I would love to see more orgs (both established and new) run more summer internship/RA/entry-level positions and advertise them here.

Seems important to check whether the people hired actually fit those experience requirements or have more experience. If the roles are very competitive, the experience level of actual hires could be much higher than the listings suggest.

3
Bryce Robertson
Thank you Conor :)

There are some great replies here from career advisors -- I'm not one, but I want to mention that I got into software engineering without a university degree. I'm hesitant to recommend software engineering as the safe and well-paying career it once was, but I think learning how to code is still a great way to quickly develop useful skills without requiring a four-year degree!

@Sudhanshu Kasewa has enlisted me for this one!

I think earning to give is a really strong option and indeed the best option for many people.

Lack of supply is definitely an issue, though it can be helped by looking for impactful opportunities outside of "EA orgs" per se -- I don't know if this is your scenario, but limiting the search to EA orgs is often the problem. Knowing nothing about a person's situation and location, I'd prompt:

  • Can you look for roles in government in order to work on impactful policy?
  • Are there start-ups or other learning opportunities you could apply to in or
... (read more)

A clarification: We would not post roles if we thought they were net harmful and were hoping that somebody would counterfactually do less harm. I think that would be too morally fraught to propose to a stranger.

Relatedly, we would not post a job where we thought that to have a positive impact, you'd have to do the job badly.

We might post roles if we thought the average entrant would make the world worse, but a job board user would make the world better (due to the EA context our applicants typically have!). No cases of this come to mind immediately though.... (read more)

1
Geoffrey Miller
Conor -- yes, I understand that you're making judgment calls about what's likely to be net harmful versus helpful. But your judgment calls seem to assume -- implicitly or explicitly -- that ASI alignment and control are possible, eventually, at least in principle.

Why do you assume that it's possible, at all, to achieve reliable long-term alignment of ASI agents?

I see no serious reason to think that it is possible. And I've never seen a single serious thinker make a principled argument that long-term ASI alignment with human values is, in fact, possible. And if ASI alignment isn't possible, then all AI 'safety research' at AI companies aiming to build ASI is, in fact, just safety-washing. And it all increases X risk by giving a false sense of security, and encouraging capabilities development.

So, IMHO, 80k Hours should re-assess what it's doing by posting these ads for jobs inside AI companies -- which are arguably the most dangerous organizations in human history.

Hi Geoffrey,

I'm curious to know which roles we've posted that you consider to be capabilities development -- our policy is not to post capabilities roles at the frontier companies. We do aim to post jobs that are meaningfully able to contribute to safety and aren't just safety-washing (and our views are discussed much more in depth here). Of course, we're not infallible, so if people see particular jobs they think are safety in name only, we always appreciate that being raised.

I strongly agree with @Bella's comment. I'd like to add:

  • I encourage job-seekers not to think of EA jobs as the one way to have impact in their careers. Almost all impactful roles are not at EA orgs. On the 80,000 Hours job board we try to find the most promising ones, but we won't catch everything!
  • Our co-worker Laura González Salmerón has a great talk on this topic.
  • Even if the movement is not talent-constrained, the problems we're trying to solve are talent-constrained. The world still needs way more people working on catastrophic risks, animal welfare, and
... (read more)
3
SiobhanBall
Hi Conor! Thanks for commenting. We met briefly at EAG Global.

- You won't catch everything, but your job board is an obvious go-to for people who believe in EA and want to plan their careers accordingly, as they've been encouraged to do.
- I find myself disagreeing with several of the points in Laura's post, in particular 'It's pure luck that an organisation asked someone who happens to know and remember me'. That's not luck (well, a bit). EA is still a small enough community that positive reputation doesn't happen by accident, so that's merit -- and more weight should be put on it!
- Earning to give is also a good 'catch-all' for people who haven't got a position within an EA org.

If your strategy is to just apply to open hiring rounds, such as through job ads that are listed on the 80,000 Hours job boards, you are cutting your chances of landing a role by ~half. It’s hard to know the exact figure, but I wouldn’t be surprised if as many as 30-50% of paid roles in the movement aren’t being recruited through traditional open hiring rounds ...


This is my impression as well, though heavily skewed by experience level. I'd estimate that >80% of senior "hires" in the movement occur without a public posting, and something like 20% of jun... (read more)

I really appreciated reading this. It captured a lot of how I feel when I think about having taken the pledge. It's astounding. I think it's worth celebrating, and assuming the numbers add up, I think it's worth grappling with the immensity of having saved a life.

Hey Manuel,

I would not describe the job board as currently advertising all cause areas equally, but yes, the bar for jobs not related to AI safety will be higher now. As I mention in my other comment, the job board is interpreting this changed strategic focus broadly to include biosecurity, nuclear security, and even meta-EA work -- we think all of these have important roles to play in a world with a short timeline to AGI.

In terms of where we’ll be raising the bar, this will mostly affect global health, animal welfare, and climate postings — specifically ... (read more)

2
Manuel Allgaier
Thanks for all the detail and suggestions!

The trifecta might work if enough people are also aware of the other job boards (maybe worth linking them somewhere on your job board?). It might also be relatively easy to scrape jobs from all three boards and combine them into one cause-neutral job board (see the sketch below for roughly what that could look like), but that seems only worth it if enough people actually check that job board, e.g. if it were linked on effectivealtruism.org or maybe School for Moral Ambition or other high-traffic websites.

I won't look into this further, but if anyone else reading this wants to take this on, I'm happy to support if I can -- just email me.
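A minimal sketch of what such a cross-board aggregator could look like, assuming each board exposed a simple JSON feed. The endpoint URLs and field names below are hypothetical placeholders, not real APIs -- actual boards would likely need scraping or API access -- so treat this as illustrative only:

```typescript
// Hypothetical cross-board job aggregator sketch.
// Assumes each board exposes a JSON array of { title, org, url } -- real boards may differ.

type Job = {
  title: string;
  org: string;
  url: string;
  source: string;
};

// Placeholder feed URLs -- not real endpoints.
const FEEDS: Record<string, string> = {
  "80000hours": "https://example.org/80k/jobs.json",
  "probablygood": "https://example.org/pg/jobs.json",
  "animaladvocacy": "https://example.org/aac/jobs.json",
};

async function fetchFeed(source: string, url: string): Promise<Job[]> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Failed to fetch ${source}: ${res.status}`);
  const data = (await res.json()) as Array<{ title: string; org: string; url: string }>;
  // Tag each posting with the board it came from.
  return data.map((j) => ({ ...j, source }));
}

async function aggregate(): Promise<Job[]> {
  const results = await Promise.all(
    Object.entries(FEEDS).map(([source, url]) => fetchFeed(source, url)),
  );
  // De-duplicate on a normalized title + org key so cross-posted roles appear once.
  const seen = new Map<string, Job>();
  for (const job of results.flat()) {
    const key = `${job.title.toLowerCase().trim()}|${job.org.toLowerCase().trim()}`;
    if (!seen.has(key)) seen.set(key, job);
  }
  return [...seen.values()];
}

aggregate().then((jobs) => console.log(`${jobs.length} unique roles`, jobs.slice(0, 5)));
```

The de-duplication key here is deliberately crude; a real aggregator would probably want fuzzier matching and to preserve experience tags and cause-area labels from each board.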

I want to extend my sympathies to friends and organisations who feel left behind by 80k's pivot in strategy. I've talked to lots of people about this change in order to figure out the best way for the job board to fit into this. In one of these talks, a friend put it in a way that captures my own feelings: I hate that this is the timeline we're in.

I'm very glad 80,000 Hours is making this change. I'm not glad that we've entered the world where this change feels necessary.

To elaborate on the job board changes mentioned in the post:

  • We will continue listing
... (read more)
2
NickLaing
I think the post would have been far better if this kind of sentiment had been front and center. Obviously it's still only a softener, but it shows understanding and empathy the CEO has missed. "I want to extend my sympathies to friends and organisations who feel left behind by 80k's pivot in strategy. I've talked to lots of people about this change in order to figure out the best way for the job board to fit into this. In one of these talks, a friend put it in a way that captures my own feelings: I hate that this is the timeline we're in."

  1. Become conversational in Spanish so I can talk to my fiancée's family easily.
  2. Work out ten times per month (3x/week with leeway)
  3. Submit 12 short stories about transformative AI to publishers this year.

    More details here. Ongoing mission: get a literary agent for my novel!

One example I can think of with regards to people "graduating" from philosophies is the idea that people can graduate out of arguably "adolescent" political philosophies like libertarianism and socialism. Often this looks like people realizing society is messy and that simple political philosophies don't do a good job of capturing and addressing this.

However, I think EA as a philosophy is more robust than the above: There are opportunities to address the immense suffering in the world and to address existential risk, and some of these opportunities are much mo... (read more)

7
AltForHonesty
Despite the people in the EA/rat-sphere dismissing socialism out of hand as an "adolescent" political philosophy, actual political philosophers who study this for a living are mostly socialists (socialism 59%, capitalism 27%, other 14%)
2
Ruben Dieleman 🔸
Thank you for this perspective. I very much agree with your last paragraph!

Hi there, I'd like to share some updates from the last month.

Text during last update (July 5)

  • OpenAI is a leading AI research and product company, with teams working on alignment, policy, and security. We recommend specific positions at OpenAI that we think may be high impact. We do not necessarily recommend working at other jobs at OpenAI. You can read more about considerations around working at a leading AI company in our career review on the topic.

Text as of today:

  • OpenAI is a frontier AI research and product company, with teams working on alignment, poli
... (read more)
9
Neel Nanda
This is a great update, thanks! Re the "concerns around HR practices" link, I don't think that Washington Post article is the best thing to link. That focuses on clauses stopping people talking to regulators which, while very bad, seems less "holy shit WTF" to me than the threatening people's previously paid compensation over non-disparagements thing. I think the best article on that is Kelsey Piper's (though OpenAI have seemingly mostly released people from those and corresponding threats, and Kelsey's article doesn't link to follow-ups discussing those). My metric here is roughly "if a friend of mine wanted to join OpenAI, what would I warn them about" rather than "which is objectively worse for the world", and I think 'they are willing to threaten millions of dollars of stock you have already been paid' is much more important to warn about.

Yeah, this does seem like an improvement. I appreciate you thinking about it and making some updates.

I interpreted the title to mean "Is it a good idea to take an unpaid UN internship?", and it took a bit to realize that isn't the point of the post. You might want to change the title to make clear which part of the unpaid UN internship you're questioning!

1
Cipolla
Right. You saw it from the point of view of an individual, not the concept. Thanks.

Update: We've changed the language in our top-level disclaimers: example. Thanks again for flagging! We're now thinking about how to best minimize the possibility of implying endorsement.

1
Linda Linsefors
I can't find the disclaimer. Not saying it isn't there. But it should be obvious from just skimming the page, since that is what most people will do. 
9
Raemon
Following up my other comment: To try to be a bit more helpful rather than just complaining and arguing: when I model your current worldview, and try to imagine a disclaimer that helps a bit more with my concerns but seems like it might work for you given your current views, here's a stab. Changes bolded. (it's not my main crux, but "frontier" felt both like a more up-to-date term for what OpenAI does, and also feels more specifically like it's making a claim about the product than generally awarding status to the company the way "leading" does)
2
Raemon
Thanks. This still seems pretty insufficient to me, but, it's at least an improvement and I appreciate you making some changes here.

(Copied from reply to Raemon)

Yeah, I think this needs updating to something more concrete. We put it up while ‘everything was happening’ but I’ve neglected to change it, which is my mistake; I’ll probably prioritize fixing it over the next few days.

Re: whether OpenAI could create a role that doesn’t feel sufficiently safety-focused: there have been, and continue to be, OpenAI safety-ish roles that we don’t list because we lack confidence they’re safety-focused.

For the alignment role in question, I think the team description given at the top of the post gives important context for the role’s responsibilities:

OpenAI’s Alignment Science research teams are working on technical approaches to ensure that AI systems reliably follow human intent even as their capabilities scale beyond human ability to direct... (read more)

Thanks.

Fwiw, while writing the above, I did also think "hmm, I should also have some cruxes for what would update me towards 'these jobs are more real than I currently think.'" I'm mulling that over and will write up some thoughts soon.

It sounds like you basically trust their statements about their roles. I appreciate you stating your position clearly, but, I do think this position doesn't make sense:

  • we already have evidence of them failing to uphold commitments they've made in clear cut ways. (i.e. I'd count their superalignment compute promises as basica
... (read more)

The arguments you give all sound like reasons OpenAI safety positions could be beneficial. But I find them completely swamped by all the evidence that they won't be, especially given how much evidence OpenAI has hidden via NDAs.

But let's assume we're in a world where certain people could do meaningful safety work at OpenAI. What are the chances those people need 80k to tell them about it? OpenAI is the biggest, most publicized AI company in the world; if Alice only finds out about OpenAI jobs via 80k, that's prima facie evidence she won't make a contributio... (read more)

Hi, I run the 80,000 Hours job board. Thanks for writing this out!

I agree that OpenAI has demonstrated a significant level of manipulativeness and have lost confidence in them prioritizing existential safety work. However, we don’t conceptualize the board as endorsing organisations. The point of the board is to give job-seekers access to opportunities where they can contribute to solving our top problems or build career capital to do so (as we write in our FAQ). Sometimes these roles are at organisations whose mission I disagree with, because th... (read more)

9
Remmelt
This misses aspects of what used to be 80k's position:

❝ In fact, we think it can be the best career step for some of our readers to work in labs, even in non-safety roles. That’s the core reason why we list these roles on our job board.
– Benjamin Hilton, February 2024

❝ Top AI labs are high-performing, rapidly growing organisations. In general, one of the best ways to gain career capital is to go and work with any high-performing team — you can just learn a huge amount about getting stuff done. They also have excellent reputations more widely. So you get the credential of saying you’ve worked in a leading lab, and you’ll also gain lots of dynamic, impressive connections.
– Benjamin Hilton, June 2023 (still on website)

80k was listing some non-safety related jobs:
– From my email in May 2023:
– From my comment in February 2024:
6
Raemon
I have slightly complex thoughts about the "is 80k endorsing OpenAI?" question. I'm generally on the side of "let people make individual statements without treating it as a blanket endorsement."

In practice, I think the job postings will be read as an endorsement by many (most?) people. But I think the overall policy of "social-pressure people to stop making statements that could be read as endorsements" is net harmful. I think you should at least be acknowledging the implication-of-endorsement as a cost you are paying. I'm a bit confused about how to think about it here, because I do think listing people on the job site, with the sorts of phrasing you use, feels more like some sort of standard corporate political move than a purely epistemic move.

I do want to distinguish the question of "how does this job-ad funnel social status around?" from "does this job-ad communicate clearly?". I think it's still bad to force people to only speak words that can't be inaccurately read into, but I think this is an important enough area to put extra effort in. An accurate job posting, IMO, would say "OpenAI-in-particular has demonstrated that they do not follow through on safety promises, and we've seen people leave due to not feeling effectual." I think you maybe both disagree with that object-level fact (if so, I think you are wrong, and this is important), as well as, well, that'd be a hell of a weird job ad. Part of why I am arguing here is I think it looks, from the outside, like 80k is playing a slightly confused mix of relating to orgs politically and making epistemic recommendations.

I kind of expect at this point that you'll leave the job ad up, and maybe change the disclaimer slightly in a way that leaves some sort of plausibly-deniable veneer.

Insofar as you are recommending the jobs but not endorsing the organization, I think it would be good to be fairly explicit about this in the job listing. The current short description of OpenAI seems pretty positive to me:

OpenAI is a leading AI research and product company, with teams working on alignment, policy, and security. You can read more about considerations around working at a leading AI company in our career review on the topic. They are also currently the subject of news stories relating to their safety work. 

I think this should say someth... (read more)

I think that given the 80k brand (which is about helping people to have a positive impact with their career), it's very hard for you to have a jobs board which isn't kinda taken by many readers as endorsement of the orgs. Disclaimers help a bit, but it's hard for them to address the core issue — because for many of the orgs you list, you basically do endorse the org (AFAICT).

I also think it's a pretty different experience for employees to turn up somewhere and think they can do good by engaging in a good faith way to help the org do whatever it's doing, an... (read more)

8
Rebecca
Echoing Raemon, it’s still a value judgement about an organisation to say that 80k believes that a given role is one where, as you say, “they can contribute to solving our top problems or build career capital to do so”. You are saying that you have sufficient confidence that the organisation is run well enough that someone with little context of internal politics and pressures that can’t be communicated via a job board can come in and do that job impactfully. But such a person would be very surprised to learn that previous people in their role or similar ones at the company have not been able to do their job due to internal politics, lies, obfuscation etc, and that they may not be able to do even the basics of their job (see the broken promise of dedicated compute supply). It’s difficult to even build career capital as a technical researcher when you’re not given the resources to do your job and instead find yourself having to upskill in alliance building and interpersonal psychology.

These still seem like potentially very strong roles with the opportunity to do very important work. We think it’s still good for the world if talented people work in roles like this! 

I think given that these jobs involved being pressured via extensive legal blackmail into signing secret non-disparagement agreements that forced people to never criticize OpenAI, at great psychological stress and at substantial cost to many outsiders who were trying to assess OpenAI, I don't agree with this assessment. 

Safety people have been substantially harmed by working at OpenAI, and safety work at OpenAI can have substantial negative externalities.

Hey Conor!

Regarding

we don’t conceptualize the board as endorsing organisations.

And

 contribute to solving our top problems or build career capital to do so

It seems like EAs expect the 80k job board to suggest high impact roles, and this has been a misunderstanding for a long time (consider looking at that post if you haven't). The disclaimers were always there, but EAs (including myself) still regularly looked at the 80k job board as a concrete path to impact.

I don't have time for a long comment, just wanted to say I think this matters.


Nod, thanks for the reply.

I won't argue more for removing infosec roles at the moment. As noted in the post, I think this is at least a reasonable position to hold. I (weakly) disagree, but for reasons that don't seem worth getting into here.

The things I'd argue here:

  • Safetywashing is actually pretty bad, for the world's epistemics and for EA and AI safety's collective epistemics. I think it also warps the epistemics of the people taking the job, so while they might be getting some career experience... they're also likely getting a distorted view of what wh
... (read more)

I think this is a good policy and broadly agree with your position.

It's a bit awkward to mention, but since you've said that you've delisted other roles at OpenAI and that OpenAI has acted badly before, I think you should consider explicitly saying on the OpenAI job board cards that you don't necessarily endorse other roles at OpenAI and suspect that some of them may be harmful.

I'm a little worried about people seeing OpenAI listed on the board and inferring that the 80k recommendation somewhat transfers to other roles at OpenAI (which, imo is a reasonable heuristic for most companies listed on the board - but fails in this specific case).

I find the Leeroy Jenkins scenario quite plausible, though in this world it's still important to build the capacity to respond well to public support.

Hi Remmelt,

Just following up on this — I agree with Benjamin’s message above, but I want to add that we actually did add links to the “working at an AI lab” article in the org descriptions for leading AI companies after we published that article last June.

It turns out that a few weeks ago the links to these got accidentally removed when making some related changes in Airtable, and we didn’t notice these were missing — thanks for bringing this to our attention. We’ve added these back in and think they give good context for job board users, and we’re certain... (read more)

0
Remmelt
Hi Conor,

Thank you. I’m glad to see that you already linked to clarifications before, and that you gracefully took the feedback and removed the prompt engineer role. I feel grateful for your openness here. It makes me feel less like I’m hitting a brick wall. We can have more of a conversation.

~ ~ ~

The rest is addressed to people on the team, and not to you in particular:

There are grounded reasons why 80k’s approaches to recommending work at AGI labs – with the hope of steering their trajectory – have supported AI corporations to scale, while disabling efforts that may actually prevent AI-induced extinction. This concerns work on your listed #1 most pressing problem. It is a crucial consideration that can flip your perceived total impact from positive to negative.

I noticed that 80k staff responses so far started by stating disagreement (with my view), or agreement (with a colleague’s view). This doesn’t do discussion of it justice. It’s like responding to someone’s explicit reasons for concern that they must be “less optimistic about alignment”. This ends reasoned conversations, rather than opens them up. Something I would like to see more of is individual 80k staff engaging with the reasoning.

I think this is a joke, but for those who have less-explicit feelings in this direction:

I strongly encourage you to not join a totalizing community. Totalizing communities are often quite harmful to members and being in one makes it hard to reason well. Insofar as an EA org is a hardcore totalizing community, it is doing something wrong.

6
Peter Berggren🔸
This was semi-serious, and maybe “totalizing” was the wrong word for what I was trying to say. Maybe the word I more meant was “intense” or “serious.” CLARIFICATION: My broader sentiment was serious, but my phrasing was somewhat exaggerated to get my point across.

I really appreciated reading this, thank you.

Rereading your post, I'd also strongly recommend prioritizing finding ways to not spend all free time on it. Not only do I think that that level of fixating is one of the worst things people can do to make themselves suffer, it also makes it very hard to think straight and figure things out!

One thing I've seen suggested is dedicating time each day to use as research time on your questions. This is a compromise to free up the rest of your time to things that don't hurt your head. And hang out with friends who are good at distracting you!

I'm really sorry you're experiencing this. I think it's something more and more people are contending with, so you aren't alone, and I'm glad you wrote this. As somebody who's had bouts of existential dread myself, there are a few things I'd like to suggest:

  1. With AI, we fundamentally do not know what is to come. We're all making our best guesses -- as you can tell by finding 30 different diagnoses! This is probably a hint that we are deeply confused, and that we should not be too confident that we are doomed (or, to be fair, too confident that we are safe).
... (read more)

I hadn't seen the previous dashboard, but I think the new one is excellent!

8
Ben_West🔸
Thanks! @Angelina Li deserves the credit :)

Thanks for the Possible Worlds Tree shout-out!

I haven't had capacity to improve it (and won't for a long time), but I agree that a dashboard would be excellent. I think it could be quite valuable even if the number choice isn't perfect.

"Give a man money for a boat, he already knows how to fish" would play off of the original formation!

2
christian.r
Thanks, Conor!

It's pretty common in values-driven organisations to ask for some amount of value alignment. The other day I helped out a friend with a resume for an organisation which asked that applicants care about their feminist mission.

In my opinion this is a reasonable thing to ask for and expect. Sharing (overarching) values improves decision-making, and requiring it can help prevent value drift in an org.

4
Arepo
What qualifies as 'a (sufficient) amount of value alignment'? I worked with many people who agreed with the premise of moving money to the worst off, and found the actual practices of many self-identifying EAs hard to fathom. Also, 'it's pretty common' strikes me as an insufficient argument - many practices are common and bad. More data seems needed.

I'm really glad to hear it! Polishing is ongoing. Replied on GH too!

2
Paolo Bova
Thanks for pushing the fix for Windows. The share buttons work on my device now.

  1. The probability of any one story being "successful" is very low, and basically up to luck, though connections to people with the power to move stories (e.g. publishers, directors) would significantly help.
  2. Most x-risk scenarios are perfect material for compelling and entertaining stories. They tap into common tropes (hubris of humans and scientists), are near-future disaster scenarios, and can have opposed hawk and dove characters. I imagine that a successful x-risk movie could have a narrative shaped like Jurassic Park or The Day After Tomorrow.
  3. My a
... (read more)

I love Possible Worlds Tree! It's aligned with the optimistic outlook, conveys the content better, and has a mythology pun. I couldn't be happier. Messaging re: bounty!

1
peterhartree
Not sure about this one. Main concerns: 1. Too long. 2. Most people don't know the phrase "possible worlds" in the philosophy/logical sense. The more natural interpretation may be fine. Overall my take is that "Possibility Tree" is better.

Thanks for all the feedback! I think the buffs to interactivity are all great ideas. They should mostly be implemented this week. 

4
Paolo Bova
Great to see the Predict feature. I might have missed this when you first added it, but I've seen it now. It looks great and the tool is easy to use! I also like the additional changes you've made to make the site more polished. A friend and I had some issues when clicking the 'share' button, which I'll post as an issue on GitHub later.

A positive title would definitely help! I'll think on this.

Agreed. I think it needs a 'name' as a symbol, but the current one is a little fudged. My placeholder for a while was 'the tree of forking paths' as a Borges reference, but that was a bit too general...

This isn't exactly what I'm looking for (though I do think that concept needs a word). 

 

The way I'm conceptualizing it right now is that there are three non-existential outcomes:

1. Catastrophe
2. Sustenance  / Survival
3. Flourishing 

If you look at Toby Ord's prediction, he includes a number for flourishing, which is great. There isn't a matching prediction in the Ragnarok series, so I've squeezed 2 and 3 together as a "non-catastrophe" category.  

Thanks! I hadn't thought of user interviews, that's a great idea!

Thank you! And yeah, this is an artifact of the green nodes being filled in from the implicit inverse percent of the Ragnarok prediction rather than having their own prediction. I could link to somewhere else, but it would need to be worth breaking the consistency of the links (all Metaculus Ragnarok links).
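For illustration, a tiny sketch of that inverse-percent fill, using a placeholder probability rather than a real Metaculus figure:

```typescript
// Hypothetical illustration: if a Metaculus Ragnarok question gives a catastrophe
// probability, the green (non-catastrophe) node is just its complement, lumping
// survival and flourishing together.
const ragnarokCatastropheProb = 0.12; // placeholder value, not a real forecast

const nonCatastropheProb = 1 - ragnarokCatastropheProb;

console.log(`Catastrophe: ${(ragnarokCatastropheProb * 100).toFixed(1)}%`);
console.log(`Non-catastrophe (survival + flourishing): ${(nonCatastropheProb * 100).toFixed(1)}%`);
```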

There's good discussion happening in the Discord if you want to hop in there!

Nice!! This is pretty similar to a project Nuño Sempere and I are working on, inspired by this proposal:

https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=vi7zALLALF39R6exF

I'm currently building the website for it while Nuño works on the data. I suspect these are compatible projects and there's an effective way to link up!

1
Mo Putera
I realise I'm responding to an old comment, but was this post of Nuno's the product of your project?
4
Nathan Young
Also happy to give support on this if I can. 
4
Elliot Olds
Awesome! (Ideopunk and I are chatting on discord and likely having a call tomorrow.)

Location: Halifax, Canada

Remote: Yes

Willing to relocate: No

Skills:
- Tech: JavaScript/TypeScript, CSS, React, React Native, Node, Go, Rust
- Writing: Ex. https://www.lesswrong.com/posts/7hFeMWC6Y5eaSixbD/100-tips-for-a-better-life. Also see https://conorbarnes.com/blog

Resume: Portfolio with resume link! https://conorbarnes.com/work

Email: conorbarnes93@gmail.com

Notes:
- Preferably full-time.

- Cause neutral.

- Availability: Anytime!

- Role: Web dev / software engineering

- EA Background: 
-- Following since 2015.
-- Giving What We Can pledge since 2019.
-- 1Day ... (read more)

I second interest in a private submission / private forum option! I intend to submit my entry to a few places soon, but that won't be possible if it's "published" by submitting it here. If there isn't a private option I probably won't submit here.

4
Aaron Gertler 🔸
Here's our new private submission form!