
Summary

I've built a site hosting an EA alternative to the Doomsday Clock.

Doom

The Doomsday Clock is the most prominent symbol of existential risk. While it's helped highlight the risks of nuclear war, it's misaligned with EA values in various ways:

  • It does not use quantified probabilities. "100 seconds to midnight" has an ordinal relationship to "90 seconds to midnight" but is not itself a prediction of anything.
  • It is fundamentally pessimistic. A doomsday clock implies an inevitable countdown. One can turn back the clock, but midnight still looms.
  • It focuses on nuclear war and climate change, whereas EAs tend to see more existential risk in AI.

With this in mind, I thought about what symbol effective altruists would make as an alternative if we got to choose the main symbol of existential risk. Then I made it. Check it out and let me know what you think; I'm hoping to promote it further. I also have open questions below.

Potential

The X-Risk Tree is a symbol of the branching possibilities facing humanity. Its primary audience is people who are concerned about global catastrophic risk but feel unable to do anything about it. I think there's a sizable chunk of people in this situation, especially among environmentalists. The tree is intended to show that we can prune the branches of our future, that we have the agency to choose a path that avoids doom. Ideally it feels like an interactive display at a museum.

The numbers are sourced from Metaculus's Ragnarok series. I believe this is an important step up from the Doomsday Clock's non-quantified predictions. However, there are still issues with this approach, as Linch has pointed out.
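
As a rough illustration of how a forecast maps onto the tree (placeholder names and numbers, not the production code):

```typescript
// Illustrative sketch: mapping a Ragnarok-style community forecast onto a
// fixed number of tree branches. All names and numbers are placeholders.
const TOTAL_BRANCHES = 100;

interface Forecast {
  question: string;     // e.g. a Ragnarok-series question title
  pCatastrophe: number; // community prediction, in [0, 1]
}

// One branch per percentage point keeps the mapping legible: a 30%
// forecast colours 30 of 100 branches as "catastrophe".
function catastropheBranches(f: Forecast): number {
  return Math.round(f.pCatastrophe * TOTAL_BRANCHES);
}

const example: Forecast = {
  question: "Will there be a global catastrophe by 2100?",
  pCatastrophe: 0.3,
};
console.log(catastropheBranches(example));                  // 30 branches: catastrophe
console.log(TOTAL_BRANCHES - catastropheBranches(example)); // 70 branches: "sustenance"
```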

Note that alternative predictions from EAs are included on the Collections page.

Ongoing Questions

  • Would people enjoy being able to input data to generate a tree of their own predictions?
  • Would a sharing option for social media (image of tree and text of predictions) be useful?
  • I am not totally happy with the title. If you can convince me of a better one, I will provide a $100 bounty.
  • What could be high-leverage ways to promote it?
  • What could be done on the site to more thoroughly communicate the idea of "existential risk is serious but we can work on it"?
  • Is there a better word than 'sustenance' for outcomes where humanity does not suffer a global catastrophe?

Credits

This was made possible by a Long-Term Future Fund grant.

Linch's feedback was very helpful.

MichaelA's Database of existential risk estimates was crucial for the Collections page.

By The Way!

Shameless plug: If you need a developer, I am currently looking for work!

Comments



Another thought is that the title "x-risk tree" is slightly misleading:

  • The two things I think it visualises are drops in global population of 10% or 95% before 2100
  • So it doesn't visualise the risk of extinction (although it does provide an upper bound)
  • It also doesn't visualise existential risk (x-risk), which could be much higher than extinction risk, so the upper bound doesn't hold

How about replacing the title with something like "How likely is a global catastrophe in our lifetimes?"

Agreed. I think it needs a 'name' as a symbol, but the current one is a little fudged. My placeholder for a while was 'the tree of forking paths' as a Borges reference, but that was a bit too general...

What about something relating to hope? Say "the Tree of Hope". Combining the positive "tree" and the more negative "x-risk" may be slightly odd. But it depends on what framing you want to go for.

A positive title would definitely help! I'll think on this.

Looks nice!!

Thanks for making this, it looks great! Visualizations like this are valuable for explaining the importance of x-risk and GCR mitigation efforts, providing an intuitive way of understanding the associated probabilities.

One recommendation would be to make the non-selected paths more transparent when 'survival' or 'extinction' is selected; this would make the different cases more obvious.

In terms of what other names might work: the image is round, so words that came to mind when I viewed it were 'barometer' or 'compass'. I think both fit in terms of what the visualization is doing, either showing the state of risk the way a barometer shows air pressure, or providing information to (hopefully) steer toward better futures, the way a compass does.

Beautifully made! I love the visuals, and my first impression is that it communicates x-risk in a more hopeful way. The app looks great on mobile too.

Some quick thoughts:

- I anticipated that clicking on a node would either give me a tooltip to explain what that particular node should represent or take me to another page/section of the site which explained these scenarios in more detail. 
- I initially found it strange that all of the green nodes appear to link to the same prediction about population decline. I vaguely understood that this was a source of evidence for the number of green nodes, but the connection is not very clear. The app might benefit from a short explanation of why a user might want to click these nodes. It might also help if hovering over one node highlighted all nodes which send you to the same place (see the sketch after this list).
- I feel that the text on the graph is sufficient for me to understand the different clusters. Still, I wonder if it might look better to use icons to represent these different clusters, and have the longer text appear on hover instead. Of course, I'd keep it as it is if user testing suggested that this change increased confusion.
- I will cast a vote for being able to input my own data. If I could input my own data, I also think it would be fun to share the resulting graphs.
- I don't think I have any ideas for a better title. I do feel that another title should aim to be of a similar length.
- A few ideas for promoting the app to other EAs. It might be nice to give a talk about the web app, or for someone whose work is closely related to predictions for x-risk to show it off in a talk. Also, perhaps you could reach out to one of the university EA groups to see if they'd be interested in having a visual like this to show in some of their introductory talks.
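
A rough sketch of the same-destination hover idea, assuming hypothetical markup where each tree node is an `<a class="node">` linking to its prediction (not based on the site's actual code):

```typescript
// Hovering one node highlights every node that shares the same link target.
document.querySelectorAll<HTMLAnchorElement>("a.node").forEach((node) => {
  // Collect all nodes pointing at the same prediction as this one.
  const peers = () =>
    document.querySelectorAll<HTMLAnchorElement>(
      `a.node[href="${node.getAttribute("href") ?? ""}"]`
    );
  node.addEventListener("mouseenter", () => {
    peers().forEach((peer) => peer.classList.add("highlight"));
  });
  node.addEventListener("mouseleave", () => {
    peers().forEach((peer) => peer.classList.remove("highlight"));
  });
});
```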

Lastly, I'd like to congratulate you on launching the site. I'm sure you've put in a lot of work to get it to this point, and as a result it looks fantastic! 

Thanks for all the feedback! I think the buffs to interactivity are all great ideas. They should mostly be implemented this week. 

Great to see the Predict feature. I might have missed this when you first added it, but I've seen it now. It looks great and the tool is easy to use! I also like the additional changes you've made to make the site more polished. A friend and I had some issues when clicking the 'share' button, which I'll post as an issue on the GitHub later.

I'm really glad to hear it! Polishing is ongoing. Replied on GH too!

Thanks for pushing the fix for Windows. The share buttons work on my device now.

This is cool, thanks for doing this.

  • Is there a better word than 'sustenance' for outcomes where humanity does not suffer a global catastrophe?

There is some discussion here about such a term.

This isn't exactly what I'm looking for (though I do think that concept needs a word). 

 

The way I'm conceptualizing it right now is that there are three non-existential outcomes:

1. Catastrophe
2. Sustenance / Survival
3. Flourishing 

If you look at Toby Ord's prediction, he includes a number for flourishing, which is great. There isn't a matching prediction in the Ragnarok series, so I've squeezed 2 and 3 together as a "non-catastrophe" category.  
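
A minimal sketch of that collapse, with hypothetical field names and placeholder numbers (not Ord's actual figures):

```typescript
// Illustrative sketch: collapsing a three-way outcome forecast into the two
// categories the Ragnarok series supports. Field names are hypothetical.
interface ThreeWayForecast {
  catastrophe: number; // 1. Catastrophe
  survival: number;    // 2. Sustenance / Survival
  flourishing: number; // 3. Flourishing (no matching Ragnarok question)
}

// Survival and flourishing merge into "non-catastrophe" because only the
// catastrophe probability has a direct Metaculus counterpart.
function toTwoWay(f: ThreeWayForecast) {
  return {
    catastrophe: f.catastrophe,
    nonCatastrophe: f.survival + f.flourishing,
  };
}

// Placeholder numbers only:
console.log(toTwoWay({ catastrophe: 0.25, survival: 0.5, flourishing: 0.25 }));
// -> { catastrophe: 0.25, nonCatastrophe: 0.75 }
```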

Lovely idea, lovely presentation, neglected area!

Quick impressions: Toggling the survival/extinction button wasn't clear at first. I thought each branch was going to be a link to an end scenario; imagine my surprise when I clicked on one of the sustenance branches and was linked to the decimation of our civilization.

Thank you! And yeah, this is an artifact of the green nodes being filled in from the implicit inverse percentage of the Ragnarok prediction rather than having their own prediction. I could link to somewhere else, but it would need to be worth breaking the consistency of the links (all Metaculus Ragnarok links).

I am delighted that someone is finally having a crack at this. I've had this idea on my longlist for many years.

Thoughts on naming:

  1. Given that x-risk is a specialist term, and this is a public outreach thing, that's an argument for a more familiar term.

Brainstorm:

  2. Disaster Tree

  3. Tree of Fear and Hope

  4. 10,000 year tree

  5. Future Tree

  6. Possibility Tree

  7. Future Garden

I don't love any of these but maybe (6) is the best.

On (7): you could shift from tree to garden and then have roots for beautiful plants and dead plants, or something.

  8. There are marketing people who specialise in naming. Since the name is so critical to the project, it might be worth hiring a world-class professional to help. There's one guy I know personally (I'll ask him and email you if he's interested). Otherwise it's possible that studiomast.co could help (I don't know them, but someone at Founders Pledge recommended them for branding and visual identity).

I was inspired to brainstorm by your list. 🙂

Tree of Possible Futures
Survival Tree
Survival Map
Our Future of Fire or Ashes
Branches of Light and Darkness
Branches of Life and Death
Map of Cliffs and Crossings
Tree of Paths Forward
Navigating the Future
Map of Futures
Possible Worlds Tree (pun on Yggdrasil, the world tree)
The Choices Before Us
What Lies Ahead
The Branches of Time
Our Branching Futures
Hopes and Endings
Tree of Tomorrows

I love Possible Worlds Tree! It's aligned with the optimistic outlook, conveys the content better, and has a mythology pun. I couldn't be happier. Messaging re: bounty!

Not sure about this one. Main concerns:

  1. Too long
  2. Most people don't know the phrase 'possible worlds' in the philosophical/logical sense. The more natural interpretation may be fine.

Overall my take is that "Possibility Tree" is better.

In other news I nearly wrote "Pissability Tree" here.

Of the other suggestions in the thread, I think Schubert's "Tree of Hope" is the best.

I like the idea of visualising important things to make them feel more salient, and it's fun that this is linked to predictions on Metaculus! I also liked the visualisation of other predictions once I found them. Thanks for making it.

You mention that the purpose is to give doomy people a sense that there is hope and that we can take action to survive. I would be very interested for you to find some of these people and do user interviews or similar, to understand whether it has the effect you hope for! You might also learn how to improve it toward that goal. Have you done anything like this yet?

Thanks! I hadn't thought of user interviews, that's a great idea!

At some point it'd be worth hiring a professional designer and illustrator to develop the idea. Plausibly an actual tree with roots going downwards would be easier to understand, visually.

Perhaps tree at bottom, then branches instead of roots. That feels more optimistic to me.
