This is a linkpost for https://ourworldindata.org/longtermism

Hi everyone! I'm Max Roser from Our World in Data.

I wanted to share an article with you that I published this week: The Future is Vast: Longtermism’s perspective on humanity’s past, present, and future.

In it I try to convey some of the key ideas of longtermism in an accessible way—especially through visualizations like the ones below. 

I hope it makes these ideas more widely known and gets many more people interested in thinking about what we can do now to make the long-term future much better.

 

We have written about some related topics for a long time (in particular war, nuclear war, infectious diseases, and climate change), but overall we want to do more work that is helpful for longtermists and those who work on the reduction of catastrophic & existential risks. To link to one example, I recently wrote this about the risk from nuclear weapons.

My colleagues Charlie Giattino and Edouard Mathieu are starting to work on visualizing data related to AI (e.g., this chart on AI training compute). 

Charlie, Ed, and I are sharing this here because we'd love to hear your thoughts about our work. We're always interested to hear your ideas for how OWID can be helpful for those interested in longtermism and effective altruism (here is Ed's earlier question on this forum).


I assume this is redundant, but I might as well check: have you considered applying to FTX for funding to run the project that is already modelled after yours? It seems like a no-brainer to avoid duplication and use your tech/branding to deliver this, though I'm sure there are things I don't understand.

Thank you so much for doing this. I like the push to establish longtermism as something outside of EA, which I guess this is part of. 

I have a lot of respect for your work and find your non-partisan, numbers-focused approach really useful when discussing things with people.

I really enjoyed the article. A well-written, short introduction and great (as usual) visualisations which will likely see widespread use for conveying the scope of our future.

Personally, I didn't find the 17m * 4600km beach analogy for 625 quadrillion people super intuitive (and yes, I know, such numbers are basically never intuitive). A framing I found a bit easier to grasp compared the total possible number of humans to the seconds in a whole year, and said that the number of humans so far equals only a few seconds after midnight on New Year's, or something like that. But that's just a tiny personal preference; you've probably thought about such analogies a lot more.

Thanks for clearly presenting numbers and topics that are difficult to convey; it's great!

I was really struggling to find a way to make this work. I should have asked you earlier! Time could be a very nice way to illustrate that. 

It would also work nicely with the metaphor of the earlier illustration in the post, the hour glass.

 

But I'm not sure it works nicely when I put numbers on it:

1 year is (60*60*24*365) = 31,536,000 seconds.

The point estimate for this year's global population is 7,953,952,577.

So if 1 person equals 1 second, then today's world population corresponds to 7,953,952,577 / 31,536,000 = 252.2 years.

And 625 quadrillion seconds are 625,000,000,000,000,000 / 31,536,000 = 19,818,619,989.9 years. Almost 20 billion years. Way older than the Universe.

 

The numbers are so large that it is hard to make it work, no?

 

Making the time unit smaller would be another way to make this work. 

Just for the sake of it: 

One second is equal to 1,000,000,000 nanoseconds, so each tick of a second represents one billion people.

So today's population is 7,953,952,577 / 1,000,000,000 = 7.95 seconds.

 

1 year is (1,000,000,000*60*60*24*365) = 31,536,000,000,000,000 nanoseconds.

This means the future population is represented by 625,000,000,000,000,000 / 31,536,000,000,000,000 = 19.8 years.

So, if we go with the 1 person = 1 nanosecond illustration, then today's world population is represented by 8 seconds, while this future population would be 19.8 years.

That definitely feels more intuitive than the 1 person = 1 second illustration, but it has the downside that no one has an intuition for nanoseconds, I guess.
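Just to make the arithmetic easy to check, here's a short Python sketch of both versions of the time analogy, using the same figures as above (today's population of 7,953,952,577 and 625 quadrillion potential future people):

```python
# Sanity-checking the two time analogies discussed in this thread.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365   # 31,536,000
POP_TODAY = 7_953_952_577               # point estimate for this year
POP_FUTURE = 625 * 10**15               # 625 quadrillion

# Analogy 1: one person = one second
today_as_years = POP_TODAY / SECONDS_PER_YEAR    # ~252.2 years
future_as_years = POP_FUTURE / SECONDS_PER_YEAR  # ~19.8 billion years

# Analogy 2: one person = one nanosecond (one billion people per tick)
NS_PER_SECOND = 10**9
today_as_seconds = POP_TODAY / NS_PER_SECOND                       # ~7.95 s
future_as_years_ns = POP_FUTURE / (NS_PER_SECOND * SECONDS_PER_YEAR)  # ~19.8 years

print(f"1 person = 1 second:     today {today_as_years:.1f} years, "
      f"future {future_as_years / 1e9:.1f} billion years")
print(f"1 person = 1 nanosecond: today {today_as_seconds:.2f} s, "
      f"future {future_as_years_ns:.1f} years")
```

(The variable names are just for illustration; the figures match the ones used in the comment.)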

 

 

What do you think? I like your idea of using time, but I find it hard to imagine 20 billion years and I also find it hard to have an intuition of nanoseconds (but maybe 1 billion people=1 second works).

 

Thanks for the idea! I'm not sure what I'm going to do, but it was fun to explore these numbers in this way.

Do you have another creative idea for how we could make this illustration work?

I think if the comparison you're interested in is that between today's population and the future population, it doesn't really matter whether the thing representing 1 person is intuitive or not, so long as the things representing the two compared populations are intuitive.

Thanks for doing the calculations! I agree, not straightforward. But like Erich said, it was not about representing a single human. It was about imagining humanity's "progress bar" (from the first human to the final, 625 quadrillionth human in a billion years) as one year, with humanity today being only 8 seconds or so into that year-long progress bar. The idea being that framing progress as seconds in a year is more intuitive than saying 0.0[...]01 %.

You could have a big clock and it could be just after midnight. Then there could be a cut-away for the bit just after midnight saying "this is the time of all the humans that have ever lived", with it cut up.

Then the rest could be coloured, saying "this is all the future time of a conservative estimate of humans to live". 

Something like this, though I think it's pretty messy: a big clock face for the first hour and then others for the next 23.

 


I loved this article! I have used it to explain my interests to family who aren't familiar or emotionally connected with longtermism. I also frequently used OWID pieces (e.g. health and climate) when working in the FCDO; it became IMO the most credible and impartial source for providing new ideas and information to us, and I think OWID can achieve this for longtermism-related data.

I wondered if it would be possible to add a short animation: first, the hourglass representing past and present people (in tens of millions), then zooming out to show a third section at the top of the hourglass, representing future people dripping into the present-people section. For me, this would be a more emotive visualisation of (a) the scale and (b) how connected we are to future people than the existing two visualisations. 

This was fantastic, thanks for sharing! 

I think there are a lot of inferential steps most people would need to go through to get from their current worldview to a longtermist worldview. But a pretty massive one is just getting people to appreciate how big the future could be, and I think this post does a great job of that.

An added bonus is that the idea that the future could be huge is a claim the longtermist community is particularly certain of (whereas other important ideas, such as the likelihood of various existential risks and what we can do about them, are extremely uncertain and contested). Quantifying how big the future could be, or is in expectation, is really difficult, but the idea that it could be extremely big stands up to scrutiny quite well. I think it's really useful to have such beautifully illustrated graphs that put where humanity is now into context; I'm excited to use them for future work on longtermism at Giving What We Can.

Re: something that would be useful for OWID on longtermism. I'd be very interested in approximate data on the amount of funding directed each year to improving the very long-term future. Given there would be a lot of difficult edge cases (e.g., should climate change funding be included?), it may need to be operationalised quite narrowly (perhaps "How much money do we spend each year on avoiding human extinction?" would be better). 

Thanks. Very good to hear!

 

Yes, the question about tracking funding is one that is on our list – it'd be so helpful to understand this. But building and maintaining this would be quite a major undertaking. To do it well we'd need someone who can dedicate a lot of time and energy to it. And we are still a very small team, so realistically we won't be able to do that in the next few years.

Makes sense! From your appearance on the 80,000 Hours podcast, I was shocked by how much you have managed to do given you're such a small team. I'm really looking forward to seeing what you accomplish as you expand :) 

Customisable longtermist graph.

While I like the hourglass graph, I think it may underestimate the amount of conscious time that could yet be lived. Would it be worth having a diagram where people can put in their own assumptions (number of concurrent human-equivalent lives, how long consciousness will be around) and have it generate a graph based on those?

I like that idea! I'd be happy to find $5k in retroactive funding for someone who makes a nice version of this (where what counts as 'nice' is judged by me). I'd also be happy to discuss upfront funding (including for larger amounts if it turns out that I'm miscalibrated about the amount of required work) – DM me if you're interested or know someone who may be a good fit for producing such an interactive graph.

You forgot to add one of my favorite infographics! ;)

I've been a big fan of your work for many years now, and I'm really glad you're taking a stab at explaining longtermism! I remember being in school many years ago, before the EA movement was a thing, trying to explain my intuitions around longtermism to others and finding it difficult to communicate. I feel like we really need some introductory material pitched at something like a 4th-grade reading level, so it's approachable by a wider audience and by kids.

How did you make that graph? A Python library? It looks really nice!
