In 1860, Walt Whitman addressed future generations with his poem "Crossing Brooklyn Ferry". On the shores of Brooklyn, he feels he shares a reality with "men and women of a generation, or ever so many generations hence," and he knows it:

[...] I am with you,

Just as you feel when you look on the river and sky, so I felt, 
Just as any of you is one of a living crowd, I was one of a crowd, 
Just as you are refresh’d by the gladness of the river and the bright flow, I was refresh’d, 

What thought you have of me now, I had as much of you—I laid in my stores in advance, 

I consider’d long and seriously of you before you were born. [...]

I first heard this poem in Joe Carlsmith's essay "On future people, looking back on 21st century longtermism." I loved it. I happened to be going to New York a few weeks later, and I happen to enjoy making little videos.

So, I made a video complementing Walt Whitman's poem with scenes from my Brooklyn visit, 160 years later.


If you like this video or the poem, I recommend reading Joe Carlsmith's whole essay.

Here's the section where Joe reacts to Walt Whitman's poem, with longtermism and the idea of "shared reality" in mind: 

It feels like Whitman is living, and writing, with future people — including, in some sense, myself — very directly in mind. He’s saying to his readers: I was alive. You too are alive. We are alive together, with mere time as the distance. I am speaking to you. You are listening to me. I am looking at you. You are looking at me. 

If the basic longtermist empirical narrative sketched above is correct, and our descendants go on to do profoundly good things on cosmic scales, I have some hope they might feel something like this sense of “shared reality” with longtermists in the centuries following the industrial revolution — as well as with many others, in different ways, throughout human history, who looked to the entire future, and thought of what might be possible. 

In particular, I imagine our descendants looking back at those few centuries, and seeing some set of humans, amidst much else calling for attention, lifting their gaze, crunching a few numbers, and recognizing the outlines of something truly strange and extraordinary — that somehow, they live at the very beginning, in the most ancient past; that something immense and incomprehensible and profoundly important is possible, and just starting, and in need of protection.

Thanks to Joe Carlsmith for letting me use his audio, and for writing his essay. Thanks to Lara Thurnherr and Finn Hambley for early feedback on the video.

Comments



I often find it quite hard to emotionally connect with longtermist ideas despite seeing their rational appeal. This was helpful, and sweet. Thank you for sharing.

Beautiful and inspiring. Thanks for sharing this.

I hope more EAs think about turning abstract longtermist ideas into more emotionally compelling media!

Thanks for making this, Michel :)

I have a few tears birthing in my eyes. The video added a touch of liveliness that moved me more than the excerpt you shared in text. Thank you very much!
