
I am excited to announce the sequel to the recently announced book What We Owe The Future. This new book will be called What The Future Owes Us, and will focus primarily on the common criticism “why should I do anything for the future if they won’t do anything for me?”

The first half of the book dives into the ways the future can do things for us. Some of these are straightforward, such as carrying forward our values and continuing our legacies. But it will be argued that the future can do far more impactful and direct things for us via acausal trade and its acausal influence over the past (our present). Due to the vast number of potential people in the future, we expect they will be able to exert a large amount of acausal control over the past (the specifics of this argument will be fleshed out by the future author).

The second half of the book looks at ways we can ensure the future acts in the best interests of the past, just as we have acted in the best interests of the future. We will investigate commitment devices and novel metaphysics that may make this possible. Ultimately, it will be argued that we in the present can control the future, which can in turn have a positive impact on us today.

 Hopefully, this book will bring more moral egoists to longtermism, because the best way to help oneself may be to help the future.

The book is not up for preorder yet, but we do have cover designs.

US Version
UK Version


 

Comments (10)



I love this, haha.

But, as with many things, J.S. Mill did this meme first!!! 

In the Houses of Parliament on April 17th, 1866, he gave a speech arguing that we should keep coal in the ground (!!). As part of that speech, he said:
 

I beg permission to press upon the House the duty of taking these things into serious consideration, in the name of that dutiful concern for posterity [...] There are many persons in the world, and there may possibly be some in this House, though I should be sorry to think so, who are not unwilling to ask themselves, in the words of the old jest, "Why should we sacrifice anything for posterity; what has posterity done for us?"

They think that posterity has done nothing for them: but that is a great mistake. Whatever has been done for mankind by the idea of posterity; whatever has been done for mankind by philanthropic concern for posterity, by a conscientious sense of duty to posterity [...] all this we owe to posterity, and all this it is our duty to the best of our limited ability to repay.

all great deeds [and] all [of] culture itself [...] all this is ours because those who preceded us have cared, and have taken thought, for posterity [...] Not owe anything to posterity, Sir! We owe to it Bacon, and Newton, and Locke, and Bentham; aye, and Shakespeare, and Milton, and Wordsworth.

Huge H/T to Tom Moynihan for sending this to me back in December. Interestingly, in the 1860s there seems to have been a bit of a wave of longtermist thought among the utilitarians, though their empirical views about the amount of available coal were way off.

Let's fulfil Mill's wishes by buying some coal mines.

The future's ability to affect the past is truly a crucial consideration for those with high discount rates. You may doubt whether such acausal effects are possible, but in expectation, on e.g. an ultra-neartermist view, even a 10^-100 probability that it works is enough, since anything that happened 100 years ago is >>10^1000 times as important as today is, with an 80%/day discount rate.
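
A quick back-of-the-envelope check of that multiplier (a sketch under the assumption that an "80%/day discount rate" means each earlier day counts 1/0.2 = 5 times as much as the next; the numbers are illustrative, not the commenter's):

```python
from math import log10

# Assumed interpretation: with an 80%/day discount rate, each earlier day
# is worth 5x the following day.
days_per_year = 365
years = 100
daily_factor = 1 / 0.2  # importance multiplier per day further into the past

# log10 of the total multiplier for something 100 years in the past
log_multiplier = years * days_per_year * log10(daily_factor)
print(f"100 years ago is ~10^{log_multiplier:.0f} times as important as today")
# ~10^25512, comfortably >> 10^1000, so even a 10^-100 chance that acausal
# influence on the past works still dominates in expectation on this view.
```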

Indeed, if we take the MEC (maximize expected choiceworthiness) approach to moral uncertainty, we can see that this possibility of ultra-neartermism + past influence will dominate our actions for any reasonable credences. Perhaps the future can contain 10^40 lives, but that pales in comparison to the >>10^1000 multiplier we can get by potentially influencing the past.
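
For illustration, a toy version of that MEC comparison (all credences and values below are assumptions made for the sketch, not figures from the comment):

```python
from fractions import Fraction

# Toy MEC (maximize expected choiceworthiness) comparison.
# Theory A: longtermism, with a future of ~10^40 lives at stake.
# Theory B: ultra-neartermism + past influence, worth ~10^25000x today's value,
#           times a 10^-100 chance that acausal influence on the past works.
credence_A, choiceworthiness_A = Fraction(999, 1000), Fraction(10**40)
credence_B, choiceworthiness_B = Fraction(1, 1000), Fraction(10**25000, 10**100)

expected_A = credence_A * choiceworthiness_A
expected_B = credence_B * choiceworthiness_B
print(expected_B > expected_A)  # True: theory B dominates at any non-trivial credence
```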

I pre-ordered this next year and fully agree with Stephen Fry. So far, future people seem more caught up in the theory. I’m disappointed that we’re not seeing a lot of direct work from them yet, but I have some hope this book will move the needle.

How much overlap is there between this book & Singer's forthcoming What We Owe The Past?

This is truly spectacular. By far the post of the day.

Building on pioneering work on 'retrocausality' by Huw Price in Time's Arrow and Archimedes' Point: New Directions for the Physics of Time.

Wait, isn't this just a loan (or debt or promise)? Or, put another way, isn't every concept of "owe" implicitly one where the future owes us? If I lend $10 to Bob, future Bob owes me $10. 🤨
 

Hah!

I think it's worth discussing the straight answer to this, though: The future gives back simply by creating many of the things that I want to exist, which is a class of service that encompasses most of my values (I think).
This illuminates an interesting and surprising fact: not all trade requires an exchange of physical objects, or even information. It is, in some cases, possible to have evidence that something will occur without ever entirely confirming it, which we will later find to be a foundational resolution in inter-universal moral trade schemes.

This one is great
