Hey everyone, I’m the producer of The 80,000 Hours Podcast, and a few years ago I interviewed AJ Jacobs on his writing, his experiments, and EA. And I said that my guess was that the best approach to making a high-impact TV show was something like: You make Mad Men — same level of writing, directing, and acting — but instead of Madison Avenue in the 1960s, it’s an Open Phil-like org.

So during COVID I wrote a pilot and series outline for a show called Bequest, and I ended up with something like that (in that the characters start an Open Phil-like org by the middle of the season, in a world where EA doesn't exist yet), combined with something like: Breaking Bad, but instead of raising money for his family, Walter White is earning to give. (That’s not especially close to the story, and I'm not claiming it’s anywhere near that quality, but that’s the inspiration!)

My aim was to create a show that’s popular independent of the message, thinking that if folks are super engaged they'll naturally learn about core EA ideas — like how fans of Mad Men can’t help but learn a lot about advertising in the 1960s.

And then in big red letters in my mind I had the warning: “Don’t be preachy, it’s a massive turn-off”. So I decided against exploring core EA ideas until 4 or 5 episodes into a 10-episode show (where in a perfect world it’d end up getting multiple seasons).

Now, actually getting a high-quality TV show made feels close to impossible for any given idea or script, so you’re really just buying a raffle ticket. But with Bequest I had a brief glimmer of hope: some impressive folks liked the script and passed it on to industry connections. As far as I know, none of those influential people read it, but the initial interest was promising. I’d also thought that if a TV show seemed unrealistic, maybe I could turn it into a novel instead.

Then FTX imploded — and suddenly a show about a man who’d committed serious crimes and now wanted to donate $1B+ to effective causes seemed a lot less fun and exciting to EAs!

So I shelved it, knowing that even if we could press a button to have Vince Gilligan (the creator of Breaking Bad) make a brilliant version of Bequest for Netflix — there’d be plenty of people in the community who’d vote against that given the SBF scandal. And it’s not the kind of thing anyone should be excited about pushing forward unilaterally.

Anyway, flash forward to today: we’re releasing an 80k podcast episode I hosted with Elizabeth Cox, who founded an independent production company with a ~$2.5M grant from Open Phil. In our conversation, I use Bequest as an example of a totally different approach to doing good via storytelling than the one Elizabeth went with for her new show Ada — so I figured I might as well share the pilot script and the 10-episode outline here for anyone who’d be interested.

[Flagging that the pitch deck / series outline contains massive spoilers for the script — so if you’re up for reading both, I’d recommend starting with the script!]

I’ve lowered my goal slightly since the start of this project, from “make one of the best shows ever!” to “give more than 15 people an entertaining 45-minute read!” — would be great to hear from you if I’m closing in on the new target!

Pilot script link

Pitch deck / series outline link

Email: Keiran.J.Harris [at] gmail [dot] com

Comments (11)

The trailer for Ada makes me think it falls in a media no man's land between extremely low-cost but potentially high-virality creator content, and high-cost, fully produced series that go out on major networks. Interested to hear how Should We are navigating the (to me) inorganic nature of their approach.

Sounds like Bequest was making a speculative bet on the high-cost, fully produced end, which I think is worthwhile. When I think about in-the-water ideas like environmentalism and social justice, my sense is they leveraged media by gently injecting their themes/ideas into independently engaging characters and stories (i.e. the kinds of things for-profit studios would want to produce regardless of whether these ideas appeared in the plot).

Less seriously, you might enjoy my April 1, 2022 post on Impact Island.

Oh wow just read the whole pilot! It's really cool! Definitely an angle on doing the most good that I did not expect.

That's so great to hear — really appreciate it!

I just wanted to say I like this idea.

Thanks for sharing this! I really enjoyed the script and the pitch deck; I found the ideas really original, and I think it would be exciting to watch. I hope you continue to write creatively because I think you have a real talent for it.

Thank you so much Amber, what a lovely comment!

I would love to read it if I had the time. But I think you'd have more of an impact by getting NON-EA people to read it rather than people who are already on board?

Yeah, I think that'd definitely be true if I had scripts for all 10 episodes, but the plan was to introduce EA ideas from episodes 4–10 — and so there isn't much to learn for anyone in the pilot. The goal was really just to make it as engaging as possible so people would come back for episode 2.

There is one page at the end of the pitch deck on doing good, but it's just a shorter version of this Effective altruism in a nutshell piece I wrote — so I think it'd be better to share that with non-EAs.
