
For the next few weeks, all new EA Forum posts will have AI narrations.

We're releasing this feature as a pilot. We will collect feedback and then decide whether to keep the feature and/or roll it out more broadly (e.g. for our full post archive).

This project is run by TYPE III AUDIO in collaboration with the EA Forum team.

How can I listen?

On post pages

You'll find narrations on post pages; you can listen to them by clicking on the speaker icon:

On our podcast feeds

During the pilot, posts that get >125 karma will also be released on the "EA Forum (Curated and Popular)" podcast feed: 

EA Forum (Curated & Popular)
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.

Subscribe:
Apple Podcasts | Spotify | RSS | Google Podcasts (soon)

This feed was previously known as "EA Forum (All audio)". We renamed it for reasons explained in the footnote.[1]

During the pilot phase, most "Curated" posts will still be narrated by Perrin Walker of TYPE III AUDIO.


Posts that get >30 karma will be released on the new "EA Forum (All audio)" feed:

EA Forum (All audio)
Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing.

Subscribe:
Apple Podcasts | Spotify | RSS | Google Podcasts (soon)

How is this different from Nonlinear Library?

The Nonlinear Library has made unofficial AI narrations of EA Forum posts available for the last year or so.

The new EA Forum AI narration project can be thought of as "Nonlinear Library 2.0". We hope our AI narrations will be clearer and more engaging. Some specific improvements:

  • Audio notes to indicate headings, lists, images, etc.
  • Specialist terminology, acronyms and idioms are handled gracefully. Footnotes too.
  • We'll skip reading out long URLs, academic citations, and other things that you probably don't want to listen to.
  • Episode descriptions include a link to the original post. According to Nonlinear, this is their most common feature request!

We'd like to thank Kat Woods and the team at Nonlinear Library for their work, and for giving us helpful advice on this project. 

What do you think? 

We'd love to hear your thoughts!

To give feedback on a particular narration, click the feedback button on the audio player, or go to t3a.is.

We're keen to hear about even minor issues: we control most details of the narration system, and we want to polish it. The system, which is being developed by TYPE III AUDIO, will be rolled out for thousands of hours of EA-relevant writing over the summer.[2]

To share feature ideas or more general feedback, comment on this post or write to eaforum@type3.audio.

  1. ^

    The reason for this mildly confusing update is that the vast majority of people are subscribed to the existing "All audio" feed, but we think most of them don't actually want to receive ~4 episodes per day. If you're someone who wants to max out the number of narrations in your podcast app, please subscribe to the new "All audio" feed. For everyone else: no action required.

  2. ^

    Are you a writer with a blog, article or newsletter to narrate? Write to team@type3.audio and we'll make it happen.

Comments (24)



Would you prefer a female narrator?

Sample (Sara, US).

Agree vote if "yes", disagree vote if "no".

Let each author decide?

Thanks. It's a nice idea. At some point we might enable authors (or listeners!) to select their favourite voices. This would increase our costs quite a lot (see my reply to Nathan), so I doubt we'll do this before the end of 2023, unless we find evidence of strong demand.

Randomise? Or different narrators for different topics?

Thanks! We looked into randomising between a couple of voices a while ago. To my surprise, we found that all the voice models on our text-to-speech service (Microsoft Azure) perform somewhat differently. This means our quality assurance costs would go up quite a lot if we started using several voices.

I'd also guess that once listeners become familiar with a particular voice, their comprehension improves and they're able to listen faster. I have some anecdotal evidence of this, but I'm pretty unsure how big of a deal it is.

Maybe include female / UK as another reference point, so we're not comparing across two dimensions at once?

Thanks! I would have liked to do this, but in our quick tests the UK female voice models provided by our text-to-speech service (Microsoft Azure) were quite buggy. We frequently experiment with the latest voice models on Azure and other platforms, so I expect we'll find a good UK female option in the coming months.

Would you prefer a US English narrator?

Sample (Eric, US).

Agree vote if "yes", disagree vote if "no".

This is fantastic! Very excited to see work in this space, as I much prefer audio to reading long posts. I use the apps NaturalReader and Speechify to read articles, as I find it helps me stay more focused, and one thing I love about them is the highlighting while reading. I suspect you're going for a different audience with this (i.e. people who want to listen to articles as podcasts without following along with the text), but I just thought I'd flag this as something I've found useful.

Great work all!

Tentative suggestion: Maybe try to find a way to include info about how much karma the post has near the start of the episode description, in the podcast feed?

Reasoning:

  • This could help in deciding what to listen to, at least for the "all audio" feed. (E.g. I definitely don't have time for even just all AI-related episodes in there.) 
  • It could also lead to herd-like behavior, or to ignoring good content that didn't get lots of karma right away. But I think that is outweighed by the above benefit.
  • OTOH this may just be infeasible to do in a non-misleading way, if you put things in the feed soon enough after they're posted that the karma hasn't really stabilized yet* and if it's hard to automatically update the description to reflect karma scores later.
    • *My rough sense is that karma scores are pretty stable after something like 3-7 days - stable enough that something like "karma after 5 days was y" is useful info - but that if you can only show karma scores after e.g. 1 day then that wouldn't be very informative. 

Thanks Michael, karma and author name do seem reasonable to add if we can easily keep episodes up to date from a technical perspective. Will put this on our list and work out how to prioritize it.

Thanks! This seems valuable.

One suggestion: Could the episode titles, or at least the start of the descriptions, say who the author is? 

Reasoning:

  • I think that's often useful context for the post, and also useful info for deciding whether to read it (esp. for the feed where the bar is "just" >30 karma). 
  • I guess there are some upsides to nudging people to decide just based on topic or the start of the episode rather than based on the author's identity. But I think that's outweighed by the above points.

Thanks Michael! This was a strange oversight on our part—now fixed.

I personally find this very valuable - thanks for the work here. As text-to-speech gets better, I expect it to become increasingly valuable.

This is fantastic! Props to the Type 3 Audio and EA Forum team.

Quick question regarding accessibility:

I'm aware EA Forum posts can both:

  • Customize the alt text of an image
  • Provide captions to an image

Is this information included in these audio narrations?

Great question. If authors include image captions, we read them, but I think we're skipping the image alt texts at the moment. We actually wrote the code to read alt texts, but I think we forgot to ship it in this first release. This was a mistake on our part—we'll fix it this week or next.

Wait nonlinear doesn't link back to original posts? That feels very bad to me.

Sadly, this feature turned out to be quite the technical challenge.

Yeah. We do say at the beginning of every episode the title, author, and where to find it, and it's in the show notes, but not a link. 

It does have a link on the sub-channels on Spotify, because for some weird arcane technical reasons, that was fine. 

In Google Podcasts it has a working link too.

oh, so the URL is provided, but not a link? I'm surprised people care about the distinction.

The URL is only there for some of the sub-channels on some of the platforms. But always the title, author, and source.

I would love to use the RSS feeds to listen to the posts via my favorite podcatcher, but the RSS feed for both (Curated & Popular / All) linked above is identical. Can you fix that? Thank you :)

Hi Lia. I think the RSS links above are correct.

To confirm, the RSS links are as follows:

Does this help?
