
Summary

EA has an elitist image which may be putting people off the movement. I propose a podcast, Backyard EA, exploring how people can run independent projects that make a significant difference without earning-to-give or an EA career. The post ends with an appeal for feedback.

 

Motivation for the Podcast

  • Increased interest in EA careers is good for EA organizations, but it may have counterproductive side-effects
  • The EA movement’s focus on earning-to-give and EA careers has historically struggled to accommodate non-hyper-achievers
    • Those not earning-to-give or in an EA career feel left out or like they are “all talk”
    • EA podcasts typically feature guests at the top of the field: there is a gap for a show that celebrates the more “everyday” side of EA work
  • There is some great written advice on independent projects, but I can’t find video/audio content on the subject. This seems like a missed opportunity: some people report being overwhelmed by the amount of reading they feel they should be doing.
  • Independent projects, even when unsuccessful, are an excellent way to build skills and experience, and therefore become more effective. Let’s celebrate that!
  • I have experience running an interview podcast (~20 episodes) and feel I have sufficient skills to make a good product

 

Proposal

This would be an interview podcast featuring one or more guests per episode. 

Possible episode types could be:

  • A tour through the lifecycle of an independent project, given by the person who ran it (Person X on her innovative university outreach idea // Person Y on how he used the effectuation model to build his malaria app project)
  • An EA expert giving their slant (The creator of the EA Forum on why contributing can be so powerful // A longtermist on their favorite small EA projects that are helping the future, now! // An EA org founder on the early failures that made them a success // Someone from 80,000 Hours on how to balance direct work with long-shot job applications)
  • Community edition: listeners share their latest project ideas

 

The podcast could grow into a larger community with:

  • online and in-person events
  • a website with write-ups of independent projects
  • a forum for seeking and giving feedback/assistance on projects

 

Potential Weaknesses

  • A low-quality show could give the EA movement a bad name
    • Untested, amateur guests are likely to be of inconsistent quality
  • The show could distract listeners from better EA media
  • Applying for EA jobs may have a high enough expected value that the podcast could be net-negative if it distracts people from doing so
  • There may be better ways for me to maximize my priorities (increasing my immediate impact, personal growth & becoming more embedded in the EA community)

 

Any Feedback?

Give me your feedback on Backyard EA!

I am particularly interested in:

  • Feedback on the strongest/weakest arguments for making the show
  • Suggested changes to the show format
  • Ideas for research/thinking I can do to refine the project
  • Ideas for episodes & suggestions for guests
  • Advice on how to build an audience by tapping into EA networks


Comments



Note that reality itself has an elitism problem: some people are much more able to affect the world than others. While this is sad, I think it's also true and worth acknowledging.

EA may have an elitism problem on top of this, but I think this underlying reality accounts for a large chunk of it.

Thanks for pointing this out. I agree, and I think we can trace the elitism in the movement to well-informed efforts to get the most from the human resources available.

While EA remains on the fringe we can keep thinking in terms of maximising marginal gains (i.e. only targeting an elite with the greatest potential for doing good). But as EA grows it is worth considering the consequences of maintaining such an approach:

  1. Focusing on EA jobs & earning-to-give will limit the size of the movement, as newcomers increasingly see no place for themselves in it
  2. With limited size comes limited scope for impact: eg you can't change things that require a democratic majority
  3. Even if 2) proves false, we probably don't want a future society run by a vaunted, narrow elite (at least based on past experience)

I think this is a lovely idea, I'd be very much in favour of you trying it out so we could all start listening and see how it works in practice!

As you point out in your post, this project would have the most impact if it encourages people who aren't likely to engage in earn-to-give or an effective career to pursue an independent project. As such, I'd suggest that it would be better not to tie the podcast too closely to the EA brand, or assume prior knowledge of EA. The title you're using at the moment would seem pretty confusing to anyone not already in the EA community.

I've been thinking quite a lot about something similar recently - not a podcast specifically, but a way to engage people outside the elitist circles that the EA movement tends to target in high impact projects. I'd love to chat to you directly if you're interested in pursuing this.

Yes, I like the idea of not using the words "effective altruism" in the title at all.

Thanks for your thoughts. You make a good point: EA jargon can be pretty alienating. There's a trade-off: within the EA community there is a ready-made audience, probably lots of potential guests, and less need to explain foundational concepts; but perhaps less potential impact, as the podcast might only marginally help insiders to increase their impact.

Definitely open to a change in title.

I've sent you a message.

I feel like this proposal conflates two ideas that are not necessarily that related:

  1. Lots of people who want to do good in the world aren't easily able to earn-to-give or do direct work at an EA organization.
  2. Starting altruistically-motivated independent projects is plausibly good for the world.

I agree with both of these premises, but focusing on their intersection feels pretty narrow and impact-limiting to me. As an example of an alternative way of looking at the first problem, you might consider instead, or in addition, having people on who work in high(ish)-impact jobs where there are currently labor shortages.

Overall, I think it would be better if you picked whichever of the two premises you're most excited about and then went all-in on making the best podcast you could focused on that one.

Thanks, Ian. You make an excellent point: I don't want to unnecessarily narrow my focus here.

Perhaps I should focus on 1) because it also allows a broader scope of episode ideas. "How can ordinary people maximise the good they do in the world?" allows lots of different responses. Independent projects could be one of them.

On the other hand, 2) seems more neglected. There's probably lots out there about startups or founding charities, but I can't find anything on running altruistic projects (except a few one-off posts).

This sounds like a great idea! I think it would have the benefit of empowering more people to do independent projects, because it will make the steps clearer, and humanize people who start them. It also reminds me a bit of this article: https://www.lesswrong.com/posts/DdDt5NXkfuxAnAvGJ/changing-the-world-through-slack-and-hobbies (which argues that free-time projects like hobbies or non-work interests can be very impactful).

Thanks, Amber. Great article.
