
Summary

EA has an elitist image that may be putting people off the movement. I propose a podcast, Backyard EA, exploring how people can run independent projects that make a significant difference without earning-to-give or pursuing an EA career. The post ends with an appeal for feedback.

 

Motivation for the Podcast

  • Increased interest in EA careers is good for EA organizations, but it may have counterproductive side-effects
  • The EA movement’s focus on earning-to-give and EA careers has historically left little room for non-hyper-achievers
    • Those not earning-to-give or working in an EA career can feel left out, or like they are “all talk”
    • EA podcasts typically feature guests at the top of the field: there is a gap for a show that celebrates the more “everyday” side of EA work
  • There is some great written advice on independent projects, but I can’t find video/audio content on the subject. This seems like a missed opportunity: some people report being overwhelmed by the amount of reading they feel they should be doing.
  • Independent projects, even when unsuccessful, are an excellent way to build skills and experience, and therefore become more effective. Let’s celebrate that!
  • I have experience running an interview podcast (~20 episodes) and believe I have the skills to make a good product

 

Proposal

This would be an interview podcast featuring one or more guests per episode. 

Possible episode types could be:

  • A tour through the lifecycle of an independent project, given by the person who ran it (Person X on her innovative university outreach idea // Person Y on how he used the effectuation model to build his malaria app project)
  • An EA expert giving their slant (The creator of the EA Forum on why contributing can be so powerful // A longtermist on their favorite small EA projects that are helping the future, now! // An EA org founder on the early failures that made them a success // Someone from 80,000 Hours on how to balance direct work with long-shot job applications)
  • Community edition: listeners share their latest project ideas

 

The podcast could grow into a larger community with:

  • online and in-person events
  • a website with write-ups of independent projects
  • a forum for seeking and giving feedback/assistance on projects

 

Potential Weaknesses

  • A low-quality show could give the EA movement a bad name
    • Untested, amateur guests are likely to be of inconsistent quality
  • The show could distract listeners from better EA media
  • Applying for EA jobs may have a high enough expected value that the podcast could be net-negative if it distracts people from applying
  • There may be better ways for me to pursue my priorities (increasing my immediate impact, growing personally, and becoming more embedded in the EA community)

 

Any Feedback?

Give me your feedback on Backyard EA!

I am particularly interested in:

  • Feedback on the strongest/weakest arguments for making the show
  • Suggested changes to the show format
  • Ideas for research/thinking I can do to refine the project
  • Ideas for episodes & suggestions for guests
  • Advice on how to build an audience by tapping into EA networks

Comments

Note that reality has an elitism problem - some people are much more able to affect the world than others. While this is sad, I think it's also true and worth acknowledging.

EA has an elitism problem on top of this, but I think reality's elitism accounts for a large chunk of it.

Thanks for pointing this out. I agree, and I think we can trace the elitism in the movement to well-informed efforts to get the most from the human resources available.

While EA remains on the fringe, we can keep thinking in terms of maximising marginal gains (i.e. only targeting an elite with the greatest potential for doing good). But as EA grows it is worth considering the consequences of maintaining such an approach:

  1. Focusing on EA jobs & earning-to-give will limit the size of the movement, as newcomers increasingly see no place for themselves in it
  2. With limited size comes limited scope for impact: e.g. you can't change things that require a democratic majority
  3. Even if 2) proves false, we probably don't want a future society run by a vaunted, narrow elite (at least based on past experience)

I think this is a lovely idea, I'd be very much in favour of you trying it out so we could all start listening and see how it works in practice!

As you point out in your post, this project would have the most impact if it encourages people who aren't likely to earn to give or pursue an effective career to take on an independent project. As such, I'd suggest that it would be better not to tie the podcast too closely to the EA brand, or assume prior knowledge of EA. The title you're using at the moment would seem pretty confusing to anyone not already in the EA community.

I've been thinking quite a lot about something similar recently - not a podcast specifically, but a way to engage people outside the elitist circles that the EA movement tends to target in high impact projects. I'd love to chat to you directly if you're interested in pursuing this.

Yes, I like the idea of not using the words "effective altruism" in the title at all.

Thanks for your thoughts. You make a good point - EA can be pretty alienating. There's a trade-off: within the EA community there is a ready-made audience, probably lots of potential guests, and less of a need to explain foundational concepts. But perhaps less potential impact, as the podcast might only marginally help insiders increase their impact.

Definitely open to a change in title.

I've sent you a message.

I feel like this proposal conflates two ideas that are not necessarily that related:

  1. Lots of people who want to do good in the world aren't easily able to earn-to-give or do direct work at an EA organization.
  2. Starting altruistically-motivated independent projects is plausibly good for the world.

I agree with both of these premises, but focusing on their intersection feels pretty narrow and impact-limiting to me. As an example of an alternative way of looking at the first problem, you might instead, or in addition, consider having on people who work in high(ish)-impact jobs where there are currently labor shortages.

Overall, I think it would be better if you picked which of the two premises you're most excited about and then went all-in on making the best podcast you could focused on that one.

Thanks, Ian. You make an excellent point: I don't want to unnecessarily narrow my focus here.

Perhaps I should focus on 1) because it also allows a broader scope of episode ideas. "How can ordinary people maximise the good they do in the world?" allows lots of different responses. Independent projects could be one of them.

On the other hand, 2) seems more neglected. There's probably lots out there about startups or founding charities, but I can't find anything on running altruistic projects (except a few one-off posts).

This sounds like a great idea! I think it would have the benefit of empowering more people to do independent projects, because it will make the steps clearer, and humanize people who start them. It also reminds me a bit of this article: https://www.lesswrong.com/posts/DdDt5NXkfuxAnAvGJ/changing-the-world-through-slack-and-hobbies (which argues that free-time projects like hobbies or non-work interests can be very impactful).

Thanks, Amber. Great article.
