Considering the current trajectory of the EA community, we have drawn a number of key conclusions that point to needs we believe .impact is well-placed to address:

 
  1. As the community grows, it’ll likely become increasingly important to communicate EA ideas clearly and avoid inaccurate messaging.

  2. Community builders (local groups included) would benefit from a greater variety of resources and a higher standard in terms of shareability, digestibility and appeal.

  3. The success of local groups is highly dependent on the appeal of the group leader.

  4. The median EA donation ($330) was pretty low. There could be various reasons for this, but we can only really pin down an explanation when .impact conducts the next EA Survey. If EAs think they should donate more but don’t, there could be a fundamental disconnect between belief and action. Do we need new incentives or additional prompting to donate?

 

Community Building and Coordination

 

Many EAs have come across the movement via articles, but those who would be put off by lengthy text have few alternative opportunities to encounter effective altruism. The instances where we have branched out into various media and outreach have proved valuable in driving more people towards the community, e.g. Peter Singer’s TED talk, the Sam Harris podcast with Will MacAskill, EAG, the pledge drive and so on.

 

There is an abundance of existing content that .impact intends to refine into concise, enjoyable forms of media, making the information clearer whilst being careful not to oversimplify it. We will focus on making resources that allow the community to easily share EA ideas via engaging and digestible material.

 

These resources will also aid the Local EA Network. With LEAN, we have seen that a local group is more likely to succeed with a charismatic group leader. However, this combination of charisma, enthusiasm about EA, and willingness to put in the work is scarce. LEAN therefore gives guidance on how to lead, what kinds of events to hold, and how to have an impact, so that a group’s (sustained) success is no longer solely dependent on an individual’s character. In other words, we give group leaders the training to succeed. Likewise, videos or other resources that introduce EA in a fun and appealing way can function as a supplement where charisma is lacking.

 

As the community grows, it’s becoming more important to have a clear message and avoid miscommunication. For example, we need to ensure that group leaders are up to speed on the key concepts of EA, and that it’s easy to learn the necessary points via engaging resources:

 
  • Engaging, digestible videos: explaining key concepts, transferring charisma, circumventing jargon, and shortening the learning experience without oversimplifying.

  • Podcasts or interviews: this may involve reaching out to existing popular podcasts or creating our own, depending on further research.

  • Infographics, memes, handouts: creating explanatory resources for newcomers (particularly useful for LEAN groups to distribute, and something they requested in our LEAN survey).

  • Resources for groups to measure impact: feedback forms, tips on useful metrics and so on.

  • An interactive EA flowchart to navigate the various resources, so users can direct their own experience of learning about EA. The learning experience could be filtered by time and format (3-second meme, 3-minute video, 10-minute article, etc.) and by intention (introductory learning, short action, in-depth learning, long-term commitment, etc.). The flowchart would also serve, via behavioural analytics, as a means of determining which resources are most popular with particular audiences.

 

The interactive EA flowchart is a good example of our ethos: we want to create a fun learning experience that appeals to a variety of users and eases the process of communicating about EA, while internally collecting behavioural data that can inform our future strategy.
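To make the idea concrete, here is a minimal sketch of how such a flowchart’s resource catalogue and filtering might be modelled. This is purely illustrative: the `Resource` fields, the example entries and URLs, and the `suggest` function are assumptions made for the sake of the example, not a description of any existing .impact system.

```typescript
// Hypothetical data model for an interactive EA resource flowchart.
// Resources are tagged by format, length and intention, so the front end
// can filter which step to show each user next.

type Format = "meme" | "video" | "article" | "course";
type Intention = "introductory" | "short-action" | "in-depth" | "long-term";

interface Resource {
  id: string;
  title: string;
  url: string;
  format: Format;
  lengthSeconds: number;   // e.g. 3 for a meme, 180 for a short video
  intentions: Intention[]; // the learning goals this resource serves
}

// Illustrative catalogue entries only; real titles and URLs would differ.
const catalogue: Resource[] = [
  { id: "intro-meme", title: "What is EA?", url: "https://example.org/meme",
    format: "meme", lengthSeconds: 3, intentions: ["introductory"] },
  { id: "intro-video", title: "Effective giving in three minutes", url: "https://example.org/video",
    format: "video", lengthSeconds: 180, intentions: ["introductory", "short-action"] },
  { id: "depth-article", title: "A ten-minute introduction to cause prioritisation", url: "https://example.org/article",
    format: "article", lengthSeconds: 600, intentions: ["in-depth", "long-term"] },
];

// Suggest resources given how much time the user has and what they want to do.
function suggest(maxSeconds: number, intention: Intention): Resource[] {
  return catalogue.filter(
    (r) => r.lengthSeconds <= maxSeconds && r.intentions.includes(intention)
  );
}

// Logging which suggestions users actually click would provide the
// behavioural analytics mentioned above.
console.log(suggest(300, "introductory").map((r) => r.title));
```

In practice the flowchart would be a front end built over a catalogue like this, with each click logged so that the behavioural analytics described above have a simple event stream to work from.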

 

Our move to create and distribute high-quality resources represents a long-term approach to a) involving more people in the community and b) strengthening the commitment of existing community members.

 
 

Impact Missions, Peer-to-Peer Fundraising and Matching Donations

 

As a means of increasing and coordinating the impact of the EA community, we will be leading Impact Missions throughout the year.

 

These Impact Missions could take the form of anything from a community-wide effort to change a particular policy to a coordinated effort to translate EA materials into languages other than English. The intention is to make waves through concentrated, coordinated effort. Our peer-to-peer fundraisers are one iteration of this.

 

As part of our immediate impact, .impact is taking over a project that was previously (very successfully) run by Charity Science: peer-to-peer fundraising campaigns. There are two main reasons for this:

 
  • .impact has great potential to develop these fundraising activities, particularly given that we support ~300 Local Groups and a growing network of SHIC Clubs.

  • Charity Science has decided to focus on their direct poverty project, Charity Science Health.

 

We intend to provide platforms for fundraising as well as to come up with fun and innovative campaigns. We will encourage both LEAN and SHIC groups to take part, thereby:

 
  • Increasing people’s commitment to their respective local groups and EA as a whole.

  • Creating an opportunity to bond as a group, and to learn from other EA groups.

  • Giving groups an active dimension, and providing an alternative to lecture/discussion meetups.

  • Drawing in a new crowd, a proportion of whom may only take part in the fundraiser, while others will likely go on to engage further with EA.

  • Creating at least one clear metric by which groups can measure their success, and hopefully nudging them towards more data collection (something we want to encourage regardless).

 

Our first fundraiser and Impact Mission will be a peer-to-peer winter fundraiser, ‘Season’s Givings’. We are currently gathering matching funds for this project, and are seeking people who have (or would like to gain) experience with fundraising; please do contact us if you can help us make progress with either of these goals.

 

Donate to .impact

 

We are currently fundraising for our 2017 operations. If you are interested in helping .impact continue and scale, please get in touch.

 

Email: georgiedotimpact@gmail.com

Chat: calendly.com/georgiedotimpact

 

See .impact update 1 of 3 here, and 2 of 3 here.

Comments


[anonymous]:

The instances where we have branched out into various media and outreach have proved valuable in driving more people towards the community, e.g. Peter Singer’s TED talk, the Sam Harris podcast with Will MacAskill, EAG, the pledge drive and so on.

The examples here are the best outcomes that were generated by people who spent quite a bit of time developing a following. I don't think they're representative of what media-based outreach looks like on average.

As some useful data points: CEA isn't currently trying to promote EA through media outreach except in cases where a) the audience is large and promising and b) we have access to a platform that lets us dig into the issues in depth (e.g. podcasts). This is because we've consistently failed to see much of a return from mass-media-style stories about EA and are worried about putting EA in front of a large audience where we can't dig into the ideas in depth.

Since I've been at CEA, we launched a major media campaign around Will's book with mixed results (it's unclear whether it was worth it), and attempts to promote GWWC through media outreach don't seem to have been particularly successful. This mirrors my experience in my previous job, where we worked with multiple outside PR firms on projects with little to show for it.

We don't plan on doing any mass media. I can see how the bit you quoted might be related to mass media, but hopefully the rest of the post clarifies that our focus will be on resources for LEAN, since our LEAN survey showed significant demand for this.

The median EA donation ($330) was pretty low. There could be various reasons for this, but we can only really pin down an explanation when .impact conducts the next EA Survey.

According to the reports, the first survey, run in 2014 (i.e. reported in 2015), found a median donation of $450 in 2013, with 766 people reporting their donations.

The next survey, run in 2015 (i.e. reported in 2016), found a median donation of $330 in 2014, with 1,341 people reporting their donations.

Repeating the survey has gathered more data and actually produced a lower estimate. I'm interested in how the third survey will help us understand this better.

Me too! We're in the process of creating the survey now and will be distributing it in January. This is one thing we're going to address, and if you have suggestions about specific questions, we'd be interested in hearing them.

Unless you have a specific hypothesis that you are testing, I think the survey is the wrong methodology to answer this question. If you actually want to explore the reasons why (and expect there will not be a single answer) then you need qualitative research.

If you do pursue questions on this topic in a survey format, it is likely you will get misleading answers unless you have the resources to very rigorously test and refine your question methodology. Since you will essentially be asking people if they are not doing something they have said is good to do, there will be all sorts of biases at play, and it will be very difficult to write questions that function the way you expect them to. To the best of my knowledge, question testing didn't happen at all with the first survey; I don't know if any happened with the second.

I appreciate the survey uses a vast amount of people's resources, and is done for good reasons. I hate sounding like a doom-monger, but there are pitfalls here and significant limitations of surveys as a research method. I think the EA community risks falling into a trap on this topic, thinking dubious data is better than none, when actually false data can literally cost lives. As previously, I would strongly suggest getting professional involvement.

Ah sorry Bernadette I misunderstood your first question!

I think 'pin down an explanation' was probably too strong on my part, because I definitely don't think it'd be conclusive and I do hope that we have some more qualitative research into this.

We do have professionals working on the survey this year (is that what you meant by professional involvement?) and I've sent your comment to them. They're far better placed to analyze this than me!

Thanks Georgie - I see where we were misunderstanding each other! That's great - research like this is quite hard to get right, and I think it's an excellent plan to have people with experience and knowledge about the design and execution as well as analysis involved. (My background is medical research as well as clinical medicine, and a depressing amount of research - including randomised clinical trials - is never able to answer the important question because of fundamental design choices. Unfortunately knowing this fact isn't enough to avoid the pitfalls. It's great that EA is interested in data, but it's vital we generate and analyse good data well.)

Please include a question about race. At the Effective Animal Advocacy Symposium this past weekend at Princeton, the 2015 EA Survey was specifically called out for neglecting to ask a question about the race of the respondents.

Thanks Eric, we spoke to Garrett about this too :)
