Over the last few years, I've noticed bits and pieces of effective altruism becoming mainstream. A couple of weeks ago, while watching a YouTube video on my smartphone, I saw an ad for the Beyond Burger, available at A&W locations across Canada. A&W is one of the biggest fast-food franchises in North America, and the Beyond Burger is a product of Beyond Meat, which has received support from the Good Food Institute, which in turn has received funding from the Open Philanthropy Project. This means effective altruism played a crucial role in the development of a consumer product that millions of people will be exposed to.
Artificial intelligence (AI) developments make headlines on a regular basis, especially regarding an age of automation looming in the near future. While concerns about existential risk from transformative AI are distinct from the AI issues most prominent in the public consciousness, whenever AI comes up in conversation I ask whether people have heard about the AI safety concerns raised by public figures like Elon Musk, Bill Gates, and Stephen Hawking. Most people I bring this up with have heard of these concerns, and have a positive rather than negative attitude toward the idea that AI development should be managed to minimize the chance it poses a threat to humanity's safety or security. This is all anecdotal, but in my everyday interactions with people outside EA, I'm surprised by how many have some level of awareness of AI safety; it's been at least a couple dozen people.
I imagine that because charities focused on helping the poor in the developing world are so common, public awareness of the global poverty alleviation efforts advocated by EA, relative to other charitable work in the developing world, is probably pretty low. But among my circles of friends who also participate in social movements or intellectual communities, such as the rationality community or various political and activist movements, most acquaintances and local friends I meet have already heard of effective altruism, and generally have a positive impression of EA topics like effective giving and of organizations like GiveWell.
While the phrase 'effective altruism' isn't on everyone's lips, it seems like a significant proportion of the population of Canada and the United States is aware of efforts to improve the world that effective altruism played an early hand in making happen. Overall, in the last couple of years, I've noticed connections to EA in parts of my everyday life unrelated to EA much more often. I don't know whether this predicts a spike in growth and awareness of EA among the general public in the near future, but I've found it very surprising just how noticeable the EA movement's early successes are, given how far and wide the things it has had a hand in have impacted the world. Does anyone else have a similar experience?
I agree with Habryka's caution, but I've started to see some of the same effects Evan mentions. Specifically, after seeing an EA friend do the same, I set up an IFTTT rule (the link may not work for you; IFTTT restricts sharing) that finds all Tweets using terms like "effective altruism" or "effective altruists".
Each morning, I get an email with the day's Tweets. Many of them are content from EA orgs, but some reveal conversations happening in corners of the internet that seem quite separate from the broader "EA community".
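(For anyone who prefers a scriptable alternative to IFTTT, a similar digest can be assembled against Twitter's v2 recent-search API. The sketch below is illustrative rather than a description of my actual rule: it assumes you have a developer bearer token in an environment variable I've called TWITTER_BEARER_TOKEN, and the query string and output format are just examples.)

```python
import os
import requests

# Assumes a Twitter API v2 bearer token; the variable name is illustrative.
BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]

# Twitter API v2 recent-search endpoint (covers roughly the last 7 days).
# The query mirrors the terms the IFTTT rule watches for.
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"
QUERY = '"effective altruism" OR "effective altruists"'


def fetch_recent_tweets(max_results=50):
    """Return a list of recent tweets matching QUERY."""
    response = requests.get(
        SEARCH_URL,
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"query": QUERY, "max_results": max_results},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("data", [])


if __name__ == "__main__":
    # Print a plain-text digest; a daily cron job could pipe this to email,
    # which is roughly what the IFTTT rule automates.
    for tweet in fetch_recent_tweets():
        print(f"- https://twitter.com/i/web/status/{tweet['id']}")
        print(f"  {tweet['text']}\n")
```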
Some of those conversations are negative, but most are positive; there is a slowly growing population of people who heard the term "effective altruism" at some point and now use it in conversations about giving without feeling the need to explain themselves. As our movement grows, this will have a lot of effects, good and bad, and it seems worth thinking about.
(If you decide to set up your own IFTTT rule for Twitter or anywhere else, my personal opinion is that it's better to avoid jumping into random conversations with strangers, especially if your goal is to "correct" a criticism they made. It won't work.)
Depending on the context, there could be many more people reading the conversation than the person who had the misconception. (IIRC, research into lurker:participant ratios in online conversations often comes up with numbers like 10:1 or 100:1.) If the misconception goes uncorrected then many more people could acquire it. I think correcting misconceptions online can be a really good use of time.