Over the last few years, I've noticed how bits and pieces of effective altruism have become mainstream. A couple of weeks ago, while watching a YouTube video on my smartphone, I saw an ad for the Beyond Burger, now available at A&W locations across Canada. A&W is one of the biggest fast food franchises in North America, and the Beyond Burger is a product from Beyond Meat, which has received support from the Good Food Institute, which in turn has received funding from the Open Philanthropy Project. This chain means effective altruism played a crucial role in the development of a consumer product that millions of people will be exposed to.

Artificial intelligence (AI) developments make headlines on a regular basis, especially regarding an age of automation looming in the near future. Concerns about existential risk from transformative AI are distinct from the AI issues most prominent in the public consciousness, but whenever AI comes up in conversation I ask whether people have heard about the AI safety concerns raised by public figures like Elon Musk, Bill Gates, and Stephen Hawking. Most people I ask have heard about them, and have a positive rather than negative attitude toward the idea that the development of AI should be managed to minimize the chance it poses a threat to humanity's safety or security. This is all anecdotal, but in my everyday life interacting with people outside EA, I'm surprised by how many have some level of awareness of AI safety: at least a couple dozen people so far.

I imagine that because charities focused on helping the poor in the developing world are so common, the general public's awareness of the global poverty alleviation efforts advocated by EA, relative to other charitable work in the developing world, is probably pretty low. But among my circles of friends who also participate in social movements or intellectual communities, such as the rationality community or various political and activist movements, most acquaintances and friends I meet locally have already heard of effective altruism, and generally have a positive impression of EA topics like effective giving and of organizations like GiveWell.

While the phrase 'effective altruism' isn't on everyone's lips, it seems like a significant proportion of the population of Canada and the United States is aware of efforts to improve the world that effective altruism had an early hand in making happen. Overall, in the last couple of years, I've noticed connections to EA in my everyday life, outside of any EA context, much more often. I don't know whether this predicts a spike in growth and awareness of EA among the general public in the near future. But I've found it very surprising just how noticeable the early successes of the EA movement are, given how far and wide the things EA has had a hand in have reached. Does anyone else have a similar experience?

Comments

While I do think EA has been spreading, I want to caution against generalizing from your personal social network to the broader population. As Scott Alexander put it:

According to Gallup polls, about 46% of Americans are creationists. Not just in the sense of believing God helped guide evolution. I mean they think evolution is a vile atheist lie and God created humans exactly as they exist right now. That’s half the country.
And I don’t have a single one of those people in my social circle. It’s not because I’m deliberately avoiding them; I’m pretty live-and-let-live politically, I wouldn’t ostracize someone just for some weird beliefs. And yet, even though I probably know about a hundred fifty people, I am pretty confident that not one of them is creationist. Odds of this happening by chance? 1/2^150 = 1/10^45 = approximately the chance of picking a particular atom if you are randomly selecting among all the atoms on Earth.
About forty percent of Americans want to ban gay marriage. I think if I really stretch it, maybe ten of my top hundred fifty friends might fall into this group. This is less astronomically unlikely; the odds are a mere one to one hundred quintillion against.
People like to talk about social bubbles, but that doesn’t even begin to cover one hundred quintillion. The only metaphor that seems really appropriate is the bizarre dark matter world.
I live in a Republican congressional district in a state with a Republican governor. The conservatives are definitely out there. They drive on the same roads as I do, live in the same neighborhoods. But they might as well be made of dark matter. I never meet them.

Filter bubbles are really strong. You are probably astronomically more likely to meet people who might have heard of Effective Altruism than the baseline suggests.

In the examples I was talking about, the ads ran in one of the biggest fast food franchises in the country, and the random people I talk to about AI safety are at bus stops and airports, so this isn't just from my social network. Like I said, it's only within my social network that a lot of people have heard the words 'effective altruism,' or know what they refer to. I was mostly talking about the things EA has impacted, like AI safety and the Beyond Burger, receiving a lot of public attention, even if EA doesn't receive credit. I took the attention those outcomes receive to be a sign of steps toward the movement's goals, and a good thing regardless of whether people have heard of EA.

Nice. And even more so if you broaden the definition of EAs to include people who would have been EAs if EA material had been available when they were in college, e.g. older people and mathematically inclined Quakers and Unitarian Universalists.

I agree with Habryka's caution, but I've been starting to see some of the same effects Evan mentions. Specifically, after seeing an EA friend do the same, I set up an IFTTT rule (the link may not work for you; IFTTT restricts sharing) that finds all Tweets using terms like "effective altruism" or "effective altruists".

Each morning, I get an email with the day's Tweets. Many of them are content from EA orgs, but some reveal conversations happening in corners of the internet that seem quite separate from the broader "EA community".
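
(For anyone who would rather script this than use IFTTT, here is a minimal sketch of the same kind of keyword filter in Python. It assumes you already have tweet texts from some source, since fetching them is outside the sketch, and it only shows the matching logic; the phrase list and function name are illustrative assumptions, not part of any real API.)

```python
import re

# Phrases to track; matching is case-insensitive and tolerates a hyphen
# or extra whitespace between the words.
PHRASES = ["effective altruism", "effective altruist"]
PATTERN = re.compile(
    "|".join(p.replace(" ", r"[\s\-]+") for p in PHRASES),
    re.IGNORECASE,
)

def mentions_ea(tweet_text: str) -> bool:
    """Return True if the tweet text mentions any tracked phrase."""
    return PATTERN.search(tweet_text) is not None

# Hypothetical usage: 'tweets' stands in for whatever source you have.
tweets = [
    "Just heard about effective altruism at a meetup",
    "Best burger I've had all year",
]
print([t for t in tweets if mentions_ea(t)])
```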

Some of those conversations are negative, but most are positive; there is a slowly growing population of people who heard the term "effective altruism" at some point and now use it in conversations about giving without feeling the need to explain themselves. As our movement grows, this will have a lot of effects, good and bad, and it seems worth thinking about.

(If you decide to set up your own IFTTT rule for Twitter or anywhere else, my personal opinion is that it's better to avoid jumping into random conversations with strangers, especially if your goal is to "correct" a criticism they made. It won't work.)

Depending on the context, there could be many more people reading the conversation than the person who had the misconception. (IIRC, research into lurker:participant ratios in online conversations often comes up with numbers like 10:1 or 100:1.) If the misconception goes uncorrected then many more people could acquire it. I think correcting misconceptions online can be a really good use of time.

Do you have rough data on quantity of tweets over time?

I've only been doing this for a few weeks, so not yet. I'm archiving all the emails I get, so eventually I should have a reasonable trend estimate. I've set a reminder to check in on this in six months.
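
(Once there are a few months of archive, a rough trend estimate is simple to compute. Below is a minimal sketch assuming the digest emails have been saved as plain-text files whose names start with the date, e.g. "2018-11-20.txt"; that naming scheme and the "digests" folder are illustrative assumptions on my part, not how IFTTT actually delivers anything.)

```python
from collections import Counter
from pathlib import Path

# Count daily mentions of the phrase across saved digest files.
counts = Counter()
for path in Path("digests").glob("*.txt"):
    day = path.name[:10]  # assumes filenames like "2018-11-20.txt"
    text = path.read_text(encoding="utf-8").lower()
    counts[day] += text.count("effective altruism")

# Print the day-by-day trend in chronological order.
for day in sorted(counts):
    print(day, counts[day])
```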

It should be possible to scrape Twitter for earlier dates.

From my Feedly, since July 13th 2018 there have been at least 1,650 tweets with the phrase "effective altruism" and 128 with the phrase "effective altruist".

As a side note, it seems there is a higher proportion of negative tweets containing "effective altruist" than "effective altruism".

I went back and looked at an earlier Feedly feed I had set up from 18th November 2015 until 3rd March 2016, and there were 2,123 mentions of "effective altruism". That is over 106 days, compared to 130 days in the current example.

I suspect a few tweets get cut off from my current Feedly, which might be one reason the count seems lower; it could also be that there was a bigger media push in 2015/2016.
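
(As a rough back-of-the-envelope comparison, here are the per-day rates implied by those two windows, taking the counts and date ranges above at face value:)

```python
# Per-day rates implied by the two Feedly windows described above.
old_rate = 2123 / 106  # 18 Nov 2015 to 3 Mar 2016: ~20.0 tweets/day
new_rate = 1650 / 130  # since 13 Jul 2018: ~12.7 tweets/day
print(round(old_rate, 1), round(new_rate, 1))
```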

I think with EAA, waves have been made in quite a depoliticised way. We can point to how GFI has supported investment and promoted products, but we can also look at the costs of this general approach. Going "mainstream" often seems to mean that we adopt and replicate the characteristics of that mainstream and nudge within it (or just align with it) rather than challenging it. This has informed much of effective altruism and how donations are made to larger organisations, particularly as issues of rights, anti-speciesism and veganism have been considered and often pushed aside. For instance, I doubt there are many rights advocates in the ACE top charities, or generally associated with effective altruism. Those perspectives are largely missing throughout EAA, and as far as I can tell they are neither sought out nor particularly welcome.

The emphasis for me has been a race to make short-term gains whilst medium- to longer-term projects have been marginalised or just not considered, in favour of approaches aligned to dominant ideologies around welfarism and "pragmatism": approaches particularly associated with Bruce Friedrich, Paul Shapiro, Nick Cooney and Matt Ball, and favoured by Peter Singer.

Another concern is how effective altruism continues to break issues down between individualism (or atomisation) and corporate campaigning from organisational perspectives, something which overlooks the nature of the general animal movement. I'm pleased that plant based burgers are more readily available these days, but this is perhaps not so much due to GFI but more to do with how people have helped promote them generally.

We can find positive things to consider about effective altruism, but there is a tendency to overlook some underlying issues which are important to think about in terms of a more complex form of effectiveness, and it is rare to see these types of issues considered and discussed. Perhaps not least because EAA has become somewhat distorted by a mainstream it has attempted to engage and influence.

Could you give an example of this point?

The emphasis for me has been a race to make short-term gains whilst medium- to longer-term projects have been marginalised or just not considered, in favour of approaches aligned to dominant ideologies around welfarism and "pragmatism".

My strong impression is that longer-term projects have become a much greater priority for funding over the last few years, in that EA organizations have focused more on research and community-building (projects with low short-term return) than on collecting donations and trying to appear in the media.

I may have a different idea of what constitutes "short-term gains", especially since I don't see why they would be inherently opposed to pragmatism, and would be curious to hear how you define the term // what specific events make you think this trend exists.

The emphasis for me has been a race to make short-term gains whilst medium- to longer-term projects have been marginalised or just not considered

ACE recently did an analysis of how resources are allocated in the farmed animal movement. You can see from figure 7 that ACE funding goes more towards building alliances and capacity (the "long-term" parts of their ontology) than funding in the movement more generally does.

(ACE argues that the amount is still too small. But it seems weird to criticize EAA for that, since ACE is doing better than the rest of the movement, and seems to be planning to do even more.)

In relation to short / medium term, I am saying that short-term gains are more geared toward welfarism and *veg* approaches than toward projects such as rights / anti-speciesism in terms of anti-exploitation. So whilst we could view conventional EAA interventions as part of a bigger picture, we're not exploring how these issues fit together in a broader context, particularly in terms of different moral theories or how different perspectives aim to reduce suffering. In the sense of what is funded / emphasised through effective altruism, there are conflicting overarching ideas which in my view need to be considered and resolved in order to be inclusive / representative.

For most organisations which already fit with "pragmatism" this is a bit of a non-issue. However, those which are more politicised can be marginalised in relation to how funding is allocated and how powerful alliances are constructed around ideology. This, I would argue, has happened with most of the large organisations considered to be EA-aligned, and it overlooks how narrow the framework for intervention actually is. To illustrate this point we can look at where problems have arisen with organisations ACE has considered evaluating, such as A Well Fed World.

"Declined to be reviewed/published for the following reason(s):

  • They disagree with Animal Charity Evaluators’ evaluation criteria, methodology, and/or philosophy.
  • They do not support Animal Charity Evaluators’ decision to evaluate charities relative to one another."

Despite this outcome, they don't appear a good fit for conventional EAA because the work they do is difficult to measure and the groups they support are so small that it is difficult to measure their impact going forward (positive or negative). However, that potential impact (in terms of including different perspectives) is diminished further by favouring conventionally aligned organisations over those not part of the EAA family (which isn't to say the latter don't tacitly accept EA principles); the favoured organisations then grow at a much faster rate, potentially crowding out other ideas and organisations. For those resourced and largely ideologically aligned, I'm thinking of Animal Equality, The Humane League, Good Food Institute, Mercy for Animals, ProVeg, Reducetarian Foundation, Albert Schweitzer Foundation, Open Cages, and Compassion In World Farming.

What happens here is that EAs tend to point toward funding directed at cat / dog rescues over farmed animal protection, and it is correct to note how egregiously disproportionate that continues to be. However, within the somewhat delicate and nascent space of farmed animal protection, funding a small number of ideologically aligned groups has been disruptive to the movement as a whole (for instance in terms of affordability of conferences, sponsorship, outreach and so on), and this impact hasn't been factored in (though it remains to be seen whether the new ACE Effective Animal Advocacy project will address some of these issues, perhaps only implicitly). A further issue is that if it doesn't, and if EA Funds doesn't shift beyond Lewis' general considerations, then the new panel for EA Funds will represent a missed opportunity. Lewis might be concerned about whether people would be a good fit and could agree on certain issues, but it seems unfortunate that conclusion was drawn before an attempt was made to really challenge the foundations of EAA, for instance in relation to normative uncertainty. That said, it depends on how much time Lewis would have to oversee that, and I suspect not enough to make it a viable possibility, which I think illustrates the reasoning that underpins the new approach.

Traditionally, organisations that are more challenging to the "mainstream" have often struggled for funding (and so, by the lights of many, aren't very successful), and are often too small for Open Philanthropy to consider, or for EA Funds, at least up until now, because of the time constraints involved (time spent per dollar donated). Indeed, it is challenging to present a case for many organisations other than that it is important to have multiple perspectives / organisations in a movement, though, as Lewis pointed out in relation to EA Funds, he also worries about discord. But this isn't a reason not to do that more challenging work, and neither are time constraints. If anything, these are fundamental considerations that ought to have been incorporated at the inception of EAA and the Open Philanthropy Animal Welfare Program, but it doesn't appear to me they ever really were. Partly this is because EA leaned heavily on the conventional organisational leaders of the larger animal organisations prior to EAA, and there isn't much evidence those leaders took these kinds of considerations on board either. Particularly I'm thinking of Paul Shapiro, Wayne Pacelle, Nick Cooney, Bruce Friedrich and Matt Ball, who largely preferred an agenda and approach grounded in "pragmatism", something which was quite appealing to many utilitarians but not to rights advocates, who became unflatteringly associated with terms such as extremist, fundamentalist, absolutist, puritan and hardliner in the associated rhetoric. Their value was further diminished because a lack of pragmatism seemed to become equated with a lack of effectiveness.

None of this is to say that it is "wrong" to fund any of the top or standout ACE charities (for instance) from an EA perspective, but taken together it's a stretch even for effective altruism. So from my view funding is disproportionate, though this presumably also reflects the view of the EAA trust network. If we had a better idea of who exactly that network is, including who CEA was consulting, then it would be easier to point out where adjustments could be made, so that we might have diversity of viewpoints and representation within an EA framework, or we could at least consider how it could function differently under a variety of scenarios / counterfactuals. Otherwise we have no real idea of how effective we are being collectively; we are instead looking at things from a fairly conventional EAA view, which from my perspective is loaded toward de-politicised short-term gains associated with "veg" and welfare approaches.

Don't forget the $13.3M grant the FHI got a few weeks ago; I felt like that was huge for EA.

I didn't know about that. That's incredible!
