Hi, 

I’ve been thinking about the current direction of the effective altruism (EA) movement, and I feel that it places heavy emphasis on donating to effective charities as the primary lever to improve the world (aside from its strange focus on some sci-fi doomsday scenarios regarding AI). While this focus makes sense given EA’s commitment to measurable impact, I’ve been wondering why there isn’t more attention paid to reducing harm through consumer choices—specifically, by avoiding or boycotting industries that actively cause harm to people or the planet.

The one area where EA consistently promotes boycotting is veganism, which encourages avoiding animal products to reduce animal suffering. While this is important, I rarely see similar discussions about boycotting companies that contribute to human rights violations or environmental destruction.

Take Nestlé, for example. Many people boycott the company due to its long record of controversies. A major concern is Nestlé’s role in water privatization. The company has extracted groundwater from drought-stricken regions—including parts of California—at extremely low cost, even when local communities object. Critics argue that Nestlé treats water, a basic human necessity, as a commodity for profit.

Nestlé has also faced decades of allegations regarding labor abuses, particularly in the cocoa industry. Reports of child labor and unsafe working conditions in West Africa have persisted despite company pledges to improve. Progress has been slow, and many believe Nestlé has known about these issues far longer than its public commitments suggest.

The infant formula scandal remains another defining example. Nestlé aggressively marketed baby formula in developing countries, sometimes implying it was healthier than breast milk. Families often diluted formula to stretch supply or mixed it with unsafe water, leading to malnutrition and illness in infants. Although this scandal is older, it continues to symbolize corporate irresponsibility.

Environmental and ethical criticisms add to the list. Nestlé’s massive production of bottled water and packaged foods contributes substantially to plastic waste, and the company has been tied to deforestation, carbon emissions, and unsustainable agricultural practices in its palm oil, cocoa, and dairy supply chains. Its marketing tactics—especially toward children in low-income countries—and concerns over animal welfare further motivate people to avoid its products.

Despite all this, I rarely see EA discussions about Nestlé or similar companies. Veganism is widely promoted, but the human-centered ethical concerns around businesses like Nestlé seem far less visible within EA spaces.

Coca-Cola faces similar public criticism. Environmental concerns are a major factor. In parts of India, Mexico, and Africa, communities have accused Coca-Cola of depleting groundwater and harming local agriculture due to the immense water demands of bottling plants. Labor-rights controversies have also surfaced—ranging from union-busting to allegations of violence linked to bottling operations in certain countries.

There are also political and social-justice dimensions. Some advocacy groups, including those aligned with the BDS movement, have called for boycotting Coca-Cola due to operations tied to contested regions involved in the Israel–Palestine conflict. For these individuals, purchasing Coca-Cola products feels like indirectly supporting policies they find unjust.

More broadly, I have noticed that EA spaces rarely discuss the BDS movement, boycotts related to human rights issues, or charitable giving aimed at supporting Palestinians. It makes me wonder whether the community holds an unintentional bias, or whether EA views these charities as ineffective, or something else entirely.

Boycotts extend even beyond global corporations. Avocados, for instance, have become controversial because cartel groups in certain regions of Mexico—particularly Michoacán—have infiltrated and extorted the avocado industry. Some consumers avoid avocados to avoid indirectly supporting criminal organizations that exploit farmers and use violence to control the supply chain.

Another example is Tesla, which some people boycott specifically because of human-rights concerns linked to cobalt mining. A significant portion of the cobalt used in lithium-ion batteries comes from the Democratic Republic of Congo, where mining often involves dangerous working conditions, low wages, and, in some cases, child labor. Even though Tesla—like most EV manufacturers—has taken steps to reduce or track cobalt use, critics argue that the supply chain remains opaque and that cobalt-dependent battery production continues to rely on exploitative labor systems. For these individuals, avoiding Tesla is a way to avoid contributing to a battery industry tied to human suffering.

Grok and other large language models developed by xAI are also subject to boycott discussions, mainly due to environmental concerns. Training and operating advanced AI models consumes enormous amounts of electricity and water, and requires large data centers that generate substantial carbon emissions. Some researchers estimate that training a single cutting-edge model can emit as much carbon as several cars do over their entire lifetimes. For this reason, some people avoid using Grok or refuse to financially support xAI, believing that doing so reduces demand for energy-intensive AI systems that contribute to climate damage.

An entire list of ongoing boycotts can be found on Ethical Consumer’s website ( https://www.ethicalconsumer.org/ethicalcampaigns/boycotts ), and many revolve around human rights, environmental harm, or the Israel–Palestine conflict. Yet EA discussions tend to focus almost exclusively on animal-related boycotts.

So my question is: why does EA emphasize one type of boycott (avoiding animal products) but largely ignore other forms of harm reduction through consumer behavior? I’m not trying to downplay the importance of animal welfare. However, it strikes me as inconsistent that EA strongly promotes veganism while offering little guidance on boycotts related to human rights abuses.

I understand that no one can make perfectly ethical choices and that tradeoffs are inevitable. We can’t avoid every harmful industry, and attempting total purity would be unsustainable. But does this mean that consumer boycotts have so little impact that they’re not worth promoting within EA? If so, why is veganism treated differently?

One could argue that individuals can “offset” their harmful consumption by donating to effective charities, but this mindset feels uncomfortable to me. If we can easily avoid a harm, shouldn’t we? It seems strange to justify a questionable purchase by saying, “I’ll just donate to make up for it.” At the same time, I agree that people need reasonable freedom—just as we don’t expect anyone to donate their entire income or adopt extreme asceticism.

Still, some boycotts do seem capable of yielding meaningful marginal reductions in harm. For example, reducing consumption of Nestlé chocolate at least slightly decreases demand for cocoa linked to child labor. Other boycotts may be less meaningful—such as avoiding a company solely because it has stores in Israel without any broader connection to harm.

Overall, I’m trying to understand whether EA’s focus on donations over boycotts is due to evidence, practicality, impact measurement, or something else. And if boycotts generally have minimal effect, why is veganism considered an exception?

In conclusion, what exactly am I expected to boycott to be considered an effective altruist, and what freedoms are still mine to enjoy? Naturally, I want to live with as much personal freedom as possible, but I also don’t want to neglect my moral obligations. Being vegan is already a significant commitment for me, and I’m unsure whether I’m ready to add avoiding AI tools, avocados, Coca-Cola, or Nestlé products to the list. I still want to have a life.

Thanks.

Comments

EA is about more than just "a commitment to measurable impact" -- it's also about trying to find the most /effective/ ways to help others, which means investigating (and often doing back-of-the-envelope "importance, neglectedness, tractability" estimates) to prioritize the most important causes.

Take your Nestle example: although they make a convenient big corporate villain and often get brought up in debates about drought in California and elsewhere, they aren't actually a big contributor to the problem of drought, since their water consumption is such a minuscule fraction of the state's total water use.  Rather than getting everyone to pressure Nestle, it would be much more effective for individuals to spend their time lobbying for slightly changing the rules around how states like California regulate and price water, or lobbying for the federal government to remove wasteful farm subsidies that encourage water waste on a much larger scale.

See here for more on this issue: https://slatestarcodex.com/2015/05/11/california-water-you-doing/
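
To make that scale point concrete, here's a minimal back-of-the-envelope sketch in Python. The specific figures are rough, order-of-magnitude assumptions (not taken from the linked article), just to illustrate why bottled-water extraction barely registers next to agricultural water use:

```python
# Rough, order-of-magnitude assumptions for illustration only.
ACRE_FOOT_GALLONS = 325_851   # US gallons in one acre-foot

# Assumed: a Nestle-style bottling operation extracts on the order of
# tens of millions of gallons per year from a given watershed.
bottling_gallons_per_year = 50_000_000
bottling_acre_feet = bottling_gallons_per_year / ACRE_FOOT_GALLONS

# Assumed: California agriculture uses on the order of tens of millions
# of acre-feet of water per year.
agriculture_acre_feet = 30_000_000

share = bottling_acre_feet / agriculture_acre_feet
print(f"Bottling: ~{bottling_acre_feet:,.0f} acre-feet/year")
print(f"Agriculture: ~{agriculture_acre_feet:,} acre-feet/year")
print(f"Bottling is roughly {share:.1e} of agricultural use")
```

Even if the assumed extraction figure is off by 10x in either direction, the conclusion (a tiny fraction of a percent) doesn't change -- which is the point of doing the estimate at all.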

Some EAs might also add that the overall problem of water scarcity in California, or the problem of misleading baby formula ads (note the science is actually not clear on whether breastmilk is any better than formula; they seem about the same for babies' health! https://parentdata.org/what-the-data-actually-says-about-breastfeeding/ ), or the problem that Coca Cola does business in Israel (doesn't it do business basically everywhere??), are simply less severe than the problem of animal suffering.  Although of course this depends on one's values.

Some other considerations about boycotts:

  • Many already consider veganism to be a pretty extreme constraint on one's diet that makes it harder to maintain a diet of tasty, affordable, easy-to-cook, nutritious food.  Add in "also no avocados, and nothing made by Nestle or Coca Cola, and nothing from this other long list of BDS companies, and also...", and this no longer sounds like an easy, costless way to make things a little better!  (Indeed, it starts to look more like a costly signal of ideological purity.)  https://www.lesswrong.com/posts/Wiz4eKi5fsomRsMbx/change-my-mind-veganism-entails-trade-offs-and-health-is-one
  • Within the field of animal welfare, EA has actually pioneered the strategy of DOWNPLAYING the importance of veganism and other personal choices, in favor of a stronger emphasis on corporate pressure campaigns to get companies to adopt incrementally better standards for their animals.  This has turned out to be an extremely successful tactic (billions of chickens' lives improved over just a few years, meanwhile after decades of vegan activism the percentage of vegans/vegetarians in the USA remains about the same low number it's always been).  This lesson would seem to indicate that pushing for mass personal change (eg, to reduce climate emissions by boycotting flights) is perhaps generally less effective than other approaches (like funding scientific research into greener jet fuel, or lobbying for greater public investment in high-speed rail infrastructure).  
  • TBH, the way a lot of advocates talk about consumer boycotts makes me think they believe in the (satirical) "Copenhagen interpretation of ethics", the theory that if you get involved in anything bad in any way, however tangential (like drinking a soda made by the same company who also sells sodas to Israelis, who live in a country that is oppressing the people of the West Bank and Gaza), that means you're "entangled" with the bad thing so it's kinda now your fault, so it's important to stay pure and unentangled so nobody can blame you.  https://forum.effectivealtruism.org/posts/QXpxioWSQcNuNnNTy/the-copenhagen-interpretation-of-ethics  I admit that following the Copenhagen Interpretation of ethics is a great way to avoid anyone ever blaming you for being complicit in something bad.  But EA is about more than just avoiding blame -- it's about trying to help others the most with the resources we have available.  That often means taking big actions to create goodness in the world, rather than having the "life goals of dead people" and simply trying to minimize our entanglement with bad things: https://thingofthings.substack.com/p/the-life-goals-of-dead-people
  • The EA community is pretty small.  Even if all 10,000 of us stopped eating Nestle products, that wouldn't have a very large impact, and it would draw attention away from worthier pursuits, like trying to incubate charities directly serving people in the poorest nations, instead of worrying that maybe a few cents of the five dollars I paid for avocado toast at a restaurant might work its way into the hands of a Mexican cartel.

Fun fact: it's actually this same focus on finding causes that are important (potentially large in scale), neglected (not many other people are focused on them) and tractable, that has also led EA to take some "sci-fi doomsday scenarios" like wars between nuclear powers, pandemics, and AI risk, seriously.  Consider looking into it sometime -- you might be surprised how plausible and deeply-researched these wacky, laughable, uncool, cringe, "obviously sci-fi" worries really are! (Like that countries might sometimes go to war with each other, or that it might be dangerous to have university labs experimenting with creating deadlier versions of common viruses, or that powerful new technologies might sometimes have risks.)

When I mentioned Nestlé, my point was that the situation is somewhat analogous to veganism. For example, if someone buys chocolate produced with slave labor, that purchase contributes to supply and demand: the more such chocolate is bought, the more of it is produced. Veganism follows a similar logic—purchasing animal products signals demand, which leads to more animals being raised and slaughtered in their place. So individual consumption choices can, at least in theory, causally contribute to additional harm.

To be honest, I feel like you focused on the weaker examples I gave, such as Nestlé’s role in the California drought or Coca-Cola’s operations in Israel. There are stronger, more morally serious concerns, such as the documented use of child and slave labor in parts of the chocolate industry. Financially supporting that industry seems likely to increase its operations, total output, and ultimately the number of exploited laborers and total suffering. Similarly, Coca-Cola has faced major controversies over water extraction in Africa and previously in India, where communities experienced significant harm from overuse of local water resources. These situations suggest that reducing demand for specific products may indeed reduce the scale of the associated harm, even if boycotts alone are not enough to produce systemic change. I agree that boycotts rarely succeed in transforming entire industries unless large numbers of people participate, but even small reductions in demand might still correlate with smaller harms—just as veganism does not end factory farming but may marginally reduce the total number of animals killed. So while I understand concerns about “purity culture,” I think the issue is more complex: some personal choices do have real, non-negligible downstream consequences for suffering.

That said, I am genuinely curious about what you consider to fall within the realm of morally permissible personal actions. In my original post, I emphasized that I want to preserve as much personal freedom as possible without completely violating moral obligations. But I’m unsure where the line should be drawn. What counts as negligible harm, and what counts as morally relevant? With something like slave labor in the chocolate supply chain, the impact of an individual purchase is very hard to quantify. How does one even begin to calculate that impact with any precision?

Finally, I remain quite skeptical of the heavy emphasis on “longtermism.” I’m not denying that nuclear war, pandemics, or other existential risks are important, but these seem like issues far beyond the influence of any individual person. They are mainly the domain of governments, policymakers, and international institutions, and addressing them relies on trusting political systems rather than individual actions. By contrast, donating to save kids from malaria or starvation has clear, measurable, immediate effects on saving lives. If I donate money to Gaza, I know my money is being used to put food on the table for innocent kids. I don't exactly know what my money would be used for if it went to AI academic "research". My concern is that focusing on very low-probability, extremely high-stakes scenarios resembles a classic problem in consequentialism: when potential outcomes are enormous enough, even a tiny probability can outweigh more certain but smaller benefits. It’s like the thought experiment of a cult leader claiming the world will end unless we sacrifice one person. Even if there is a 99.99% chance he is lying, the small remaining probability—if taken seriously—would mathematically “justify” the sacrifice, even though this conclusion feels deeply counterintuitive. Longtermism can sometimes seem to rely on similar logic, where speculative future catastrophes overshadow clearly identifiable present-day suffering.

Hi; thanks for this thoughtful reply!

I agree that with chocolate and exploited labor, the situation is similar to veganism insofar as if you buy some chocolate, then (via the mechanisms of supply and demand) that means more chocolate is gonna be harvested (although not necessarily harvested by that particular company, right? So I think the argument works best only if the entire field of chocolate production is shot through with exploited labor?).  Although, as Toby Chrisford points out in his comment, not all boycott campaigns are like this.

Thoughts on chocolate in particular

Reading the wikipedia page for chocolate & child labor, I agree that this seems like a more legit cause than "water privatization" or some of the other things I picked on.  But if you are aiming for a veganism-style impact through supply and demand, it makes more sense to boycott chocolate in general, not a specific company that happens to make chocolate.  (Perplexity says that Nestle controls only a single-digit percentage of the world's chocolate market, "while the vast majority is produced by other companies such as Mars, Mondelez, Ferrero, and Hershey" -- nor is Nestle even properly described as a chocolate company, since only about 15% of their revenue comes from chocolate!  More comes from coffee, other beverages, and random other foods.)

In general I just get the feeling that you are choosing what to focus on based on which companies have encountered "major controversies" (ie charismatic news stories), rather than making an attempt to be scope-sensitive or to think strategically.

"With something like slave labor in the chocolate supply chain, the impact of an individual purchase is very hard to quantify."

Challenge accepted!!!  Here are some random Fermi calculations that I did to help me get a sense of scale on various things (a rough code version follows the list):

  • Google says that the average American consumes 100 lbs of chicken a year, and broiler chickens produce about 4 lbs of meat, so that's 25 broiler chickens per year.  Broiler chickens only live for around 8 weeks, so those 25 chickens work out to about four broiler chickens living in misery in a factory farm at any given time, per American.  Toss in 1 egg-laying hen to produce about 1 egg per day, and that's five chickens per American.
    • How bad is chicken suffering?  Idk, not that bad IMO, chickens are pretty simple.  But I'm not a consciousness scientist (and sadly, nor is anybody else), so who knows!
  • Meanwhile with chocolate, the average American apparently consumes about 15 pounds of chocolate per year.  (Wow, that's a lot, but apparently Europeans eat even more??) The total worldwide market for chocolate is 16 billion pounds per year.  Wikipedia says that around 2 million children are involved in child labor harvesting cocoa in West Africa, while Perplexity (citing this article) estimates that "Including farmers’ families, workers in transport, trading, processing, manufacturing, marketing, and retail, roughly 40–50 million people worldwide are estimated to depend on the cocoa and chocolate supply chain for their income or employment."
    • So the average American's share of global consumption (15 lbs out of 16 billion lbs, or about 1 billionth) is supporting the child labor of about 1 billionth × 2 million = 0.002 West African children.  Or, another way of thinking about this is that (assuming child laborers work 12-hour days every day of the year, which is probably wrong but idk), the average American's yearly chocolate consumption supports about 9 hours of child labor, plus about 180 hours of labor from all the adults involved in "transport, trading, processing, manufacturing, marketing, and retail", who are hopefully mostly all legitimately employed.
  • Sometimes for a snack, I make myself a little bowl of mixed nuts + dark chocolate chips + blueberries.  I buy these little 0.6-pound bags of dark chocolate chips for $4.29 at the grocery store (which is about as cheap as it's possible to buy chocolate); each one will typically last me a couple months.  It's REALLY dark chocolate, 72% cacao, so maybe in terms of child-labor-intensity, that's equivalent to 4x as much normal milk chocolate, so child-labor-equivalent to like 2.5 lbs of milk chocolate?  So each of these bags of dark chocolate involves about 1.5 hours of child labor.
    • The bags cost $4.29, but there is significant consumer surplus involved (otherwise I wouldn't buy them!)  Indeed, I'd probably buy them even if they cost twice as much!  So let's say that the cost of significantly cutting back my chocolate consumption is about $9 per bag.
    • So if I wanted to reduce child labor, I can buy 1 hour of a child's freedom at a rate of about $9 per bag / 1.5 hours per bag = $6 per hour.  (Obviously I can only buy a couple hours this way, because then my chocolate consumption would hit zero and I can't reduce any more.)
      • That's kind of expensive, actually!  I only value my own time at around $20 - $30 per hour!
      • And it looks doubly expensive when you consider that GiveWell top charities can save an African child's LIFE for about $5000 in donations -- assuming 50 years life expectancy and 16 hours awake a day, that's almost 300,000 hours of being alive versus dead.   Meanwhile, if me and a bunch of my friends all decided to take a $5000 hit to our lifestyles in the form of foregone chocolate consumption, instead of making $5000 of antimalarial bednet donations, that would only free up something like 833 hours of an African child doing leisure versus labor (which IMO seems less dramatic than being alive versus dead).
      • One could imagine taking a somewhat absurd "offsetting" approach, by continuing to enjoy my chocolate but donating 3 cents to Against Malaria Foundation for each bag of chocolate I buy -- therefore creating 1.8 hours of untimely death --> life in expectation, for every 1.5 hours of child labor I incur.
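
Here is the same back-of-the-envelope math as a small Python script, so the assumptions are easy to poke at. Every number is just the rough estimate from the bullets above (nothing more authoritative than that), and rounding differences mean the outputs won't exactly match the hand-rounded figures:

```python
# Fermi estimate of the child labor embodied in one person's chocolate habit,
# using the ballpark figures from the bullets above (all loose assumptions).

us_chocolate_lbs_per_year = 15            # average American consumption
world_chocolate_lbs_per_year = 16e9       # total worldwide market
cocoa_child_laborers = 2e6                # estimated child laborers, West Africa
child_hours_per_year = 12 * 365           # assume 12-hour days, every day (probably wrong)

# One American's share of global chocolate demand (~1 billionth)
share = us_chocolate_lbs_per_year / world_chocolate_lbs_per_year

labor_hours = share * cocoa_child_laborers * child_hours_per_year
print(f"Child labor supported per American per year: ~{labor_hours:.0f} hours")  # ~8-9

# One 0.6 lb bag of 72% dark chips, counted as ~4x normal milk chocolate
bag_milk_equivalent_lbs = 0.6 * 4
hours_per_bag = labor_hours * bag_milk_equivalent_lbs / us_chocolate_lbs_per_year
print(f"Per bag of dark chips: ~{hours_per_bag:.1f} hours of child labor")       # ~1.3-1.5

# Cost of forgoing a bag, including consumer surplus (~2x the $4.29 sticker price)
cost_per_bag = 9.0
print(f"Implied cost of averting child labor: ~${cost_per_bag / hours_per_bag:.0f}/hour")

# Comparison: a ~$5000 GiveWell-style donation saving one life
life_hours = 50 * 365 * 16   # 50 years of life, 16 waking hours per day
print(f"Waking hours of life bought per dollar donated: ~{life_hours / 5000:.0f}")
print(f"Hours of child labor averted per dollar of forgone chocolate: "
      f"~{hours_per_bag / cost_per_bag:.2f}")
```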

Sorry to be "that guy", but is child labor even bad in this context?  Is it bad enough to offset the fact that trading with poor nations is generally good?

  • Obviously it's bad for children (or for that matter, anyone), who ought to be enjoying their lives and working to fulfill their human potential, to be stuck doing tedious, dangerous work. But, it's also bad to be poor!
  • Most child labor doesn't seem to be slavery -- the same wikipedia page that cites 2 million child laborers says there are estimated to be only 15,000 child slaves. (And that number includes not just cocoa, but also cotton and coffee.)  So, most of it is more normal, compensated labor. (Albeit incredibly poorly compensated by rich-world standards -- but that's everything in rural west africa!)
  • By analogy with classic arguments like "sweatshops are good actually, because they are an important first step on the ladder of economic development, and they are often a better option for poor people than their realistic alternatives, like low-productivity agricultural work", or the infamous Larry Summers controversy (no, not that one, the other one.  no, the OTHER other one.  no, not that one either...) about a World Bank memo speculating about how it would be a win-win situation for developed countries to "export more pollution" to poorer nations, doing the economic transaction whereby I buy chocolate and it supports economic activity in West Africa (an industry employing 40 million people, only 2 million of whom are child laborers) seems like it might be better than not doing it.  So, the case for a personal boycott of chocolate seems weaker than a personal boycott of factory-farmed meat (where many of the workers are in the USA, which has much higher wages and much tighter / hotter labor markets).

"I am genuinely curious about what you consider to fall within the realm of morally permissible personal actions."

This probably won't be a very helpful response, but for what it's worth:

  • I don't think the language of moral obligations and permissibility and rules (what people call "deontology") is a very good way to think about these issues of diffuse, collective, indirect harms like factory farming or labor exploitation.
    • As you are experiencing, deontology doesn't offer much guidance on where to draw the line when it comes to increasingly minor, indirect, or incidental harms.
    • It's also not clear what to do when there are conflicting effects at play -- if an action is good for some reasons but also bad for other reasons.
    • Deontology doesn't feel very scope-sensitive -- it just says something like "don't eat chocolate if child labor is involved!!" and nevermind if the industry is 100% child labor or 0.01% child labor.  This kind of thinking seems to have a tendency to do the "copenhagen theory of ethics" thing where you just pile on more and more rules in an attempt to avoid being entangled with bad things, when instead it should be more concerned with identifying the most important bad things and figuring out how to spend extra energy addressing those, even while letting some more minor goals slide.
  • I think utilitarianism / consequentialism is a better way to think about diffuse, indirect harms, because it's more scope-sensitive and it seems to allow for more grey areas and nuance. (Deontology just says that you must do some things and mustn't do other forbidden things, and is neutral on everything else.  But consequentialism rates actions on a spectrum from super-great to super-evil, with lots of medium shades in-between.)  It's also better at balancing conflicting effects -- just add them all up!
  • Of course, trying to live ordinary daily life according to 100% utilitarian thinking and ethics feels just as crazy as trying to live life according to 100% deontological thinking.  Virtue ethics often seems like a better guide to the majority of normal daily-life decisionmaking: try to behave honorably, try to be caring and prudent and et cetera, doing your best to cultivate and apply whatever virtues seem most relevant to the situation at hand.
  • Personally, although I philosophically identify as a pretty consequentialist EA, in real life I (and, I think, many people) rely on kind of a mushy combination of ethical frameworks, trying to apply each framework to the area where it's strongest.
    • As I see it, that's virtue ethics for most of ordinary life -- my social interactions, how I try to motivate myself to work and stay healthy, what kind of person I aim to be.
    • And I try to use consequentialist / utilitarian thinking to figure out "what are some of the MOST impactful things I could be doing, to do the MOST good in the world".  I don't devote 100% of my efforts to doing this stuff (I am pretty selfish and lazy, like to have plenty of time to play videogames, etc), but I figure if I spend even a smallish fraction of my time (like 20%) aimed at doing whatever I think is the most morally-good thing I could possibly do, then I will accomplish a lot of good while sacrificing only a little.  (In practice, the main way this has played out in my actual life is that I left my career in aerospace engineering in favor of nowadays doing a bunch of part-time contracting to help various EA organizations with writing projects, recruiting, and other random stuff.  I work a lot less hard in EA than I did as an aerospace engineer -- like I said, I'm pretty lazy, plus I now have a toddler to take care of.)
    • I view deontological thinking as most powerful as a coordination mechanism for society to enforce standards of moral behavior.  So instead of constantly dreaming up new personal moral rules for myself (although like everybody I have a few idiosyncratic personal rules that I try to stick to), I try to uphold the standards of moral behavior that are broadly shared by my society.  This means stuff like not breaking the law (except for weird situations where the law is clearly unjust), but also more unspoken-moral-obligation stuff like supporting family members, plus a bunch of kantian-logic stuff like respecting norms, not littering, etc (ie, if it would be bad if everyone did X, then I shouldn't do X).
      • But when it comes to pushing for new moral norms (like many of the proposed boycott ideas) rather than respecting existing moral norms, I'm less enthusiastic.  I do often try to be helpful towards these efforts on the margin, since "marginal charity" is cheap.  (At least I do this when the new norm seems actually-good, and isn't crazy virtue-signaling spirals like for example the paper-straws thing, or counterproductive in other ways like just sapping attention from more important endeavors or distracting from the real cause of a problem.)  But it usually doesn't seem "morally obligatory" (ie, in my view of how to use deontology, "very important for preserving the moral fabric of society and societal trust") to go to great lengths to push super-hard for the proposed new norms.  Nor does it usually seem like the most important thing I could be doing.  So beyond a token, marginal level of support for new norms that seem nice, I usually choose to focus my "deliberately trying to be a good person" effort on trying to do whatever is the most important thing I could be doing!

Thoughts on Longtermism

I think your final paragraph is mixing up two things that are actually separate:

1. "I'm not denying [that x-risks are important] but these seem like issues far beyond the influence of any individual person. They are mainly the domain of governments, policymakers... [not] individual actions."

2. "By contrast, donating to save kids from malaria or starvation has clear, measurable, immediate effects on saving lives."

I agree with your second point that sadly, longtermism lacks clear, measurable, immediate effects.  Even if you worked very hard and got very lucky and accomplished something that /seems/ like it should be obviously great from a longtermist perspective (like, say, establishing stronger "red phone"-style nuclear hotline links between the US and Chinese governments), there's still a lot of uncertainty about whether this thing you did (which maybe is great "in expectation") will actually end up being useful (maybe the US and China never get close to fighting a nuclear war, nobody ever uses the hotline, so all the effort was for naught)!  Even in situations where we can say in retrospect that various actions were clearly very helpful, it's hard to say exactly HOW helpful.  Everything feels much more mushy and inexact.

Longtermists do have some attempted comebacks to this philosophical objection, mostly along the lines of "well, your near-term charity, and indeed all your actions, also affect the far future in unpredictable ways, and the far future seems really important, so you can't really escape thinking about it".  But also, on a much more practical level, I'm very sympathetic to your concern that it's much harder to figure out where to actually donate money to make AI safety go well than to improve the lives of people living in poor countries or help animals or whatever else -- the hoped-for paths to impact in AI are so much more abstract and complicated, one would have to do a lot more work to understand them well, and even after doing all that work you might STILL not feel very confident that you've made a good decision.  This very situation is probably the reason why I myself (even though I know a ton about some of these areas!!) haven't made more donations to longtermist cause areas.

But I disagree with your first point, that it's beyond the power of individuals to influence x-risks or do other things to make the long-term future go well, rather it's up to governments. And I'm not just talking about individual crazy stories like that one time when Stanislav Petrov might possibly have saved the world from nuclear war.  I think ordinary people can contribute in a variety of reasonably accessible ways:

  • I think it's useful just to talk more widely about some of the neglected, weird areas that EA works on -- stuff like the risk of power concentration from AI,  the idea of "gradual disempowerment" over time, topics like wild animal suffering, the potential for stuff like prediction markets and reforms like approval voting to improve the decisionmaking of our political institutions, et cetera.  I personally think this stuff is interesting and cool, but I also think it's societally beneficial to spread the word about it.  Bentham's Bulldog is, I think, an inspiring recent example of somebody just posting on the internet as a path to having a big impact, by effectively raising awareness of a ton of weird EA ideas.
  • If you're just like "man, this x-risk stuff is so fricking confusing and disorienting, but it does seem like in general the EA community has been making an outsized positive contribution to the world's preparedness for x-risks", then there are ways to support the EA community broadly (or other similar groups that you think are doing good) -- either through donations, or potentially through, like, hosting a local EA meetup, or (as I do) trying to make a career out of helping random EA orgs with work they need to get done.
  • Some potential EA cause areas are niche enough that it's possible to contribute real intellectual progress by, again, just kinda learning more about a topic where you maybe bring some special expertise or unique perspective to an area, and posting your own thoughts / research on a topic.  Your own post (even though I disagree with it) is a good example of this, as are so many of the posts on the Forum!  Another example that I know well is the "EcoResilience Initiative", a little volunteer part-time research project / hobby run by my wife @Tandena Wagner -- she's just out there trying to figure out what it means to apply EA-style principles (like prioritizing causes by importance, neglectedness, and tractability) to traditional environmental-conservation goals like avoiding species extinctions.  Almost nobody else is doing this, so she has been able to produce some unique, reasonably interesting analysis just by sort of... sitting down and trying to think things through!

Now, you might reasonably object: "Sure, those things sound like they could be helpful as opposed to harmful, but what happened to the focus on helping the MOST you possibly can!  If you are so eager to criticize the idea of giving up chocolate in favor of the hugely more-effective tactic of just donating some money to givewell top charities, then why don't you also give up this speculative longtermist blogging and instead try to earn more money to donate to GiveWell?!"  This is totally fair and sympathetic.  In response I would say:

  • Personally I am indeed convinced by the (admittedly weird and somewhat "fanatical") argument that humanity's long-term future is potentially very, very important, so even a small uncertain effect on high-leverage longtermist topics might be worth a lot more than it seems.
    • I also have some personal confidence that some of the random, very-indirect-path-to-impact stuff that I get up to, is indeed having some positive effects on people and isn't just disappearing into the void.  But it's hard to communicate what gives me that confidence, because the positive effects are kind of illegible and diffuse rather than easily objectively measurable.
    • I also happen to be in a life situation where I have a pretty good personal fit for engaging a lot with longtermism -- I happen to find the ideas really fascinating, have enough flexibility that I can afford to do weird part-time remote work for EA organizations instead of remaining in a normal job like my former aerospace career, et cetera.  I certainly would not advise any random person on the street to quit their job and try to start an AI Safety substack or something!!
  • I do think it's good (at least for my own sanity) to stay at least a little grounded and make some donations to more straightforward neartermist stuff, rather than just spending all my time and effort on abstract longtermist ideas, even if I think the longtermist stuff is probably way better.

Overall, rather than the strong and precise claim that "you should definitely do longtermism, it's 10,000x more important than anything else", I'd rather make the weaker, broader claims that "you shouldn't just dismiss longtermism out of hand; there is plausibly some very good stuff here" and that "regardless of what you think of longtermism, I think you should definitely try to adopt more of an EA-style mindset in terms of being scope-sensitive and seeking out what problems seem most important/tractable/neglected, rather than seeing things too much through a framework of moral obligations and personal sacrifice, or being unduly influenced by whatever controversies or moral outrages are popular / getting the most news coverage / etc."

Hi, thanks again for the detailed reply — I really appreciate the clarity. I’m finding it genuinely eye-opening that many issues I assumed were morally significant turn out to matter far less in practice once scale and impact are properly quantified. I think I was heavily influenced by various online movements that are very loud and visible, so it confused me that EA rarely foregrounded topics like slave labor in chocolate or Coca-Cola’s water practices, despite covering other global issues such as malaria.

One thing I do want to clarify is that there are ethical chocolate companies using fair-trade, non–child-labor supply chains, so it’s not that “all chocolate must be boycotted,” but rather that many major brands have problematic sourcing. Still, your calculations make it clear that a solo boycott makes essentially no difference to the working hours or conditions of any child laborer, and similarly an individual boycott won’t meaningfully affect things like water extraction by Coca-Cola in Africa or India.

I'm also not sure what you mean with your calculation about buying a child's freedom at $6 per hour, or the part about valuing your own time at $20-30 per hour. To be honest, from a consequentialist perspective there isn't a difference between personally doing harm and letting harm continue, but in this case you aren't really buying a child's freedom; you're just not forcing them to work an additional hour, if that reframing makes sense. Kind of like how veganism doesn't save lives, it just avoids killing additional animals. To put it simply, by refusing to buy slave-labor chocolate you are not really helping people, you are just not hurting them (whereas buying the chocolate leads to harm).

On the broader question of morally permissible actions, I’ve been strongly shaped by this Aeon article (“Why it is better not to aim at being morally perfect”). I agree that doing genuine moral good matters, but being a 10/10 moral saint is neither realistic nor psychologically healthy. That’s why I find Schelling points useful — for example, the 10% pledge. Without Schelling points, it feels like the only consistent utilitarian answer would be to live extremely frugally and donate almost everything. So my original question was really about which actions rise to the level of meaningful Schelling points. It seems that many things that online activists frame as huge moral imperatives (boycotting certain products, etc.) actually have very small expected impact and thus probably don’t qualify.

On veganism: I’ve been extremely strict (even avoiding foods with small amounts of egg or dairy, even while traveling), but seeing that roughly 75% of the EA Forum isn’t vegan does make me wonder whether relaxing a bit would still be morally acceptable. At the same time, I’m not fully comfortable with an attitude of “it’s fine to cause some harm as long as I donate to GiveWell later,” since that can be used to rationalize almost anything (e.g., “I’ll do X harmful thing like murder a man and offset it with $5k to AMF”). I understand the logic in small, low-impact cases, but taken broadly it seems like a slippery ethical framing.

A (slightly personal) question: do you think one could argue that you might actually have more impact as an aerospace engineer donating 10% of your income than by doing local EA organization work? I imagine it depends heavily on the quality of the contributions and the kinds of community-building work being done, but I’m curious how you think about that tradeoff.

Regarding longtermism: I’ll admit I’m somewhat biased. I’ve absorbed a lot of the “nothing ever happens” attitude, so doomsday scenarios often feel exaggerated to me. But setting that aside, I can acknowledge that global catastrophic risks like nuclear conflict, pandemics, and climate instability are real and non-zero. We literally just lived through a pandemic. My concern is that nearly all meaningful action in these areas ultimately seems to run through political institutions. Research can help, but if political leaders are uninformed or uninterested, the marginal value of EA research feels limited. That sense might also be influenced by my experience with college ethics classes — AI ethics, especially, often felt detached from real-world levers.

Realistically, it seems like the most impactful thing an individual can do for x-risk at the moment is vote for politicians who take these issues seriously, but politicians who are aware of (or influenced by) effective altruism seem rare.

Finally, several of the replies have made me think about the prisoner’s dilemma dynamic underlying many collective-action problems. With things like chocolate, it seems like individual action is (almost) negligible. Veganism is different because the per-unit harm is much larger. But I’m curious how EA generally thinks about prisoner’s dilemmas in areas like climate change, voting, or even the Donation Election. Why should I vote in the Donation Election if my individual vote is almost certainly not decisive? Or more broadly, when do extremely low-probability marginal contributions still matter?

Thanks again — the discussion has been really helpful in clarifying what actually matters versus what merely feels morally salient.

Re: veganism, have you seen the FarmKind compassion calculator? https://www.farmkind.giving/compassion-calculator#try-it

It will tell you how many animals are raised for your food, depending on your dietary type, and how much money would be needed in donations to offset that.

The moral upshot here is that eggs are far worse than dairy from an animal welfare perspective, mostly because cows are a lot larger than chickens. So if you feel like adding animal products to make your life convenient but worry about suffering, add dairy products.

And also donate to effective animal charities. There's no reason to stick to the few $ per month (or fraction of a $ if it's dairy) needed to offset the suffering from your diet - you can do much more good than that. Most EAs aren't really into offsetting. We don't actually think you should donate less to something because it's more effective. This is just a calculator to attempt to explain more broadly why effective animal advocacy giving is good.

I skimmed through the website, and I’m not entirely sure how they’re calculating the dollar amounts. The comparisons also seem somewhat subjective, and some of the proposed impacts (e.g., creating more plant-based meat options) don’t obviously translate into measurable reductions in meat consumption.

I’m also not sure what they mean by this statement:

“We don’t actually think you should donate less to something because it’s more effective.”


“We don’t actually think you should donate less to something because it’s more effective.”

(All the below numbers are made up for example purposes and don't represent the cost of chicken-related interventions)

Let's say that I want to have chicken for dinner tonight. However, I don't want to cause chickens to suffer. I have worked out that by donating $0.10 to Chicken Charity A I can prevent the same amount of suffering that eating a chicken dinner would cause, so I do that. Then I find out that Chicken Charity B can do the same thing for $0.05, so I do that instead for tomorrow night's chicken dinner. A charity being 2x as effective means I donate half as much to it. This is the "offsetting" mindset.

Effective Altruists do not (usually) think this way. We don't think of our donations as aiming at a fixed amount of good, with effectiveness serving to reduce the amount we have to donate. We usually do it the other way around: donate a fixed amount that is set by our life circumstances (e.g. the 10% pledge), and maximise the effectiveness of that amount in order to do as much good as possible.
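
A tiny sketch of the two mindsets, reusing the made-up chicken-charity numbers from the example above (purely illustrative, as noted; the yearly budget figure is also hypothetical):

```python
# Made-up numbers from the example above; not real cost-effectiveness figures.
cost_to_offset_one_dinner = {"Chicken Charity A": 0.10,   # $ to offset one chicken dinner
                             "Chicken Charity B": 0.05}

# Offsetting mindset: fix the amount of good (offset tonight's dinner), minimize spend.
for charity, cost in cost_to_offset_one_dinner.items():
    print(f"Offsetting via {charity}: donate ${cost:.2f} per dinner")
# The 2x-more-effective charity ends up receiving half as much money.

# Typical EA mindset: fix the budget (e.g. a pledge set by life circumstances),
# then maximize the good done with it.
yearly_budget = 100.00   # hypothetical figure
for charity, cost in cost_to_offset_one_dinner.items():
    dinners_offset = yearly_budget / cost
    print(f"Fixed ${yearly_budget:.0f} budget via {charity}: "
          f"{dinners_offset:.0f} dinners' worth of suffering prevented")
# The 2x-more-effective charity prevents twice as much suffering.
```

Same charities, same numbers; the only thing that changes is whether the quantity held fixed is the good done or the money given.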

I'm a little confused by the claim that "personal choices" aren't effective, but corporate pressure campaigns are. Isn't the way a corporate pressure campaign works that you convince the target that they will be boycotted unless they make the changes you are demanding? So the corporate pressure campaign is only effective if you have people that are willing to change their personal choices. Or am I misunderstanding and that's not how corporate pressure campaigns work?

I'm not an expert about this, but my impression (from articles like this: https://coefficientgiving.org/research/why-are-the-us-corporate-cage-free-campaigns-succeeding/ , and websites like Animal Ask) is that the standard EA-style corporate campaign involves:

  • a relatively small number of organized activists (maybe, like, 10 - 100, not tens of thousands)...
  • ...asking a corporation to commit to some relatively cheap, achievable set of reforms (like switching their chickens to larger cages or going cage-free, not like "you should all quit killing chickens and start a new company devoted to ecological restoration")
  • ...while also credibly threatening to launch a campaign of protests if the corporation refuses
  • Then rinse & repeat for additional corporations / additional incremental reforms (while also keeping an eye out to make sure that earlier promises actually get implemented).

My impression is that this works because the corporations decide that it's less costly for them to implement the specific, limited, welfare-enhancing "ask" than to endure the reputational damage caused by a big public protest campaign.  The efficacy doesn't depend at all on a threat of boycott by the activists themselves.  (After all, the activists are probably already 100% vegan, lol...)

You might reasonably say "okay, makes sense, but isn't this just a clever way for a small group of activists to LEVERAGE the power of boycotts?  the only reason the corporation is afraid of the threatened protest campaign is because they're worried consumers will stop buying their products, right?  so ultimately the activists' power is deriving from the power of the mass public to make individual personal-consumption decisions".

This might be sorta true, but I think there are some nuances:

  • i don't think the theory of change is that activists would protest and this would kick off a large formal boycott -- most people don't ever participate in boycotts, etc.  instead, I think the idea is that protests will create a vague haze of bad vibes and negative associations with a product (ie the protests will essentially be "negative advertisements"), which might push people away from buying even if they're not self-consciously boycotting.  (imagine you usually go to chipotle, but yesterday you saw a news story about protestors holding pictures of gross sad caged farmed chickens used by chipotle -- yuck!  this might tilt you towards going to a nearby mcdonalds or panda express instead that day, even though ethically it might make no sense if those companies use equally low-welfare factory-farmed chicken)
  • corporations apparently often seem much more afraid of negative PR than it seems they rationally ought to be based on how much their sales would realistically decline (ie, not much) as a result of some small protests.  this suggests that much of the power of protests is flowing through additional channels that aren't just the immediate impact on product sales
  • even if in a certain sense the cage-free activists' strategy relies on something like a consumer boycott (but less formal than a literal boycott, more like "negative advertising"), that still indicates that it's wise to pursue the leveraged activist strategy rather than the weaker strategy of just trying to be a good individual consumer and doing a ton of personal boycotts
  • in particular, a key part of the activists' power comes from their ability to single out a random corporation and focus their energies on it for a limited period of time until the company agrees to the ask.  this is the opposite of the OP's diffuse strategy of boycotting everything a little bit (they're just one individual) all the time
  • it's also powerful that the activists can threaten big action versus no-action over one specific decision the corporation can make, thus creating maximum pressure on that decision.  Contrast OP -- if Nestle cleaned up their act in one or two areas, OP would probably still be boycotting them until they also cleaned up their act in some unspecified additional number of areas.
  • We've been talking about animal welfare, which, as some other commenters have noted, has a particularly direct connection to personal consumption, so the idea of something like a boycott at least kinda makes sense, and maybe activists' power is ultimately in part derived from boycott-like mechanisms.  But there are many political issues where the connection to consumer behavior is much more tenuous and indirect.  Suppose you wanted to reduce healthcare costs in the USA -- would it make sense to try and get people to boycott certain medical procedures (but people mostly get surgeries when they need them, not just on a whim) or insurers (but for most people this comes as a fixed part of their job's benefits package)??  Similarly, if you're a YIMBY trying to get more homes built, who do you boycott?  The problem is really a policy issue of overly-restrictive zoning rules and laws like NEPA, not something you could hope to target by changing your individual consumption patterns.  This YIMBY example might seem like a joke, but OP was seriously suggesting boycotting Nestle over the issue of California water shortages, which, like NIMBYism, is really mostly a policy failure caused by weird farm-bill subsidies and messed-up water-rights laws that incentivize water waste -- how is pressure on Nestle, a European company, supposed to fix California's busted agricultural laws??  Similarly, they mention boycotting Coca-Cola soda because Coca-Cola does business in Israel. How are reduced sales for the Coca-Cola company supposed to change the decisions of Bibi Netanyahu and his ministers?? One might as well refuse to buy Lenovo laptops or Huawei phones in an attempt to pressure Xi Jinping to stop China's ongoing nuclear-weapons buildup... surely there are more direct paths to impact here!

I find it paradoxical that the signature strategy of a major cause area -- threatening a "haze of bad vibes and negative associations" if a corporation doesn't somewhat clean up its animal-welfare record -- probably would be ineffective if everyone acted like EAs. The mechanism of action is still dependent on individual consumer choice ("tilt you towards going to a nearby mcdonalds or panda express instead that day") and the commentariat can be read as implying to OP that making individual consumption decisions based on such considerations is too low-impact to pay attention to.

There's something that feels vaguely non-cooperative about this -- we're dependent on other people responding to our threatened PR campaigns regarding animal welfare (or at least on corporations believing other people would respond), but seem not interested in cooperating with other people's altruistic PR campaigns. I'm not sure there is anything practical to do with this mood -- other than encourage OP to present clearer models of impact for the interventions they mentioned -- but I think it is worthwhile to acknowledge the mood. And maybe if you have a choice in the supermarket between a Nestle and non-Nestle chocolate product, consider purchasing the latter?

Agreed that it's a weird mood, but perhaps inevitable.

In terms of the inequality between running PR campaigns while being "not interested in cooperating with other people's altruistic PR campaigns": insofar as attention is ultimately a fixed resource, it's an intrinsically adversarial situation between different attempts to capture people's attention.  (Although there are senses in which this is not true -- many causes are often bundled together in a political alliance.  And there could even be a broader cultural shift towards people caring more about behaving ethically, which would perhaps "lift all boats" in the do-gooder PR-campaign space!)  Nevertheless, given the mostly fixed supply of attention, it certainly seems fine to steal eyeballs for thoughtful, highly-effective causes that would otherwise be watching TikTok, and it seems similarly fine to steal eyeballs for good causes that would otherwise have gone to dumb, counterproductive causes (like the great paper-straw crusade).  After that, it seems increasingly lamentable to steal eyeballs from increasingly reasonably-worthy causes, until you get to the level of counterproductive infighting among people who are all trying hard to make the world a better place.  Of course, this is complicated by the fact that everyone naturally thinks their own cause is worthier than others.  Nevertheless, I think some causes are worthier than others, and fighting to direct attention towards the worthiest causes is a virtuous thing to do -- perhaps even doing one's civic duty as a participant in the "marketplace of ideas".

In terms of the inequality between organizers (who are being high-impact only because others are low impact) vs consumers whose behavior is affected:

  • This is omnipresent everywhere in EA, right?  Mitigating x-risks is only high-impact because the rest of the world is neglecting it so badly!
  • Are we cruelly "stealing their impact"?  I mean, maybe??  But this doesn't seem so bad, because other people don't care as much about impact.  Conversely, some causes are much better than EA at going viral and raising lots of shallow mass awareness -- but this isn't so terrible from EA's perspective, because EA doesn't care as much about going viral.
  • But talk of "stealing impact" is weird and inverted... Imagine if everyone turned EA and tried to do the most high-impact thing.  In this world, it might be harder to have very high impact, but this would hardly be cause for despair, because the actual world would be immensely better off!  It seems perverse to care about imagined "impact-stealing" rather than the actual state of the world.
  • It also seems like a fair deal insofar as the organizers have thought carefully and worked hard (a big effort), while it's not like the consumers are being coerced into doing menial low-impact gruntwork for long hours and low pay; they're instead making a tiny, nearly unconscious choice between two very similar options.  In a way, the consumers are doing marginal charity, so their impact is higher than it seems.  But asking people to go beyond marginal charity and make costlier sacrifices (ie, join a formal boycott, or consciously keep track of long lists of which companies are good versus bad) seems like more of an imposition.

Re: Nestle in particular, I get the spirit of what you're saying, although see my recent long comment where I try to think through the chocolate issue in more detail.  As far as I can tell, the labor-exploitation problems are common to the entire industry, so switching from Nestle to another brand wouldn't do anything to help??  (If anything, possibly you should be switching TOWARDS nestle, and away from companies like Hershey's that get a much higher % of their total revenue from chocolate?)

I think this spot-check about Nestle vs cocoa child labor (and about Nestle vs drought, and so forth) illustrates my point that there are a lot of seemingly-altruistic PR campaigns that actually don't do much good.  Perhaps those PR campaigns should feel bad for recruiting so much attention only to waste it on a poorly-thought-out theory of impact!

I don't think that the analogy between X-risk work and this kind of protest makes sense.

The reason X-risk work is so impactful is that very few people are working on X-risk at all. As you say, if more people worked on X-risk, the (marginal) impact of each one would be lower, but that's a good thing because more work would be getting done.

The claim being made about the animal welfare activists is that the mechanism of change relies on both the "high-impact" organizers, as well as the "low-impact" responsive consumers who will change their behavior in response to the protests. I think Jason's point is that:

(a) it doesn't make sense to call the organizers "high-impact" and the responsive consumers "low-impact", if both of these groups are necessary for the protest to have impact at all,

(b) if we, as EAs, take the "organizer" role in our campaigns, we're expecting a bunch of people to take the "responsive consumer" role, even if they don't care as much about the issue as we do. So the cooperative thing to do would be to ourselves take the "responsive consumer" role in campaigns that others are organizing, even if we don't care as much about the issue as those organizers do.

--

I do, however, think that (b) only applies to cases where there is an organized protest. If there were a prominent group of anti-Nestle protesters with specific demands of Nestle that had a reasonable chance of being adopted and that would lead to positive impact, and they were protesting because Nestle hadn't met those demands, then maybe this argument would counsel that we should support them if it doesn't cost too much. But I don't really think this applies to the OP, who seemed to be suggesting that we should do a bunch of one-person "personal boycotts", which I don't think will have much impact.

The boycott of Nestlé isn’t solely an individual action; there are others who also avoid Nestlé, Amazon, and similar companies. That said, these efforts remain relatively small in scale and don’t constitute a large, coordinated movement.

Re: Nestle in particular, I get the spirit of what you're saying, although see my recent long comment where I try to think through the chocolate issue in more detail.  As far as I can tell, the labor-exploitation problems are common to the entire industry, so switching from Nestle to another brand wouldn't do anything to help??

That could be correct. But I think the flip side of "my individual chocolate purchasing decisions aren't very impactful" is that maybe we should defer under some circumstances to the people who have thought a lot about these kinds of issues, even if we think their modeling isn't particularly good. Weak modeling is probably better, in expectation, than no modeling at all -- and developing our own models may not be an impactful use of our time. Or stated differently, I would expect the boycott targets identified by weak modeling to be more problematic actors, in expectation, than if we chose our chocolate brands by picking a brand out of a hat.[1] (This doesn't necessarily apply to boycotts that are not premised on each additional unit of production causing marginal harms.)

  1. ^

    Of course, we may not be picking a brand at random -- we may be responding to price and quality differences. 

That is some useful information. It seems like what you're saying is that these campaigns really involve three different groups:

(a) the "inner circle" of 10-100 activists that are organizing the campaign,

(b) some larger number of supporters that are waiting in the wings to execute the threatened protests if the original demands aren't met,

(c) the "audience" of the protests - i.e. this is the general public who will be driven away from the target in response to the protests.

And it's really only group (c) that needs to be big enough as a fraction of the target's total business that the target finds it worth listening to.

Are there any good sources that go into more detail about how these kind of campaigns work? (I'm interested in this in general, not just in relation to this specific post)

That's an interesting way to think about it!  Unfortunately this is where the limits of my knowledge about the animal-welfare side of EA kick in, but you could probably find more info about these protest campaigns by searching some animal-welfare-related tags here on the Forum, or going to the sites of groups like Animal Ask or Hive that do ongoing work coordinating the field of animal activists, or by finding articles / podcast interviews with Lewis Bollard, who is the head grantmaker for this stuff at Open Philanthropy / Coefficient Giving, and has been thinking about the strategy of cage-free campaigns and related efforts for a very long time.

(note the science is actually not clear on whether breastmilk is any better than formula; they seem about the same for babies' health! https://parentdata.org/what-the-data-actually-says-about-breastfeeding/ )

The larger concern about formula manufacturers' practices is in LMICs, where the concerns are different than in developed countries. Discussion about increases in mortality associated with unclean water here, for example. There are also other factors that come into play in the LMIC context that aren't in scope for the linked article written from a developed-country perspective. 

This is a good, clear, helpful, informative comment, right up until this last part:

Fun fact: it's actually this same focus on finding causes that are important (potentially large in scale), neglected (not many other people are focused on them) and tractable, that has also led EA to take some "sci-fi doomsday scenarios" like wars between nuclear powers, pandemics, and AI risk, seriously. Consider looking into it sometime -- you might be surprised how plausible and deeply-researched these wacky, laughable, uncool, cringe, "obviously sci-fi" worries really are! (Like that countries might sometimes go to war with each other, or that it might be dangerous to have university labs experimenting with creating deadlier versions of common viruses, or that powerful new technologies might sometimes have risks.)

Nuclear war and pandemics are obviously real risks. Nuclear weapons exist and have been used. The Cold War was a major geopolitical era in recent history. We just lived through covid-19 and there have been pandemics before. The OP specifically only mentioned "some sci-fi doomsday scenarios regarding AI", nothing about nuclear war or pandemics.

The euphemism "powerful new technologies might sometimes have risks" considerably undersells the concept of AI doomsday (or utopia), which is not about the typical risks of new technology but is eschatological and millennialist in scope. New technologies sometimes have risks, but that general concept in no way supports fears of AI doomsday.

As far as I can tell, most AI experts disagree with the view that AGI is likely to be created within the next decade and disagree with the idea that LLMs are likely to scale to AGI. This is entirely unlike the situation with nuclear war or pandemics, where there is much more expert consensus.

I don’t agree that the AI doomsday fears are deeply researched. The more I dive into EA/rationalist/etc. arguments about AGI and AI risk, the more I’m stunned by how unbelievably poorly and shallowly researched most of the arguments are. Many of the people making these arguments seem not to have an accurate grasp of the definitions of important concepts in machine learning, seem not to have considered some of the obvious objections before, make arguments using fake charts with made-up numbers and made-up units, make obviously false and ridiculous claims (e.g. GPT-4 has the general intelligence of a smart high school student), do seat-of-the-pants theorizing about cognitive science and philosophy of mind without any relevant education or knowledge, deny inconvenient facts, jump from merely imagining a scenario to concluding that it’s realistic and likely with little to no evidentiary or argumentative support, treat subjective guesses as data or evidence, and so on. It is some of the worst "scholarship" I have ever encountered in my life. It’s akin to pseudoscience or conspiracy theories — just abysmal, abysmal stuff. The worst irrationality.

The more I raise these topics and invite people to engage with me on them, the worse and worse my impression gets of the "research" behind them. Two years ago, I assumed AGI existential risk discourse was much more rational, thoughtful, and plausible than I think it is now; that initial impression came from knowing much less than I do now and from giving people the benefit of the doubt. I wouldn’t have imagined that the ridiculous stuff now celebrated as a compelling case would even be considered acceptable. The errors are so unbelievably bad that I’m in disbelief at what people can get away with.

I don’t think it’s fair for you to sneer at the OP for having skepticism about AI doomsday, since their initial reaction is rational and correct, and your defense is, in my opinion, misleading.

I still upvoted this comment, though, since it was mostly helpful and well-argued.

I'll admit to a perhaps overly mean-spirited or exasperated tone in that section, but I think the content itself is good actually(tm)?

I agree with you that LLM tech might not scale to AGI, and thus AGI might not arrive as soon as many hope/fear.  But this doesn't really change the underlying concern??  It seems pretty plausible that, if not in five years, we might get something like AGI within our lifetime via some improved, post-LLM paradigm. (Consider the literal trillions of dollars, and thousands of brilliant researchers, now devoting their utmost efforts towards this goal!)  If this happens, it does not take some kind of galaxy-brained rube-goldberg argument to make an observation like "if we invent a technology that can replace a lot of human labor, that might lead to extreme power concentration of whoever controls the technology / disempowerment of many people who currently work for a living", either via "stable-totalitarianism" style takeovers (people with power use powerful AI to maintain and grow this power very effectively) or via "gradual disempowerment" style concerns (once society no longer depends on a broad base of productive, laboring citizens, there is less incentive to respect those citizens' rights and interests).

Misalignment / AI takeover scenarios are indeed more complicated and rube-goldberg-y IMO.  But the situation here is very different from what it was ten years ago -- instead of just doing Yudkowsky-style theorycrafting based on abstract philosophical principles, we can do experiments to study and demonstrate the types of misalignment we're worried about (see papers by Anthropic and others about sleeper agents, alignment faking, chain-of-thought unfaithfulness, emergent misalignment, and more).  IMO the detailed science being done here is more grounded than the impression you'd get by just reading people slinging takes on twitter (or, indeed, by reading comments like mine here!).  Of course if real AGI turns out to be in a totally new post-LLM paradigm, that might invalidate many of the most concrete safety techniques we've developed so far -- but IMO that makes the situation worse, not better!

In general, the whole concept of dealing with existential risks is that the stakes are so high that we should start thinking ahead and preparing to fight them, even if it's not yet certain they'll occur.  I agree it's not certain that LLMs will scale to AGI, or that humanity will ever invent AGI. But it certainly seems plausible! (Many experts do believe this, even if they are in the minority on that survey.  Plus, the entire US stock market these days is basically obsessed with figuring out whether AI will turn out to be a huge deal or a nothingburger or something in-between, so the market doesn't consider it an obvious guaranteed-nothingburger.  And of course all the labs are racing to get as close to AGI as possible, since the closer you get to AGI, the more money you can make by automating more and more types of labor!)  So we should probably start worrying now, just like we worry about nuclear war even though it seems (hopefully!) unlikely to me that Putin or Xi Jinping or the USA would really decide to launch a major nuclear attack even in an extreme situation like an invasion of Taiwan.  New technologies sometimes have risks; AI might (not certain, but definitely might) become an EXTREMELY powerful new technology, so the risks might be large!

If the argument were merely that there’s something like a 1 in 10 million chance of a global catastrophic event caused by AGI over the next 100 years and we should devote a small amount of resources to this problem, then you could accept flimsy, hand-wavy arguments. But Eliezer Yudkowsky forecasts a 99.5% chance of human extinction from AGI "well before 2050", unless we implement his aggressive global moratorium on AI R&D. The flimsy support that can justify a small allocation of resources can’t justify a global moratorium on AI R&D, enforced by militaries. (Yudkowsky says that AI datacentres that violate the moratorium should be blown up.)

Yudkowsky is on the extreme end, but not by much. Some people want to go to extreme lengths to stop AI. Pause AI calls for a moratorium similar to what Yudkowsky recommends. The amount of funding and attention to AI existential risk from EA is not a small percentage but a very large one. So, whatever would support a modest, highly precautionary stance toward AI risk does not support what is actually happening.

I’ll take your concern about AI concentrating wealth and power if there is widespread labour automation as an example of what I mean with regard to flimsy evidentiary/argumentative support. Okay, let’s imagine that in, say, 2050, we have humanoid robots that are capable of automating most paid human work that currently exists, both knowledge work and work that has a physical dimension. Let’s suppose the robots are:

  • Built with commodity hardware that’s roughly as expensive as a Vespa or a used car
  • Sold directly to consumers
  • Running free, open source software and free, open source/open weights AI models
  • Programmed to follow the orders of their owner, locked with a password and/or biometric security

Would your concern about wealth and power concentration apply in such a scenario? It’s hard to see how it would. In this scenario, humanoid robots with advanced AI would be akin to personal computers or smartphones. Powerful but so affordable and widely distributed that the rich and powerful hardly have any technological edge over the poor and powerless. (A billionaire uses an iPhone, the President of the United States uses an iPhone, and the cashier at the grocery store uses an iPhone.)

You could also construct a scenario where humanoid robots are extremely expensive, jealously kept by the companies that manufacture them and not sold, run proprietary software and closed models, and obey only the manufacturer’s directives. In that case, power/wealth concentration would be a concern.

So, which scenario is more likely to be true? What is more likely to be the nature of these advanced humanoid robots in 2050?

We have no idea. There is simply no way for us to know, and as much as we might want to know, be desperate to know, twist ourselves in knots trying to work it out, we won’t get any closer to the truth than when we started. The uncertainty is irreducible.

Okay, so let’s accept we don’t know. Shouldn’t we prepare for the latter scenario, just in case? Maybe. How?

Coming up with plausible interventions or preparations at this early stage is hopeless. We don’t know which robot parts need to be made cheaper. The companies that will make the robots probably don’t exist yet. Promoting open source software or open AI models in general today won’t stop any company in the future from using proprietary software and closed models. Even if we passed a law now mandating all AI and robotics companies had to use open source software and open models — would we really want to do that, based on a hunch? — that law could easily be repealed in the future.

Plus, I made the possibility space artificially small. I made things really simple by presenting a binary choice between two scenarios. In reality, there is a combinatorial explosion of possible permutations of different technical, design, and business factors involved, most likely including ones we can’t imagine now and that, if we were shown a Wikipedia article from the future describing one, we still wouldn’t understand. So, there is a vast space of possibilities based on what we can already imagine, and there will probably be even more possibilities based on new technology and new science that we can’t yet grasp.

Saying "prepare now" sounds sensible and precautionary, but it’s not actionable.

Also, fundamental question: why is preparing earlier better? Let’s say in 2025 humanoid robots account for 0% of GDP (this seems true), in 2030 they’ll account for 1%, in 2040 for 25%, and in 2050 for 50%. What do we gain by trying to prepare while humanoid robots are at 0% of GDP? Once they’re at 1% of GDP, or even 0.5%, or 0.25%, we’ll have a lot more information than we do now. I imagine that 6 months spent studying the problem while the robots are at 1% of GDP will be worth much more than 5 years of research at the 0% level.

Perhaps a good analogy is scientific experiments. The value of doing theory or generating hypotheses in the absence of any experiments or observations — in the absence of any data, in other words — is minimal. For the sake of illustration, let’s imagine you’re curious about how new synthetic drugs analogous to LSD but chemically unlike any existing drugs — not like any known molecules at all — might affect the human mind. Could they make people smarter temporarily? Or perhaps cognitively impaired? Could they make people more altruistic and cooperative? Or perhaps paranoid and distrustful?

You have no data in this scenario: you can’t synthesize molecules, you can’t run simulations, there are no existing comparisons, natural or synthetic, and you certainly can’t test anything on humans or animals. All you can do is think about it.

Would time spent in this pre-empirical state of science (if it can be called science) have any value? Let's say you were in that state for... 50 years... 100 years... 500 years... 1,000 years... would you learn anything? Would you gain any understanding? Would you get any closer to truth? I think you wouldn't, or you would so marginally that it wouldn't matter.

Then if you suddenly had data, if you could synthesize molecules, run simulations, and test drugs on live subjects, in a very short amount of time you would outstrip, many times over, whatever little knowledge you might have gained from just theorizing and hypothesizing about it. A year of experiments would be worth more than a century of thought. So, if for some reason, you knew you couldn't start experiments for another hundred years, there would be very little value in thinking about the topic before then.

The whole AGI safety/alignment and AGI preparedness conversation seems to rely on the premise that non-empirical/pre-empirical science is possible, realistic, and valuable, and that if we, say, spend $10 million of grant money on it, it will have higher expected value than giving it to GiveWell's top charities, or pandemic preparedness, or asteroid defense, or cancer research, or ice cream cones for kids at county fairs, or whatever else. I don't see how this could be true. I don't see how this can be justified. It seems like you basically might as well light the money on fire.

Empirical safety/alignment research on LLMs might have value if LLMs scale to AGI, but that's a pretty big 'if'. For over 15 years, up until (I'm not sure, maybe around 2016?) Yudkowsky and MIRI still thought symbolic AI would lead to AGI in the not-too-distant future. In retrospect, that looks extremely silly. (Actually, I thought it looked extremely silly at the time, and said so, and also got pushback from people in EA way back then too. Plus ça change! Maybe in 2035 we'll be back here again.) The idea that symbolic AI could ever lead to AGI, even in 1,000 years, just looks unbelievably quaint when you compare symbolic AI systems to a system like AlphaGo, AlphaStar, or ChatGPT. Deep learning/deep RL-based systems still have quite rudimentary capabilities compared to the average human being, or, in some important ways, even compared to, say, a cat; and when you compare how much simpler and how much less capable symbolic AI systems are than these deep neural network-based systems, the gap is ridiculous. Symbolic AI is not too different from conventional software, and the claim that symbolic AI would someday soon ascend to AGI feels not too different from the claim that, in the not-too-distant future, Microsoft Windows will learn how to think. The connection between symbolic AI and human general intelligence seems to boil down to, essentially, a loose metaphorical comparison between software/computers and human brains.

I don't think the conflation of LLMs with human general intelligence is quite as ridiculous as it was with symbolic AI, but it is still quite ridiculous. Particularly when people make absurd and plainly false claims that GPT-4 is AGI (as Leopold Aschenbrenner did) or o3 is AGI (as Tyler Cowen did), or that GPT-4 is a "very weak AGI" (as Will MacAskill did). This seems akin to saying a hot air balloon is a spaceship, or a dog is a bicycle. It's hard to even know what to say.

As for explicitly, substantively making the argument about why LLMs won't scale to AGI, there are two distinct and independent arguments. The first argument involves pointing out the limits to LLM scaling. The second argument involves pointing out the fundamental research problems that scaling can't solve.

I used to assume that people who care a lot about AGI alignment/safety as an urgent priority must have thoughtful replies to these sorts of arguments. Increasingly, I get the impression that most of those people have simply never thought about them before, and weren't even aware such arguments existed.

There is a big difference between veganism and most(?) other boycott campaigns. Every time you purchase an animal product, you are causing significant direct harm (in expectation, if you accept the vegan argument). This is because if demand for animal products increases by one unit, we should expect some fraction of an additional unit to be produced to meet that demand, on average (the particular fraction depending on price elasticity, since the extra demand also raises prices a bit, which puts other consumers off).
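For readers who want the mechanism spelled out, here is a minimal sketch of the standard partial-equilibrium back-of-envelope behind this kind of claim, assuming linear supply and demand curves. The elasticity values below are hypothetical placeholders rather than estimates for any real market; the point is only that the pass-through from forgone demand to forgone production is usually somewhere between 0 and 1.

```python
# Toy partial-equilibrium sketch: how much does production fall when one
# consumer stops buying a unit of some product?
#
# With linear supply and demand, a one-unit leftward shift in demand lowers
# the price slightly; producers cut output and other consumers buy a bit
# more at the lower price. The production cut works out to
#   elasticity_supply / (elasticity_supply + |elasticity_demand|)
# units per unit of forgone consumption.
#
# The numbers below are hypothetical placeholders, chosen only to
# illustrate the shape of the result.

def production_passthrough(elasticity_supply: float, elasticity_demand_abs: float) -> float:
    """Fraction of a marginal demand reduction that shows up as reduced production."""
    return elasticity_supply / (elasticity_supply + elasticity_demand_abs)

if __name__ == "__main__":
    for label, e_s, e_d in [
        ("inelastic supply", 0.5, 1.0),
        ("moderately elastic supply", 2.0, 0.8),
        ("very elastic supply", 5.0, 0.5),
    ]:
        frac = production_passthrough(e_s, e_d)
        print(f"{label}: each unit not bought cuts production by ~{frac:.2f} units")
```

With these toy numbers, a unit not bought translates into anywhere from roughly a third to nine-tenths of a unit not produced; the estimates cited elsewhere in this thread for specific animal products fall in this range.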

A lot of other boycott campaigns aren't like this. For example, take the boycott of products which have been tested on animals. Here you don't do direct harm with each purchase in the same way (or at least if you do, it is probably orders of magnitude less). Instead, the motivation is that if enough people start acting like this, it will lead to policy change.

In the first case, it doesn't matter if no one else in the world agrees with you: participating in the boycott can still do significant good. In the second case, a large number of people are required for it to have meaningful impact. It makes sense that impact-minded EAs are more inclined to support a boycott of the first kind.

I think a lot of your examples probably fall under the second kind (though not all). And I think that's a big part of the answer to your question. Also, for at least some of the ones in the first kind, I think most EAs probably just disagree with the fundamental argument. For example, the environmental impact of using LLMs isn't actually that bad: https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about.

To clarify my position, I am fairly confident that the consumption of chocolate produced through slave labor follows a straightforward supply-and-demand pattern: increased consumer demand leads to increased production, which in turn requires additional exploited laborers. In the same way, it is commonly stated that producing one liter of Coca-Cola requires approximately two liters of water. If Coca-Cola sources this water from communities already facing scarcity, then purchasing a two-liter bottle could be understood as indirectly contributing to the extraction of four liters of water from a community that may urgently need it.
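As a tiny illustration of the arithmetic in the water example, here is a sketch using the roughly-two-litres-per-litre figure cited above; real water-footprint estimates vary widely depending on which direct and indirect uses are counted, so the intensity value should be read as illustrative only.

```python
# Sketch of the indirect-water arithmetic described above.
# The intensity figure (2.0 litres of water per litre of soda) is the one
# cited in the discussion; treat it as illustrative, not authoritative.

WATER_LITRES_PER_LITRE_OF_SODA = 2.0

def indirect_water_use(litres_of_soda: float) -> float:
    """Litres of water attributed to a purchase under the cited intensity figure."""
    return litres_of_soda * WATER_LITRES_PER_LITRE_OF_SODA

print(indirect_water_use(2.0))  # a two-litre bottle -> 4.0 litres of water
```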

With that in mind, I am interested in whether there are other common, everyday behaviors—analogous to veganism or the examples above—where an individual’s consumption reliably results in a direct negative impact. If so, are these harms measurable in any meaningful way? And if they are not easily quantifiable, should we treat them as negligible or morally permissible in order to avoid the implication that one must adopt an ascetic lifestyle simply to remain ethically consistent?

Grok and other large language models developed by xAI are also subject to boycott discussions, mainly due to environmental concerns. Training and operating advanced AI models consumes enormous amounts of electricity and water, and requires large data centers that generate substantial carbon emissions. Some researchers estimate that training a single cutting-edge model can emit as much carbon as several cars do over their entire lifetimes. For this reason, some people avoid using Grok or refuse to financially support xAI, believing that doing so reduces demand for energy-intensive AI systems that contribute to climate damage.

I am being pedantic here but this is mostly misleading. @Andy Masley has done great work on this already. Please check: https://andymasley.substack.com/p/ai-and-the-environment

I am unsure what to think, as I often encounter conflicting information online. Given the rapid pace of technological advancement, I am also concerned that the environmental impact of AI may increase over time. In particular, image and video generation appear to require significantly more computational resources than text generation. If these modalities become more widely used in the future, especially at large scale, AI systems may have a substantially greater environmental footprint.

Additionally, it is worth noting that Grok may differ from other AI models in this regard, as Elon Musk has made several decisions about its development and deployment that appear environmentally questionable in ways other companies have generally avoided.

In conclusion, what exactly am I expected to boycott to be considered an effective altruist, and what freedoms are still mine to enjoy?

It's worth noting that in a recent survey of EAs, respondents were "generally evenly split across dietary categories, with vegans at 25.5%, omnivores at 25.4%, reducetarians/flexitarians at 24.1%, and vegetarians at 20.3%." So I don't think that suggests one is quite expected to be vegan (or even non-omnivore) to be considered an EA.

In general, you've identified three reasons someone might want to boycott a company or industry:

  • They may feel the company doesn't align with their values: e.g., "purchasing Coca-Cola products feels like indirectly supporting policies they find unjust."  That's fine, but this specific motivation is not about creating change in the world per se (although it can be mixed with other reasons). It's closer to sustaining what one perceives as a personal ethical obligation. That's really outside the scope of what EA focuses on and can safely be left for individual action as appropriate.
  • They may be trying to pressure the company to change its actions. Now we are talking about action that may potentially be effective. But how good is the evidence that these particular boycotts are effective in changing corporate behavior? And how much would participation by a few thousand EAs change the odds of a given boycott succeeding or failing?
    • On the other side of the ledger, there are some real costs to engaging in this sort of boycott. It would be necessary to expend community resources to decide which corporations were behaving badly enough to potentially warrant a boycott, and which boycotts would be potentially effective for EAs to engage in. That would pull attention away from other things to some extent. Community members whose preferred causes were not selected for community action might feel miffed. Some of this stuff (e.g., the BDS movement) is extremely controversial and would risk fracturing the community.
    • In the end, there are other communities who work on these issues, and EAs who think boycotting is potentially worthwhile can certainly refer to those communities' work.
  • Finally, they may be trying to reduce the harm caused by the activity. I think you're right to say that some boycotts employ this theory of action. However, the connection between the consumer activity and the harm in question is stronger with consumption of animal products than with some of your examples. In other cases, community members would probably say that the harms caused by each individual consumer's actions are different in magnitude.
    • That being said, there are probably some historical and/or idiosyncratic reasons behind the attention paid to individual dietary change, which people who were EAs earlier in the movement could address better than I.
    • There could also be some strategic justification for paying attention to individual dietary issues, given that animal welfare is a major cause area based on the usual criteria. Out of the universe of possible consumer-related actions, it's reasonable for advocates to focus on the ones that best supplement their day jobs. A bunch of omnivores may find it difficult to work effectively with the broader animal-welfare movement, or to get the public to take them seriously on animal-welfare issues. Moreover, eating animals could plausibly result in cognitive dissonance that inhibits one's ability to think optimally about animal-related issues.

When I referred to boycotting Nestlé and Coca-Cola, my primary focus was on the basic dynamics of supply and demand. If consumers continue to purchase chocolate produced with slave labor, increased demand will logically require more exploited laborers and more total labor to meet that demand. The underlying principle seems similar to the reasoning behind veganism: purchasing animal products contributes, at least marginally, to the continued production of those products. Please correct me if I am mistaken in drawing this parallel. Likewise, it is often stated that producing one liter of Coca-Cola requires approximately two liters of water. If Coca-Cola’s operations reduce water availability in communities that already struggle with access, it seems reasonable to ask whether consumers bear some indirect responsibility—e.g., if a person buys a two-liter bottle of Coca-Cola, does that effectively correspond to four liters of water extracted from a community that may have needed it for agriculture or drinking?

However, I am interested in your view on which kinds of actions should be considered morally permissible and which should be regarded as morally obligatory. I do not believe we should, as some critiques phrase it, adopt “the life goals of dead people” and simply attempt to avoid all entanglement with harm, yet I also find it notable that issues such as widespread contempt for Nestlé or the extensive discussions about ethical and fair-trade chocolate seem largely overlooked in this forum. This is surprising given how readily veganism is embraced. I am not attempting to diminish the moral weight of animal suffering, but I do sometimes worry that it is invoked in a way that unintentionally marginalizes concerns about human suffering.

I also find it striking that, according to the statistics shared, a substantial portion of the EA community is neither vegetarian nor vegan. This raises questions about the criteria by which individuals consider themselves part of the effective altruism movement, although I recognize there are no strict requirements or definitive rules—ultimately, many of these norms function more like Schelling points, such as the commonly referenced 10% donation pledge.

The underlying principle seems similar to the reasoning behind veganism: purchasing animal products contributes, at least marginally, to the continued production of those products. Please correct me if I am mistaken in drawing this parallel. 

There is likely similarity in at least some cases, but it may be attenuated.

Given that Nestle presumably has significant fixed costs, it is unclear what the effect of a small group of consumers boycotting would be. You presumably have Nestle and non-Nestle chocolate. Most consumers are indifferent between the two, while some consumers refuse to buy the Nestle product. If some consumers start refusing the Nestle product, it has two choices. One, it can cut back production. Two, it can sell its "excess" production to consumers who don't care where their chocolate comes from (or to other companies which then sell cocoa-based products to consumers). I don't have a good sense of how much one consumer's boycott actually reduces Nestle's production, but I suspect it is far from a 1:1 reduction.

(If you're wondering whether this logic also applies to vegan consumption decisions -- yes, it does to some extent even though the consumer is reducing industry-wide demand. For instance, reducing one's consumption of chicken by 1 pound is expected to reduce chicken production by about 0.76 pounds.)

My understanding is that soda is produced locally, so I'd be looking for evidence that my Coca-Cola consumption counterfactually affected water availability outside my community (which does not, to my knowledge, experience water shortages). I'd also have to consider the adverse effects of what I drank instead -- maybe drinking Pepsi has less negative environmental effect, but it's not plausible that it has no effect. And switching to quality apple juice (not from concentrate) might be worse, due to the distances the juice would travel.

Figuring out the actual impact of these boycotts would require a lot of modeling (and perhaps some data that would be difficult to obtain). But I suspect the actual impact is significantly less than many people participating in the boycott assume.
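To give a flavour of what even the crudest version of that modeling might look like, here is a back-of-envelope sketch. Every input is a hypothetical placeholder (the participation figure, the per-person consumption, and the pass-through fraction), and a serious estimate would also need to account for substitution, i.e. what boycotters buy instead.

```python
# Crude back-of-envelope sketch of a boycott's expected effect on production.
# All inputs are hypothetical placeholders; a real model would need data on
# each, plus an estimate of the harm done by whatever boycotters buy instead.

def expected_production_cut(boycotters: int,
                            annual_units_per_person: float,
                            passthrough: float) -> float:
    """Expected annual reduction in the target's production, in the same units
    as annual_units_per_person."""
    return boycotters * annual_units_per_person * passthrough

# Hypothetical example: 50,000 boycotters each forgoing 3 kg of a product per
# year, with half of the forgone demand translating into forgone production.
print(expected_production_cut(50_000, 3.0, 0.5))  # -> 75000.0 kg per year
```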

However, I am interested in your view on which kinds of actions should be considered morally permissible and which should be regarded as morally obligatory.

I don't know. Philosophers have spilled much ink on that question, and I've never found an answer I find satisfying.

I also find it notable that issues such as widespread contempt for Nestlé or the extensive discussions about ethical and fair-trade chocolate seem largely overlooked in this forum. 

That they are in widespread circulation is arguably a good reason not to focus on them here! I knew fair-trade chocolate was a thing, and I suspect most readers here did too. We know where to go for information on the subject if we want to change our consumption patterns. But information about shrimp welfare? There aren't many other places to hear about that.

Attention is a limited resource, and there is much evil and suffering in the world. All altruistically-minded communities pick and choose what their focus issues and methods are. They would collapse from incoherence otherwise. Individuals are likewise constrained lest they experience burnout. 

Moreover, giving focus to things that aren't very effective would be mildly corrosive to the spirit of EA. In part, the movement grew out of a belief that traditional charities and donors can put far too much stock in what makes the charity/donor feel righteous/ethical (irrespective of actual magnitude of results), or what gives the charity/donor higher status in society. I do think it is important to maintain strong boundaries against those impulses.

In the end, my own lightly-held assumption is that most boycotts of the sort you describe are not effective enough in producing significant enough changes in production to be worth diverting community attention from more effective actions. The lack of attention to them here suggests most of the user base would agree. But the beauty of the Forum is that you can run the numbers and present a model explaining why that tentative view is incorrect. 

IMO the real answer is that veganism is not an essential part of EA philosophy; it just happens to be correlated with it due to the large number of people in animal advocacy. Most EA vegans and non-vegans think that their diet is a small portion of their impact compared to their career, and it's not even close! Every time you spend an extra $5 finding a restaurant with a vegan option, you could help 5,000 shrimp instead. Vegans have other reasons, like non-consequentialist ethics, virtue signaling or self-signaling, or just a desire not to eat the actual flesh/body fluids of tortured animals.

If you have a similar emotional reaction to other products, it seems completely valid to boycott them, although as you mention there can be significant practical burdens, both in adjusting one's lifestyle to avoid such products and in judging whether the claims of marginal impact are valid. Being vegan is not obligatory in my culture, and neither should boycotts be -- unless the marginal impact of the boycott is larger than that of any other life choice, which is essentially never true.
