Jackson Wagner

Scriptwriter for RationalAnimations @ https://youtube.com/@RationalAnimations
3756 karma · Working (6-15 years) · Fort Collins, CO, USA

Bio

Scriptwriter for RationalAnimations!  Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc.  Also a big fan of EA / rationalist fiction!

Comments (366)

To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):

There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), or beneficial policies in third-world countries like salt iodization or lead paint elimination.

Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think-tanks). This often includes enthusiasm for denser ("yimby") housing construction, reforming how science funding and academia work in order to speed up scientific progress (such as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".

Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).

One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.

I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 100,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'll help convert/conquer all the civilizations of the middle east and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

Reposting here a recent comment of mine listing socialist-adjacent ideas that at least I personally am a lot more excited about than socialism itself.
 
* * *

FYI, if you have not yet heard of "Georgism" (see this series of blog posts on Astral Codex Ten), you might be in for a really fun time!  It's a fascinating idea that aims to reform capitalism by reducing the amount of rent-seeking in the economy, thus making society fairer and more meritocratic (because we are doing a better job of rewarding real work, not just rewarding people who happen to be squatting on valuable assets) while also boosting economic dynamism (by directing investment towards building things and putting land to its most productive use, rather than just bidding up the price of land).

A few other weird optimal-governance schemes that have socialist-like egalitarian aims but are actually (or at least partially) validated by our modern understanding of economics:

  • using prediction markets to inform institutional decision-making (see this entertaining video explainer), and the wider field of wondering if there are any good ways to improve institutions' decisions
  • using quadratic funding to optimally*  fund public goods without relying on governments or central planning. (*in theory, given certain assumptions, real life is more complicated, etc etc)
  • pigouvian taxes (like taxes on cigarettes or carbon emissions).  Like georgist land-value taxes, these attempt to raise funds (for providing public goods either through government services or perhaps quadratic funding) in a way that actually helps the economy (by properly pricing negative externalities) rather than disincentivizing work or investment.
  • various methods of trying to improve democratic mechanisms to allow people to give more useful, considered input to government processes -- approval voting, sortition / citizen's assemblies, etc
  • conversation-mapping / consensus-building algorithms like pol.is & community notes
  • not exactly optimal governance, but this animated video explainer lays out GiveDirectly's RCT-backed vision of how it's actually pretty plausible that we could solve extreme poverty by just sending a ton of money to the poorest countries for a few years, which would probably actually work because 1. it turns out that most poor countries have a ton of "slack" in their economy (as if they're in an economic depression all the time), so flooding them with stimulus-style cash mostly boosts employment and activity rather than just causing inflation, and 2. after just a few years, you'll get enough "capital accumulation" (farmers buying tractors, etc) that we can taper off the payments and the countries won't fall back into extreme poverty + economic depression
  • the dream (perhaps best articulated by Dario Amodei in sections 2, 3, and 4 of his essay "machines of loving grace", but also frequently touched on by Carl Shulman) of future AI assistants that improve the world by actually making people saner and wiser, thereby making societies better able to coordinate and make win-win deals between different groups.
  • the concern (articulated in its negative form at https://gradual-disempowerment.ai/, and in its positive form at Sam Altman's essay Moore's Law for Everything), that some socialist-style ideas (like redistributing control of capital and providing UBI) might have to come back in style in a big way, if AI radically alters humanity's economic situation such that the process of normal capitalism starts becoming increasingly unaligned from human flourishing.
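The quadratic funding idea in the list above has a simple core formula: each project's subsidized funding level is the square of the sum of the square roots of its individual contributions. Here's a toy sketch (the function name and example numbers are my own, purely illustrative):

```python
import math

def quadratic_match(projects):
    """For each project (a list of individual contributions), compute the
    quadratic-funding subsidy: (sum of sqrt(c_i))^2 minus the raw total.
    Many small donors attract a much larger match than one big donor
    giving the same total -- that's the egalitarian property at work."""
    results = {}
    for name, contributions in projects.items():
        raw = sum(contributions)
        ideal = sum(math.sqrt(c) for c in contributions) ** 2
        results[name] = {"raw": raw, "match": ideal - raw}
    return results

# 100 donors giving $1 each vs. one donor giving $100:
out = quadratic_match({"broad": [1.0] * 100, "narrow": [100.0]})
# "broad" gets a $9,900 match; "narrow" gets $0.
```

(Of course, as the asterisk above notes, this is only "optimal" in theory; real deployments like Gitcoin have to bolt on defenses against collusion and sybil attacks.)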

Left-libertarian EA here -- I'll always upvote posts along the lines of "I used to be socialist, but now have seen the light"!


Agreed that it's a weird mood, but perhaps inevitable.

In terms of the tension between running PR campaigns while "not cooperating with other people's altruistic PR campaigns": insofar as attention is ultimately a fixed resource, it's an intrinsically adversarial situation between different attempts to capture peoples' attention.  (Although there are senses in which this is not true -- many causes are often bundled together in a political alliance.  And there could even be a broader cultural shift towards people caring more about behaving ethically, which would perhaps "lift all boats" in the do-gooder PR-campaign space!)  Nevertheless, given the mostly fixed supply of attention, it certainly seems fine to steal eyeballs for thoughtful, highly-effective causes that would otherwise be watching Tiktok, and it seems similarly fine to steal eyeballs for good causes that would otherwise have gone to dumb, counterproductive causes (like the great paper-straw crusade).  After that, it seems increasingly lamentable to steal eyeballs from increasingly reasonably-worthy causes, until you get to the level of counterproductive infighting among people who are all trying hard to make the world a better place.  Of course, this is complicated by the fact that everyone naturally thinks their own cause is worthier than others.  Nevertheless, I think some causes are worthier than others, and fighting to direct attention towards the worthiest causes is a virtuous thing to do -- perhaps even doing one's civic duty as a participant in the "marketplace of ideas".

In terms of the inequality between organizers (who are being high-impact only because others are low impact) vs consumers whose behavior is affected:

  • This is omnipresent everywhere in EA, right?  Mitigating x-risks is only high-impact because the rest of the world is neglecting it so badly!
  • Are we cruelly "stealing their impact"?  I mean, maybe??  But this doesn't seem so bad, because other people don't care as much about impact.  Conversely, some causes are much better than EA at going viral and raising lots of shallow mass awareness -- but this isn't so terrible from EA's perspective, because EA doesn't care as much about going viral.
  • But talk of "stealing impact" is weird and inverted... Imagine if everyone turned EA and tried to do the most high-impact thing.  In this world, it might be harder to have very high impact, but this would hardly be cause for despair, because the actual world would be immensely better off!  It seems perverse to care about imagined "impact-stealing" rather than the actual state of the world.
  • It also seems like a fair deal insofar as the organizers have thought carefully and worked hard (a big effort), while it's not like the consumers are being coerced into doing menial low-impact gruntwork for long hours and low pay; they're instead making a tiny, nearly unconscious choice between two very similar options.  In a way, the consumers are doing marginal charity, so their impact is higher than it seems.  But asking people to go beyond marginal charity and make costlier sacrifices (ie, join a formal boycott, or consciously keep track of long lists of which companies are good versus bad) seems like more of an imposition.

Re: Nestle in particular, I get the spirit of what you're saying, although see my recent long comment where I try to think through the chocolate issue in more detail.  As far as I can tell, the labor-exploitation problems are common to the entire industry, so switching from Nestle to another brand wouldn't do anything to help??  (If anything, possibly you should be switching TOWARDS nestle, and away from companies like Hershey's that get a much higher % of their total revenue from chocolate?)

I think this spot-check about Nestle vs cocoa child labor (and about Nestle vs drought, and so forth) illustrates my point that there are a lot of seemingly-altruistic PR campaigns that actually don't do much good.  Perhaps those PR campaigns should feel bad for recruiting so much attention only to waste it on a poorly-thought-out theory of impact!

Hi; thanks for this thoughtful reply!

I agree that with chocolate and exploited labor, the situation is similar to veganism insofar as if you buy some chocolate, then (via the mechanisms of supply and demand) that means more chocolate is gonna be harvested (although not necessarily harvested by that particular company, right? so I think the argument works best only if the entire field of chocolate production is shot through with exploited labor?).  Although, as Toby Chrisford points out in his comment, not all boycott campaigns are like this.

Thoughts on chocolate in particular

Reading the wikipedia page for chocolate & child labor, I agree that this seems like a more legit cause than "water privatization" or some of the other things I picked on.  But if you are aiming for a veganism-style impact through supply and demand, it makes more sense to boycott chocolate in general, not a specific company that happens to make chocolate.  (Perplexity says that Nestle controls only a single-digit percentage of the world's chocolate market, "while the vast majority is produced by other companies such as Mars, Mondelez, Ferrero, and Hershey" -- nor is Nestle even properly described as a chocolate company, since only about 15% of their revenue comes from chocolate!  More comes from coffee, other beverages, and random other foods.)

In general I just get the feeling that you are choosing what to focus on based on which companies have encountered "major controversies" (ie charismatic news stories), rather than making an attempt to be scope-sensitive or think strategically.

"With something like slave labor in the chocolate supply chain, the impact of an individual purchase is very hard to quantify."

Challenge accepted!!!  Here are some random fermi calculations that I did to help me get a sense of scale on various things:

  • Google says that the average american consumes 100 lbs of chicken a year, and broiler chickens produce about 4 lbs of meat, so that's 25 broiler chickens per year.  Broiler chickens only live for around 8 weeks, so 25 chickens per year works out to about four broiler chickens living in misery in a factory farm at any given time, per american.  Toss in 1 egg-laying hen to produce about 1 egg per day, and that's five chickens per american.
    • How bad is chicken suffering?  Idk, not that bad IMO, chickens are pretty simple.  But I'm not a consciousness scientist (and sadly, nor is anybody else), so who knows!
  • Meanwhile with chocolate, the average american apparently consumes about 15 pounds of chocolate per year.  (Wow, that's a lot, but apparently europeans eat even more??) The total worldwide market for chocolate is 16 billion pounds per year.  Wikipedia says that around 2 million children are involved in child-labor for harvesting cocoa in West Africa, while Perplexity (citing this article) estimates that "Including farmers’ families, workers in transport, trading, processing, manufacturing, marketing, and retail, roughly 40–50 million people worldwide are estimated to depend on the cocoa and chocolate supply chain for their income or employment."
    • So the average American's share of global consumption (15 / 16 billion, or about 1 billionth) is supporting the child labor of 2 million / 1 billion = 0.002 West African children.  Or, another way of thinking about this is that (assuming child laborers work 12-hour days every day of the year, which is probably wrong but idk), the average American's yearly chocolate consumption supports about 9 hours of child labor, plus about 180 hours of labor from all the adults involved in "transport, trading, processing, manufacturing, marketing, and retail", who are hopefully mostly all legitly-employed.
  • Sometimes for a snack, I make myself a little bowl of mixed nuts + dark chocolate chips + blueberries.  I buy these little 0.6-pound bags of dark chocolate chips for $4.29 at the grocery store (which is about as cheap as it's possible to buy chocolate); each one will typically last me a couple months.  It's REALLY dark chocolate, 72% cacao, so maybe in terms of child-labor-intensity, that's equivalent to 4x as much normal milk chocolate, so child-labor-equivalent to like 2.5 lbs of milk chocolate?  So each of these bags of dark chocolate involves about 1.5 hours of child labor.
    • The bags cost $4.29, but there is significant consumer surplus involved (otherwise I wouldn't buy them!)  Indeed, I'd probably buy them even if they cost twice as much!  So let's say that the cost of my significantly cutting back my chocolate consumption is about $9 per bag.
    • So if I wanted to reduce child labor, I can buy 1 hour of a child's freedom at a rate of about $9 per bag / 1.5 hours per bag = $6 per hour.  (Obviously I can only buy a couple hours this way, because then my chocolate consumption would hit zero and I can't reduce any more.)
      • That's kind of expensive, actually!  I only value my own time at around $20 - $30 per hour!
      • And it looks doubly expensive when you consider that givewell top charities can save an african child's LIFE for about $5000 in donations -- assuming 50 years life expectancy and 16 hours awake a day, that's almost 300,000 hours of being alive versus dead.   Meanwhile, if a bunch of my friends and I all decided to take the hit to our lifestyle in the form of foregone chocolate consumption instead of antimalarial bednet donations, that would only free up something like 833 hours of an african child doing leisure versus labor (which IMO seems less dramatic than being alive versus dead).
      • One could imagine taking a somewhat absurd "offsetting" approach, by continuing to enjoy my chocolate but donating 3 cents to Against Malaria Foundation for each bag of chocolate I buy -- therefore creating 1.8 hours of untimely death --> life in expectation, for every 1.5 hours of child labor I incur.
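For what it's worth, the whole back-of-envelope chain above fits in a few lines of Python. (All the inputs are just the rough figures quoted above, so treat the outputs as order-of-magnitude estimates only.)

```python
# Fermi estimate of child-labor-hours embodied in American chocolate consumption.
# Every constant here is a rough figure from the discussion above, not real data.
US_LBS_PER_YEAR = 15              # avg american chocolate consumption (lbs/yr)
WORLD_LBS_PER_YEAR = 16e9         # total world chocolate market (lbs/yr)
CHILD_LABORERS = 2e6              # est. West African cocoa child laborers
CHILD_HOURS_PER_YEAR = 12 * 365   # assuming 12-hour days, every day of the year

share = US_LBS_PER_YEAR / WORLD_LBS_PER_YEAR             # ~1 billionth of market
children_supported = share * CHILD_LABORERS              # ~0.002 children
hours_per_year = children_supported * CHILD_HOURS_PER_YEAR   # ~8-9 hours/yr

BAG_MILK_EQUIV_LBS = 2.5          # 0.6 lb of 72% dark ~ 2.5 lbs milk-equivalent
COST_PER_BAG = 9.0                # $4.29 sticker price plus consumer surplus
hours_per_bag = hours_per_year * BAG_MILK_EQUIV_LBS / US_LBS_PER_YEAR   # ~1.4
dollars_per_child_hour = COST_PER_BAG / hours_per_bag    # ~$6-7 per hour freed
```

(Running the numbers exactly gives about 8.2 hours per year and roughly $6.50 per child-labor-hour averted, slightly off from the rounded figures in the bullets, but the conclusion is the same: foregoing chocolate is an expensive way to buy back a child's time.)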

Sorry to be "that guy", but is child labor even bad in this context?  Is it bad enough to offset the fact that trading with poor nations is generally good?

  • Obviously it's bad for children (or for that matter, anyone), who ought to be enjoying their lives and working to fulfill their human potential, to be stuck doing tedious, dangerous work. But, it's also bad to be poor!
  • Most child labor doesn't seem to be slavery -- the same wikipedia page that cites 2 million child laborers says there are estimated to be only 15,000 child slaves. (And that number includes not just cocoa, but also cotton and coffee.)  So, most of it is more normal, compensated labor. (Albeit incredibly poorly compensated by rich-world standards -- but that's everything in rural west africa!)
  • By analogy with classic arguments like "sweatshops are good actually, because they are an important first step on the ladder of economic development, and they are often a better option for poor people than their realistic alternatives, like low-productivity agricultural work", or the infamous Larry Summers controversy (no, not that one, the other one.  no, the OTHER other one.  no, not that one either...) about a World Bank memo speculating about how it would be a win-win situation for developed countries to "export more pollution" to poorer nations, doing the economic transaction whereby I buy chocolate and it supports economic activity in west africa (an industry employing 40 million people, only 2 million of whom are child laborers) seems like it might be better than not doing it.  So, the case for a personal boycott of chocolate seems weaker than a personal boycott of factory-farmed meat (where many of the workers are in the USA, which has much higher wages and much tighter / hotter labor markets).

"I am genuinely curious about what you consider to fall within the realm of morally permissible personal actions."

This probably won't be a very helpful response, but for what it's worth:

  • I don't think the language of moral obligations and permissibility and rules (what people call "deontology") is a very good way to think about these issues of diffuse, collective, indirect harms like factory farming or labor exploitation.
    • As you are experiencing, deontology doesn't offer much guidance on where to draw the line when it comes to increasingly minor, indirect, or incidental harms.
    • It's also not clear what to do when there are conflicting effects at play -- if an action is good for some reasons but also bad for other reasons.
    • Deontology doesn't feel very scope-sensitive -- it just says something like "don't eat chocolate if child labor is involved!!" and never mind whether the industry is 100% child labor or 0.01% child labor.  This kind of thinking tends to do the "Copenhagen interpretation of ethics" thing, where you just pile on more and more rules in an attempt to avoid being entangled with bad things, when instead the focus should be on identifying the most important bad things and figuring out how to spend extra energy addressing those, even while letting some more minor goals slide.
  • I think utilitarianism / consequentialism is a better way to think about diffuse, indirect harms, because it's more scope-sensitive and it seems to allow for more grey areas and nuance. (Deontology just says that you must do some things and mustn't do other forbidden things, and is neutral on everything else.  But consequentialism rates actions on a spectrum from super-great to super-evil, with lots of medium shades in-between.)  It's also better at balancing conflicting effects -- just add them all up!
  • Of course, trying to live ordinary daily life according to 100% utilitarian thinking and ethics feels just as crazy as trying to live life according to 100% deontological thinking.  Virtue ethics often seems like a better guide to the majority of normal daily-life decisionmaking: try to behave honorably, try to be caring and prudent, et cetera, doing your best to cultivate and apply whatever virtues seem most relevant to the situation at hand.
  • Personally, although I philosophically identify as a pretty consequentialist EA, in real life I (and, I think, many people) rely on kind of a mushy combination of ethical frameworks, trying to apply each framework to the area where it's strongest.
    • As I see it, that's virtue ethics for most of ordinary life -- my social interactions, how I try to motivate myself to work and stay healthy, what kind of person I aim to be.
    • And I try to use consequentialist / utilitarian thinking to figure out "what are some of the MOST impactful things I could be doing, to do the MOST good in the world".  I don't devote 100% of my efforts to doing this stuff (I am pretty selfish and lazy, like to have plenty of time to play videogames, etc), but I figure if I spend even a smallish fraction of my time (like 20%) aimed at doing whatever I think is the most morally-good thing I could possibly do, then I will accomplish a lot of good while sacrificing only a little.  (In practice, the main way this has played out in my actual life is that I left my career in aerospace engineering in favor of nowadays doing a bunch of part-time contracting to help various EA organizations with writing projects, recruiting, and other random stuff.  I work a lot less hard in EA than I did as an aerospace engineer -- like I said, I'm pretty lazy, plus I now have a toddler to take care of.)
    • I view deontological thinking as most powerful as a coordination mechanism for society to enforce standards of moral behavior.  So instead of constantly dreaming up new personal moral rules for myself (although like everybody I have a few idiosyncratic personal rules that I try to stick to), I try to uphold the standards of moral behavior that are broadly shared by my society.  This means stuff like not breaking the law (except for weird situations where the law is clearly unjust), but also more unspoken-moral-obligation stuff like supporting family members, plus a bunch of Kantian-logic stuff like respecting norms, not littering, etc (ie, if it would be bad if everyone did X, then I shouldn't do X).
      • But when it comes to pushing for new moral norms (like many of the proposed boycott ideas) rather than respecting existing moral norms, I'm less enthusiastic.  I do often try to be helpful towards these efforts on the margin, since "marginal charity" is cheap.  (At least I do this when the new norm seems actually-good, and isn't crazy virtue-signaling spirals like for example the paper-straws thing, or counterproductive in other ways like just sapping attention from more important endeavors or distracting from the real cause of a problem.)  But it usually doesn't seem "morally obligatory" (ie, in my view of how to use deontology, "very important for preserving the moral fabric of society and societal trust") to go to great lengths to push super-hard for the proposed new norms.  Nor does it usually seem like the most important thing I could be doing.  So beyond a token, marginal level of support for new norms that seem nice, I usually choose to focus my "deliberately trying to be a good person" effort on trying to do whatever is the most important thing I could be doing!

Thoughts on Longtermism

I think your final paragraph is mixing up two things that are actually separate:

1. "I'm not denying [that x-risks are important] but these seem like issues far beyond the influence of any individual person. They are mainly the domain of governments, policymakers... [not] individual actions."

2. "By contrast, donating to save kids from malaria or starvation has clear, measurable, immediate effects on saving lives."

I agree with your second point that sadly, longtermism lacks clear, measurable, immediate effects.  Even if you worked very hard and got very lucky and accomplished something that /seems/ like it should be obviously great from a longtermist perspective (like, say, establishing stronger "red phone"-style nuclear hotline links between the US and Chinese governments), there's still a lot of uncertainty about whether this thing you did (which maybe is great "in expectation") will actually end up being useful (maybe the US and China never get close to fighting a nuclear war, nobody ever uses the hotline, so all the effort was for naught)!  Even in situations where we can say in retrospect that various actions were clearly very helpful, it's hard to say exactly HOW helpful.  Everything feels much more mushy and inexact.

Longtermists do have some attempted comebacks to this philosophical objection, mostly along the lines of "well, your near-term charity, and indeed all your actions, also affect the far future in unpredictable ways, and the far future seems really important, so you can't really escape thinking about it".  But also, on a much more practical level, I'm very sympathetic to your concern that it's much harder to figure out where to actually donate money to make AI safety go well than to improve the lives of people living in poor countries or help animals or whatever else -- the hoped-for paths to impact in AI are so much more abstract and complicated, one would have to do a lot more work to understand them well, and even after doing all that work you might STILL not feel very confident that you've made a good decision.  This very situation is probably the reason why I myself (even though I know a ton about some of these areas!!) haven't made more donations to longtermist cause areas.

But I disagree with your first point -- that it's beyond the power of individuals to influence x-risks or do other things to make the long-term future go well, and that it's instead the domain of governments.  And I'm not just talking about individual crazy stories like that one time when Stanislav Petrov might possibly have saved the world from nuclear war.  I think ordinary people can contribute in a variety of reasonably accessible ways:

  • I think it's useful just to talk more widely about some of the neglected, weird areas that EA works on -- stuff like the risk of power concentration from AI,  the idea of "gradual disempowerment" over time, topics like wild animal suffering, the potential for stuff like prediction markets and reforms like approval voting to improve the decisionmaking of our political institutions, et cetera.  I personally think this stuff is interesting and cool, but I also think it's societally beneficial to spread the word about it.  Bentham's Bulldog is, I think, an inspiring recent example of somebody just posting on the internet as a path to having a big impact, by effectively raising awareness of a ton of weird EA ideas.
  • If you're just like "man, this x-risk stuff is so fricking confusing and disorienting, but it does seem like in general the EA community has been making an outsized positive contribution to the world's preparedness for x-risks", then there are ways to support the EA community broadly (or other similar groups that you think are doing good) -- either through donations, or potentially through, like, hosting a local EA meetup, or (as I do) trying to make a career out of helping random EA orgs with work they need to get done.
  • Some potential EA cause areas are niche enough that it's possible to contribute real intellectual progress by, again, just kinda learning more about a topic where you maybe bring some special expertise or unique perspective to an area, and posting your own thoughts / research on a topic.  Your own post (even though I disagree with it) is a good example of this, as are so many of the posts on the Forum!  Another example that I know well is the "EcoResilience Initiative", a little volunteer part-time research project / hobby run by my wife @Tandena Wagner -- she's just out there trying to figure out what it means to apply EA-style principles (like prioritizing causes by importance, neglectedness, and tractability) to traditional environmental-conservation goals like avoiding species extinctions.  Almost nobody else is doing this, so she has been able to produce some unique, reasonably interesting analysis just by sort of... sitting down and trying to think things through!

Now, you might reasonably object: "Sure, those things sound like they could be helpful as opposed to harmful, but what happened to the focus on helping the MOST you possibly can!  If you are so eager to criticize the idea of giving up chocolate in favor of the hugely more-effective tactic of just donating some money to GiveWell top charities, then why don't you also give up this speculative longtermist blogging and instead try to earn more money to donate to GiveWell?!"  This is totally fair and sympathetic.  In response I would say:

  • Personally I am indeed convinced by the (admittedly weird and somewhat "fanatical") argument that humanity's long-term future is potentially very, very important, so even a small uncertain effect on high-leverage longtermist topics might be worth a lot more than it seems.
    • I also have some personal confidence that some of the random, very-indirect-path-to-impact stuff that I get up to is indeed having some positive effects on people and isn't just disappearing into the void.  But it's hard to communicate what gives me that confidence, because the positive effects are kind of illegible and diffuse rather than easily objectively measurable.
    • I also happen to be in a life situation where I have a pretty good personal fit for engaging a lot with longtermism -- I happen to find the ideas really fascinating, have enough flexibility that I can afford to do weird part-time remote work for EA organizations instead of remaining in a normal job like my former aerospace career, et cetera.  I certainly would not advise any random person on the street to quit their job and try to start an AI Safety substack or something!!
  • I do think it's good (at least for my own sanity) to stay at least a little grounded and make some donations to more straightforward neartermist stuff, rather than just spending all my time and effort on abstract longtermist ideas, even if I think the longtermist stuff is probably way better.

Overall, rather than the strong and precise claim that "you should definitely do longtermism, it's 10,000x more important than anything else", I'd rather make the weaker, broader claims that "you shouldn't just dismiss longtermism out of hand; there is plausibly some very good stuff here" and that "regardless of what you think of longtermism, I think you should definitely try to adopt more of an EA-style mindset in terms of being scope-sensitive and seeking out what problems seem most important/tractable/neglected, rather than seeing things too much through a framework of moral obligations and personal sacrifice, or being unduly influenced by whatever controversies or moral outrages are popular / getting the most news coverage / etc."

That's an interesting way to think about it!  Unfortunately this is where the limits of my knowledge about the animal-welfare side of EA kick in, but you could probably find more info about these protest campaigns by searching some animal-welfare-related tags here on the Forum, or going to the sites of groups like Animal Ask or Hive that do ongoing work coordinating the field of animal activists, or by finding articles / podcast interviews with Lewis Bollard, who is the head grantmaker for this stuff at Open Philanthropy / Coefficient Giving, and has been thinking about the strategy of cage-free campaigns and related efforts for a very long time.

I'm not an expert about this, but my impression (from articles like this: https://coefficientgiving.org/research/why-are-the-us-corporate-cage-free-campaigns-succeeding/ , and websites like Animal Ask) is that the standard EA-style corporate campaign involves:

  • a relatively small number of organized activists (maybe, like, 10 - 100, not tens of thousands)...
  • ...asking a corporation to commit to some relatively cheap, achievable set of reforms (like switching their chickens to larger cages or going cage-free, not like "you should all quit killing chickens and start a new company devoted to ecological restoration")
  • ...while also credibly threatening to launch a campaign of protests if the corporation refuses
  • Then rinse & repeat for additional corporations / additional incremental reforms (while also keeping an eye out to make sure that earlier promises actually get implemented).

My impression is that this works because the corporations decide that it's less costly for them to implement the specific, limited, welfare-enhancing "ask" than to endure the reputational damage caused by a big public protest campaign.  The efficacy doesn't depend at all on a threat of boycott by the activists themselves.  (After all, the activists are probably already 100% vegan, lol...)

You might reasonably say "okay, makes sense, but isn't this just a clever way for a small group of activists to LEVERAGE the power of boycotts?  the only reason the corporation is afraid of the threatened protest campaign is because they're worried consumers will stop buying their products, right?  so ultimately the activists' power is deriving from the power of the mass public to make individual personal-consumption decisions".

This might be sorta true, but I think there are some nuances:

  • I don't think the theory of change is that activists would protest and this would kick off a large formal boycott -- most people don't ever participate in boycotts, etc.  Instead, I think the idea is that protests will create a vague haze of bad vibes and negative associations with a product (ie the protests will essentially be "negative advertisements"), which might push people away from buying even if they're not self-consciously boycotting.  (Imagine you usually go to Chipotle, but yesterday you saw a news story about protestors holding pictures of gross sad caged farmed chickens used by Chipotle -- yuck!  This might tilt you towards going to a nearby McDonald's or Panda Express instead that day, even though ethically it might make no sense if those companies use equally low-welfare factory-farmed chicken.)
  • Corporations apparently often seem much more afraid of negative PR than it seems they rationally ought to be, based on how much their sales would realistically decline (ie, not much) as a result of some small protests.  This suggests that much of the power of protests flows through additional channels beyond the immediate impact on product sales.
  • Even if in a certain sense the cage-free activists' strategy relies on something like a consumer boycott (though less formal than a literal boycott -- more like "negative advertising"), that still indicates that it's wise to pursue the leveraged activist strategy rather than the weaker strategy of just trying to be a good individual consumer and doing a ton of personal boycotts.
  • In particular, a key part of the activists' power comes from their ability to single out one corporation and focus their energies on it for a limited period of time until the company agrees to the ask.  This is the opposite of the OP's diffuse strategy of boycotting everything a little bit (as just one individual) all the time.
  • It's also powerful that the activists can threaten big action versus no action over one specific decision the corporation can make, thus creating maximum pressure on that decision.  Contrast this with OP's approach -- if Nestle cleaned up their act in one or two areas, OP would probably still be boycotting them until they also cleaned up their act in some unspecified additional number of areas.
  • We've been talking about animal welfare, which, as some other commenters have noted, has a particularly direct connection to personal consumption, so the idea of something like a boycott at least kinda makes sense, and maybe activists' power is ultimately in part derived from boycott-like mechanisms.  But there are many political issues where the connection to consumer behavior is much more tenuous and indirect.  Suppose you wanted to reduce healthcare costs in the USA -- would it make sense to try to get people to boycott certain medical procedures (but people mostly get surgeries when they need them, not on a whim) or insurers (but for most people insurance comes as a fixed part of their job's benefits package)??  Similarly, if you're a YIMBY trying to get more homes built, who do you boycott?  The problem is really a policy issue of overly-restrictive zoning rules and laws like NEPA, not something you could hope to target by changing your individual consumption patterns.  This YIMBY example might seem like a joke, but OP was seriously suggesting boycotting Nestle over the issue of California water shortages, which, like NIMBYism, is really mostly a policy failure caused by weird farm-bill subsidies and messed-up water-rights laws that incentivize water waste -- how is pressure on Nestle, a European company, supposed to fix California's busted agricultural laws??  Similarly, they mention boycotting Coca-Cola soda because Coca-Cola does business in Israel.  How are reduced sales for the Coca-Cola company supposed to change the decisions of Bibi Netanyahu and his ministers??  One might as well refuse to buy Lenovo laptops or Huawei phones in an attempt to pressure Xi Jinping to stop China's ongoing nuclear-weapons buildup... surely there are more direct paths to impact here!