Before we had kids, Jeff and I fostered a couple of cats. One had feline AIDS and was very skinny. Despite our frugal grocery budget at the time, I put olive oil on her food, determined to get her healthier. I knew that stray cats were not a top global priority, and that this wasn’t even the best way of helping stray cats, but it was what I wanted to do.

. . . . .

The bike path near where I live has a lot of broken glass on the ground nearby. My family likes to go barefoot in the summer, and a lot of people walk their dogs there. Last summer I started bringing a container when we went out and cleaning a patch of ground each time. Picking up glass gave me something a little goal-oriented to do while the kids were playing. The kids got excited about spotting pieces of glass and pointing them out to me. Neighbors would stop and join me for a while.

. . . . .

I don’t want to hold these up as an example of impact. They’re not, or at least not examples of any important impact. I think there are way too many narratives encouraging people to practice small acts of kindness that produce equally small benefits. Women especially may be encouraged to see their life’s impact as resting on their service to friends, family, and local community.

That’s why I felt kind of worried to find myself engaging in these small acts. I want people to look at the big picture and aim high. If you’ve been taught that “doing your part” meant recycling and a bit of volunteering, you’ll need to find something more ambitious if you want to make a bigger difference.

But it can be painful to stare at the scale of the world’s problems, and I don’t recommend doing it all the time. Not every part of your life will be optimized for maximum altruistic impact.

Some of those small acts can be pretty satisfying. Humans do best when we’re in connection with other humans. And we feel mastery when we have small goals that we can meet. Doing your best for a stray cat, bringing the snacks to a game night, going to a rally, or helping a neighbor restart their car are achievable in a way that “reduce the risk of nuclear war” is not. They also strengthen your relationships with those around you.

(One year when my coworkers and I were preparing for the EA Global conference, one of our speakers went for a walk in Oakland and was gone for a surprisingly long time. It turned out someone had flagged him down and asked him to help move her furniture. He said it was refreshing to spend half an hour doing something so obviously not the best use of his time.)

As Gregory Lewis argues, it’s unlikely that any one action is going to be optimal for all your goals. The food that’s tastiest is unlikely to also be the most nutritious and also the most ethically produced. So you might need to make some tradeoffs, and acknowledge that both chocolate and dark leafy greens are good, but not for the same things.

Prioritize big problems. Spend a good chunk of your money and/or your time working on them.

But in your other time, do what’s refreshing and restorative to you. Some of that will be purely hedonic — sleeping in, music, cake. And some might be small acts of kindness that make your day brighter, even though they’re not saving the world.



I can vouch for the value of this approach. My apartment complex has an informal back path (paved by generations of feet) that people use to get to the nearby university. It also gets some occasional use from the local unhoused population. Over time, lots of litter had built up in a certain patch (you could hardly see the ground).

I noticed that I felt a sense of annoyance every time I walked through the tiny valley of trash (not annoyance at humans, but at the interruption in what was otherwise a nice miniature nature walk). So one day I bought some surgical gloves and trash bags, put on a podcast, and cleaned the path. It took less than two hours to remove 98% of the litter (the rest being things like bottle caps that would have been laborious to track down and collect). 

The result: I got a clean path, a few hundred other people got a clean path, and I got to think of myself as "the kind of person who cleans up the commons," which was more personally satisfying than any donation I made that year (because I am irrational).

(because I am irrational).

Keen to know how this follows. Do you mean that people who are not EAs are irrational? P.S. Hope I'm not nitpicking.

I was using very casual language here, and there might be a better word than "irrational".

The complex concept I was casually representing: "It seems good to be someone who feels more satisfaction when they do more good for more people. This isn't how my own feelings of satisfaction work, which makes me less motivated to do more good for more people than I wish I were."

"Irrational" refers to the desire to feel a different way than I actually feel, with a hint of "this is especially awkward because I've had plenty of time to reflect on these feelings and try to change them". Maybe "unreasonable" is a better word, or even "imperfect".

Thank you, this makes sense.

I somehow feel like I understand why humans tend to do this. I'll write it up one day and let you know!

There are many reasons that humans tend to do this, and I'm very familiar with them! I wrote part of my thesis on this topic.

Nevertheless, my feelings remain. The problem isn't ignorance, but (the concept I was trying to represent with "irrationality").

I read your post and decided to write up my thoughts anyway. It might be a weird take, but I would really appreciate your opinion on it, if you have the time. I've spent way too much time unsure about the best way to explain it, yet felt the need to explain it in so many places, so it would mean a lot to me if you read it. It also describes why I'm kinda skeptical of AI alignment being solvable.


I just found this from your post. 

I’m not going to say “System One” and “System Two”, because that’s cliche, so instead I’ll say “warm giving” and “cool giving” to reflect the fact that giving is driven by a mix of “cool” motivations (an abstract desire to do good, careful calculation of your impact, strategic giving that will make people like you) and “warm” motivations (empathy toward the recipient, personal connections to the charity, a habit of giving a dollar to anyone who asks).

I somehow feel like System 2 has no genuine desires of its own; it simply borrows them from System 1.

System 1 = desires + primitive tools to trade-off different desires

These tools aren't super advanced: they can't do math or formal logic and are mostly heuristics. They often throw up random answers when a situation doesn't cleanly fit exactly one heuristic, and decisions guided purely by System 1 will lose pennies.

System 1 is also super-inflexible - you can't simply choose to rewire all of your System 1, this is beyond the reach of free will. (Maybe neurosurgery can change it.)

System 2 = advanced reasoning tools

System 2 just borrows desires from System 1. You won't do cold-hearted calculation about what saves the most lives unless you've already had System-1 experiences of other people's pain or joy, and a System-1 desire to help people.

The problem with System 2 is that, no matter how much math or logic it throws at the problem, it can't find a consistent way of trading off different desires that is also consistent with System 1. Why? Because System 1 was never coherent in the first place.

It also can't choose to just ignore System-1 and formulate its all-important theory of ethics because it has no desires (/values/ethics) of its own, nor any objective way to compare them. Ground truth on such matters comes from System-1.

(I see ethics as a subset of desires btw, I don't think we should assume something fundamentally different between the desire to eat sugar and the desire to protect a friend.)

Evolutionarily this could be because System-1 is a bunch of assorted hacks evolved over millions of years in different ways, whereas System-2 is one single evolutionary change between apes and humans, and hence a lot cleaner.


This is also why I'm not convinced CEV exists or that AI alignment is solvable in any meaningful sense.

I'd love to know why I'm wrong or what you feel about this.

I don't have much time to respond here and haven't thought much about my thesis since I wrote it almost seven years ago (and would probably find much of it embarrassing now in the light of the replication crisis + my better grasp on philosophy). A few notes:

  • I think that humans do something akin to CEV as part of our daily lives — we experience impulses, then rein them back ("take a deep breath", "think of what X would say", "put yourself in their shoes"...) It seems like we're usually happier with our choices when we've taken more time to think them over (though this isn't always true — people do argue themselves out of doing what they later believe they should have done). I've always pictured CEV as an extremely advanced version of this, but one that we would still recognize as descending from that original process.
  • I also see ethics as a subset of desires. 
  • I don't think "System 1" and "System 2" are real things in a sense that would make "System 2 has no desires" a sensible statement. Humans do a lot of things with our minds, those things have properties, and we try to split them up based on those properties — but this doesn't mean that the resulting clusters have to refer to real things. Even when I'm doing my deepest "EA thinking", there's always a thread of "why" I can pull that leads back to something I care about "just because", and it seems like the same thing would be true for every other mental process. (And by "a thread", I mean "multiple threads", and sometimes I'll look at where all the threads end and wind up needing to make an arbitrary call about which "just because" thing to prioritize.)

Thank you for replying!

I don't think "System 1" and "System 2" are real things in a sense that would make "System 2 has no desires" a sensible statement.

Fair enough. I guess I meant that our desires have evolved into our neural circuitry as part of System 1. And we can't use thinking alone (System 2 activation) to decide what our goals are; we first need to have first-hand experiences of pleasure or pain.

I think that humans do something akin to CEV as part of our daily lives

You're right that humans do end up doing this decision-plus-retroactive-revision process; I just don't think it always leads to one consistent place, since different humans can justify different things to themselves (or even the same human at different points in time). There are a bunch of different things that System 1 strongly reacts to (our "core values"), and I don't think our brains naturally have a way of trading them off against each other that doesn't lose pennies. System 2 tries its best to ignore these inconsistencies, but then where it ends up is random, because there's no real way to decide what to ignore.

We don't often encounter situations where we're forced to make such trades, but we can in theory, as in trolley problems.

Edit: deleted a mention of the "prisoner's dilemma" that was included by mistake.

Oh okay, nice to know. I will check it out.

Although I agree with the message that fuzzy-feely altruism can benefit your own well-being and your motivation to do high-impact altruism, I cringed a bit at the title. Please consider feeding only sterilised stray cats. Thanks ^^'

Otherwise, by feeding a fertile stray cat you contribute to the creation of more starving, diseased cats. So even for fuzzy-feely altruism it is important not to just see what you want to see (the one moment of a pleased stray cat you interacted with), but to assess the relevant (future) consequences so as not to cause more suffering.

Actually, sterilising would be a more valuable deed than feeding, but that again may not be fuzzy-feely altruism anymore, as the cat's immediate reaction certainly will not be gratefulness. And we usually need (immediate) gratefulness from other beings to create the desired improvement in our own mental well-being and motivation.

There is also the problem of the detrimental effect of a cat population on birds.

https://www.nature.com/articles/ncomms2380

We estimate that free-ranging domestic cats kill 1.3–4.0 billion birds and 6.3–22.3 billion mammals annually. Un-owned cats, as opposed to owned pets, cause the majority of this mortality.

To be clear, this was an indoor foster cat, formerly a stray.

Thanks for this post!  I do sometimes think helping others in ways that are "inconsequential on big-picture scales" is a good way to remember why we bother with altruism. Life can be beautiful, humans can do nice things, occasionally a defenceless animal gets rescued from terrible suffering (and gets neutered of course haha) etc. If life was just about producing endless generations of nothing but joyless and hopeless struggle for most living beings, what would be the point in our efforts? 

How much should conflicting desires to be locally kind and globally good affect our choices about living in EA bubbles, where our locally kind choices might multiply the effectiveness of effective people? I had previously felt it was a strong reason to live in an EA bubble, but perhaps this was due to stupid reasons.

Those stupid reasons: In my previous non-EA group living arrangement, I felt frustrated by the conflict between being locally helpful and globally effective. But then when I got to the EA Hotel, I felt this conflict was resolved yet still wasn't very locally kind or helpful, so maybe the salience of this conflict only ever existed as a justification for being lazy.

I'm curious to know how other people have experienced the transition to and from EA bubbles with respect to this tension.

If you think a community has a "local kindness gap" that you can fill, and that gap seems to be reducing how well that community is doing at achieving its goals, it's reasonable to think that being a kind person in that community will end up doing more good than you'd expect to do if you were being kind in a random other community.

That said, there are also downsides to strengthening bubbles, and I'd expect (quick thoughts, haven't pondered this much) that a "locally kind person with EA inclinations" would be most effective in a place that has a small/new EA community, where the marginal value of extra (dinner hosting/event organizing/grabbing coffee with new arrivals) seems higher than in a place where there are already lots of events and chances for new folks to get involved.

I loved reading this. Thank you.

Thank you for writing this. I can relate well to the refreshing and restorative effect of small acts of kindness.

I think there are way too many narratives encouraging people to practice small acts of kindness that produce equally small benefits.

Thanks for helping me notice that I have one of those narratives floating around in my head without being questioned. Questioning it right now feels kind of sad, I really liked the idea that my small acts of considerateness will maybe potentially some day turn out to have been very important for the future of everything.