
I have some observations and half-baked ideas about my recent donation process. They weren't important enough to include in the main post, but I want to talk about them anyway.

Cross-posted from my website.

On deference

Usually, I defer to the beliefs of other people who have spent more time on an issue than me, or who plausibly have more expertise, or who I just expect to have reasonable beliefs.

While writing my donations post, I made a conscious effort to defer less than usual. Deference might maximize the probability that I make the correct decision, but it reduces the total amount of reasoning that's happening, which is bad for the group as a whole. I want there to be more reasoning happening.

This is most relevant in my discussion of a few orgs that I disliked, which are also very popular among big EA funders. I'm 99% confident that the big funders have private information about those orgs, so maybe I should defer to them. But I'm also maybe 75% confident that if I had access to that information, it wouldn't materially change my mind. I did anticipate that the private evidence would make me like the orgs a little better, so I updated based on this anticipation and evaluated the orgs a little more favorably than I would have otherwise.
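To put toy numbers on that update (purely illustrative; the size of the shift, call it d, is something I'd have to guess): if my current assessment of an org is r, and I think there's a 25% chance the private information would raise it by d and a 75% chance it would leave it unchanged, then my expected post-information assessment is

0.75 × r + 0.25 × (r + d) = r + 0.25 × d,

so folding in the anticipated update just means evaluating the org at that slightly higher level now.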

On criticizing orgs

I am not as nice as I'd like to be. I have a habit of accidentally saying mean things that hurt people's feelings.

On the other hand, I think most people are too nice: they hurt others long-term by refusing to give them useful information that's difficult to hear.

(In theory, it's possible to never say unnecessarily mean things and to always say necessary things, but only if you have perfect communication skills. In practice, there's a tradeoff.)

I think it's a good norm that, if you're investigating an org and it opens up to you, you shouldn't take what you learn and use it against the org. I probably wouldn't criticize an org based on private information that it gave me. I did criticize some orgs, but all my criticisms were based on public information.

I think if most people wrote a donation post like mine, they'd self-censor in the interest of niceness and end up leaving out important information. I tried to avoid that, and erred more on the side of being mean (not pointlessly mean, but mean-and-truthful; or maybe I should say mean-and-accurately-conveying-my-beliefs, since I can't promise that the things I said were true).

As with my choice on deference, this was perhaps the wrong choice at an individual level but the right choice at the group level.

I did, however, focus my criticism on organizations and avoided saying negative things about specific people wherever possible.

Donation sizing

I have a donor-advised fund (DAF) that I contributed to when I was earning to give. How much of my DAF money should I donate this year? What's a reasonable spend-down rate?

I've put a lot of thought into how quickly to spend philanthropic resources, including how AI timelines affect the answer. Unfortunately, all that thinking didn't much help me answer the question.

Plus, there are some complications:

  • I have some personal savings, which I could choose to donate. Should I count them as part of my donation money?
  • I might earn significant income in the future. Right now it looks like I won't, but I might do more earning to give at some point, or I might take a direct-work job that happens to pay well. If I expect to earn more in the future, then I should spend more of my DAF now.
  • I didn't donate much money for the last few years. Should I do catch-up donations this year? Or maybe spread out my catch-up donations over the next few years?

I didn't come up with good answers to any of these questions. Ultimately I chose how much to donate based on what felt reasonable.

Diversifying donations as a trade

I had an idea based on this comment by Oliver Habryka. He describes a trade between members of the EA community where some people do object-level work (relinquishing a high-paying job) and others earn money. He argues that when this trade occurs, the people doing object-level work should have some ownership over the funds that earners-to-give have earned.

I spent a while earning to give. So arguably I should donate money to people who started out in a similar position to mine but went into direct work instead. Essentially, I should (acausally) trade with altruists who could've earned a lot of money but didn't. And because there are many such people, arguably I should split my donations across many of them instead of only donating to my #1 favorite thing.

But this argument raises some questions. Who exactly was in a "similar position to mine"? What about people who aren't members of the EA community, but who are nonetheless doing similarly valuable work? What about people who didn't have the skill set to earn a lot of money, and so never faced that choice in the first place?

I decided not to further pursue this line of reasoning because I couldn't figure out how to make sense of it. I just did the obvious thing of donating to the org(s) that looked most cost-effective on the margin.

Cooperating with the Survival and Flourishing Fund

Should I donate less money to orgs that have received grants from the Survival and Flourishing Fund (SFF)?

I want to be cooperative with SFF. If I donate less to an org that's received SFF funding, that seems uncooperative.

SFF has the S-process, which is a fancy method for allocating donations from a group of value-aligned donors who each want to be the donor of last resort, but who also want to make sure their favored orgs get funded. I could cooperate with SFF by participating in this process.
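As a rough illustration of the coordination this enables (this is only a toy sketch, not the actual S-process, which is considerably more involved; the orgs, marginal values, and budget below are made up): if the donors pool their budgets and jointly fund whichever org has the highest remaining marginal value, every chunk of money goes where it helps most, and no individual donor has to decide unilaterally whether to be the funder of first or last resort.

```python
# Toy sketch of pooled, marginal-value-first allocation (NOT the real S-process).
# All orgs, marginal-value schedules, and budget figures are invented for illustration.
from heapq import heappush, heappop

# Declining marginal value of successive $10k chunks for each org.
marginal_values = {
    "OrgA": [10, 8, 5, 2],
    "OrgB": [9, 7, 6, 1],
}
chunk_size = 10_000
pooled_budget = 50_000  # total across all participating donors

# Max-heap keyed on (negated) marginal value of each org's next unfunded chunk.
heap = []
for org, mvs in marginal_values.items():
    heappush(heap, (-mvs[0], org, 0))

allocation = {org: 0 for org in marginal_values}
while pooled_budget >= chunk_size and heap:
    neg_mv, org, idx = heappop(heap)        # most valuable remaining chunk
    allocation[org] += chunk_size
    pooled_budget -= chunk_size
    if idx + 1 < len(marginal_values[org]):  # push the org's next chunk, if any
        heappush(heap, (-marginal_values[org][idx + 1], org, idx + 1))

print(allocation)  # {'OrgA': 20000, 'OrgB': 30000}
```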

I asked them if they wanted to add my money to the S-process and they declined, so I consider myself to have officially Cooperated, and now I'm allowed to donate less to orgs that received SFF funding. I don't think SFF really cares whether its donations trade off against mine, because I have much less money than it does.
