# Recent Discussion

(This post arose out of confusion I had when considering "neutrality against making happy people". Consider this an initial foray into exploring suffering-happiness tradeoffs, which is not my background; I'd gladly welcome pointers to related work if it sounds like this direction has already been considered.)

There are two approaches to trading off suffering and happiness that don't sit right with me when considering astronomical scenarios[1].

1. Linear Tolerance: Set some (possibly large) constant N. Then x amount of suffering is offset by y amount of happiness so long as y ≥ N·x.

My impression is that Linear Tolerance is pretty common among EAers (and please correct this if I'm wrong). For example, this is my understanding of most usages of "net benefit", "net positive", and so on: it's a linear tolerance of N = 1[2]. This seems ok for the quantities of suffering/happiness we encounter in the present and near-future, but in my opinion becomes unpalatable...
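The Linear Tolerance rule above can be written as a one-line check. This is a minimal sketch; the names `offsets`, `n`, `suffering`, and `happiness` are my own reconstruction of the post's stripped inline notation, not terms from the original:

```python
def offsets(suffering: float, happiness: float, n: float) -> bool:
    """Linear Tolerance: x units of suffering are offset by y units of
    happiness iff y >= n * x, for a fixed (possibly large) constant n.
    (Variable names are reconstructions of the post's stripped math.)"""
    return happiness >= n * suffering

# "Net positive" as used above corresponds to n = 1:
print(offsets(suffering=3, happiness=5, n=1))   # True: 5 >= 1 * 3
print(offsets(suffering=3, happiness=5, n=10))  # False: 5 < 10 * 3
```

The post's worry is about what happens to this rule at astronomical scales, where a fixed constant n may no longer capture our intuitions.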

Hey Forum,

Some really talented people and I have recently been working on a nonprofit in Denmark. I have been documenting our process so far, and I invite you to check out the latest installment.

In this episode, I share how we went from 1 to 4 co-founders.

## If you just want the takeaways, here you go:

If I had to distill what I have learned since last time into one point, it is this: in my (very brief) experience, it is surprisingly easy to rally people to work for a common cause.

I have tried to find co-founders in the past for commercial/for-profit projects and the difference is staggering.

This was also apparent in the response to my posts on Reddit. Immediately after I wrote there, people reached out asking how they could help out, how they could get involved, and what they could help with. If I had known I would get such a response, I would have been better prepared, because the truth is I didn't yet know what I needed help with.

I'm in the very early stages of starting a nonprofit too and looking forward to reading more about your experience!

In October I wrote a post about an idea I was trying to get off the ground: a platform that would collect pledges from donors to opposing political candidates and then send matching amounts from both sides to effective charities. This post is a status update on our efforts and a summary of our main challenges.

• Our team now has three people: Anand Shah (undergraduate at UChicago), Yash Upadhyay (undergraduate at UPenn; Y Combinator Summer ‘19), and Eric Neyman (graduate student at Columbia; that’s me!). We’ve decided to call ourselves “Pact”.
• We’ve spoken to several lawyers and have gotten useful information about how it makes sense to structure the pledges/donations. However, these issues are far from resolved.
• We’ve also spoken to a few campaign consultants. (We were particularly interested in speaking to Republicans, since our intuition -- which the Republican consultants we talked to agreed with -- was that it would be more
...
UnexpectedValues: Yeah, there are various incentives issues like this one that are definitely worth thinking about! I wrote about some of them in this blog post: https://ericneyman.wordpress.com/2019/09/15/incentives-in-the-election-charity-platform/ The issue you point out can be mostly resolved by saying that half of a pledger's contributions will go to their chosen candidate no matter what -- but this has the unfortunate effect of decreasing the amount of money that gets sent to charity. My guess is that it's not worth it (though maybe doing some nominal amount like 5% is worth it, so as to discourage e.g. liberals who care mostly just about charity from donating to the Republican candidate).
tylermaule: I see now that this and a couple other points were mentioned in [Repledge++](https://sideways-view.com/2016/10/31/repledge/). One more I would add to the list: 'relative advantage' in cash vs. percentage terms could be a sticking point. In the case of a $10M/$8M split, giving $2M/$0 to the respective candidates seems unfair to candidate B, because $2M is infinitely more than $0 in percentage terms. Say this money was going to ad buys: instead of running 100 vs. 80 ad spots, candidate A now runs 20 spots vs. zero for candidate B, and is the only candidate on the airwaves. I would argue that a fair split would be $1.111M vs. $0.889M, but I'm not sure that supporters of candidate A would agree. Of course, if you assume that the platform is only a tiny fraction of total campaign contributions this is much less significant, but still worth a thought.
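The arithmetic behind that fair split can be checked in a few lines. The pledge figures ($10M/$8M) come from the comment above; the function name `proportional_split` and the mechanism details (matched pledges cancel and go to charity, the remainder is split in proportion to pledge totals) are my own sketch of the proposal, not Pact's actual implementation:

```python
def proportional_split(a_pledged: float, b_pledged: float):
    """Split the unmatched remainder between candidates in proportion to
    their pledge totals, rather than giving it all to the leading side."""
    matched = min(a_pledged, b_pledged)       # cancels out on both sides
    to_charity = 2 * matched                  # matched money goes to charity
    remainder = (a_pledged + b_pledged) - to_charity
    total = a_pledged + b_pledged
    a_share = remainder * a_pledged / total   # leader keeps a pro-rata share
    b_share = remainder * b_pledged / total
    return to_charity, a_share, b_share

charity, a, b = proportional_split(10e6, 8e6)
print(charity, round(a), round(b))  # 16000000.0 1111111 888889
```

With $10M vs. $8M pledged, $16M goes to charity and the $2M remainder splits 10:18 vs. 8:18, giving the $1.111M/$0.889M figures from the comment.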

Yeah -- I think it's unlikely that Pact would become a really large player and have distortionary effects. If that happens, we'll solve that problem when we get there :)

The broader point that the marginal dollar might be more valuable to one campaign than to another is an important one. You could try to deal with this by making an actual market, where the ratio at which people trade campaign dollars isn't fixed at 1, but I think that will complicate the platform and end up doing more harm than good.

A friend asked about effective places to give. He wanted to donate through his payroll in the UK. He was enthusiastic about it, but that process was not easy.

1. It wasn't particularly clear whether GiveWell or the EA Development Fund was better, and each seemed to direct to the other in a way that felt at times sketchy.
2. It wasn't clear if payroll giving was an option.
3. He found it hard to find GiveWell's cost-effectiveness spreadsheet.

It feels like making donations easy should be a core concern of both GiveWell and EA Funds, and, to be honest, my experience made me a little embarrassed.

From Charles Kenny, of the Center for Global Development, comes a book I think might be a great introduction to some EA-ish topics for kids in the ~10-16 age range.

As noted below, the book is free to download, but you can also buy a hard copy.

The last eighteen months have shattered any pretense that global development can be taken as given. As ‘impatient optimist’ Bill Gates declared, “The COVID-19 pandemic has not only stopped progress — it's pushed it backwards.” Beyond health, the COVID-19 crisis increased global poverty as well as national-level inequality, and cut into education.

But it is a sign of the rapid progress that the world was making that even after this catastrophic shock, the estimates are that the number of people living at the $1.90/day poverty line in 2020 may rise back to the level of only five years prior. Over the longer term, for all

...

Good point, thanks. However, even if EE and wild animal welfare advocates do not conflict in their intermediary goals, their ultimate goals do collide, right? For the former, habitat destruction is an evil and habitat restoration is a good, even if it's not immediately effective.


Note: I'm crossposting this because I often find myself referring to it in conversation. I often hear ideas or proposals that fail to account for the surprising amount of detail reality contains. Some of my own ideas and proposals fail for the same reason.

While the stereotype of people in EA as head-in-the-clouds philosophers doesn't fit most people I've met in the movement, we should still recognize those tendencies in ourselves when they arise, and remember how small details can multiply or reverse the impact we expect to have.

(Put another way, I think of this post as a clear introduction to crucial considerations.)

My favorite lines:

You might hope that these surprising details are irrelevant to your mission, but not so. Some of them will end up being key.

[...]

You might also hope that the important details will be obvious when you run into them, but not so. Such details aren’t automatically visible, even

...

We've all failed at times. It seems like the stakes are especially high for us as EAs because we're trying to make a difference in the world, and failure means not having as much impact as we could. At the same time, EA sometimes entails doing high-risk, high-value projects that have a 99% chance of having no impact but a 1% chance of making a huge difference. I'm curious to hear about your experiences with failure, how you've dealt with failure, and your suggestions for how EAs can deal with the possibility of failure.

Ben Pace: [What Failure Looks Like](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like), a hit LW post. (Note: does not at all answer your question.)

Yup ;)

Answer by MichaelA: I think one aspect of how to deal with the possibility of failure is how to deal with the possibility of accidental harm / downside risk - i.e., the possibility that an action would make the world worse in some ways (which may or may not outweigh the positive effects). [Here](https://forum.effectivealtruism.org/posts/EMKf4Gyee7BsY2RP8/michaela-s-shortform?commentId=GLJdSeFpLQhg6p8cK) is a collection of sources on that topic which people might find useful. (But this is of course not a complete answer to your question, since the possibility of failure is not always about downside risk. It can also be about actions turning out to be more "expensive" (e.g., time-consuming) than one would like, actions turning out to achieve their intended objectives to a lesser extent than one expected/hoped, or other actions turning out to have probably been better choices than the action one took.)

## tl;dr:

I'm interested in questions of resource allocation within the community of people trying seriously to do good with their resources. The cleanest case is donor behaviour, and I'll focus on that, but I think it's relevant for thinking about other resources too (saliently, allocation of talent between projects, or for people considering whether to earn to give). I'm particularly interested to identify allocation/coordination mechanisms which result in globally good outcomes.

## Starting point: "just give to good things" view

I think from a kind of commonsense perspective, just trying to find things that are doing good things and giving them resources is a reasonable strategy. You should probably be a bit responsive to how desperately they seem to need money.

The ideas of "effective altruism" might change your conception of what "doing good things" means (e.g. perhaps you now assess this in terms...

MichaelA: Hmm, I don't think this seems quite right to me. I think I've basically never thought about moral uncertainty or epistemic humility when buying bread or getting a haircut, and I think that that's been fine. And I think in writing this post you're partly in the business of trying to resolve things like "donors of last resort" issues, and that that's one of the sorts of situations where explicitly remembering the ideas of moral uncertainty and epistemic humility is especially useful, and where explicitly remembering those ideas is one of the most useful things one can do. This seems right to me, but I don't think this really pushes against my suggestion much. I say this because I think the goals here relate to fixing certain problems, like "donors of last resort" issues, rather than thinking of what side dishes go best with (implicit or explicit) impact markets. So I think what matters is just how much value would be added by reminding people about moral uncertainty and epistemic humility when trying to help resolve those problems - even if implicit impact markets would make those reminders less helpful, I still think they'd be among the top 3-10 most helpful things. (I don't think I'd say this if we were talking about actual, explicit impact markets; I'm just saying it in relation to implicit impact markets without infrastructure.)

I guess I significantly agree with all of the above, and I do think it would have been reasonable for me to mention these considerations.  But since I think the considerations tend to blunt rather than solve the issues, and since I think the audience for my post will mostly be well aware of these considerations,  it still feels fine to me to have omitted mention of them? (I mean, I'm glad that they've come up in the comments.)

I guess I'm unsure whether there's an interesting disagreement here.

We hereby announce a new meta-EA institution - "Naming What We Can".

# Vision

We believe in a world where every EA organization and project has a beautifully crafted name. We believe in a world where great minds are free from the shackles of the agonizing need to name their own projects.

# Goal

To name and rename every EA organization, project, thing, or person. To alleviate any suffering caused by name-selection decision paralysis.

# Mission

Using our superior humor and language articulation prowess, we will come up with names for stuff.