(This post arose out of confusion I had when considering "neutrality against making happy people". Consider this an initial foray into exploring suffering-happiness tradeoffs, which is not my background; I'd gladly welcome pointers to related work if it sounds like this direction has already been considered.)
There are two approaches to trading off suffering and happiness that don't sit right with me when considering astronomical scenarios.
My impression is that Linear Tolerance is fairly common among EAs (please correct me if I'm wrong). For example, this is my understanding of most uses of "net benefit", "net positive", and so on: it's a linear tolerance of . This seems fine for the quantities of suffering/happiness we encounter in the present and near future, but in my opinion it becomes unpalatable...
Some really talented people and I have recently been working on a nonprofit in Denmark. I have been documenting our process so far, and I invite you to check out the latest installment.
In this episode, I share how we went from 1 to 4 co-founders.
If I had to distill what I've learned since last time, my (very brief) experience is that it is surprisingly easy to rally people to work for a common cause.
I have tried to find co-founders in the past for commercial/for-profit projects and the difference is staggering.
This was also apparent in the response to my posts on Reddit. Immediately after I posted there, people reached out asking how they could get involved and what they could help with. If I had known I would get such a response, I would have been better prepared, because the truth is I didn't yet know what I needed help with.
Thanks for reading! :-)
In October I wrote a post about an idea I was trying to get off the ground: a platform that would collect pledges from donors to opposing political candidates and then send matching amounts from both sides to effective charities. This post is a status update on our efforts and a summary of our main challenges.
From Charles Kenny, of the Center for Global Development, comes a book I think might be a great introduction to some EA-ish topics for kids in the ~10-16 age range.
The last eighteen months have shattered any pretense that global development can be taken as given. As ‘impatient optimist’ Bill Gates declared “The COVID-19 pandemic has not only stopped progress — it's pushed it backwards.” Beyond health, the COVID-19 crisis increased global poverty as well as national level inequality and cut into education.
But it is a sign of the rapid progress that the world was making that even after this catastrophic shock, the estimates are that the number of people living below the $1.90/day poverty line in 2020 may rise back only to the level of five years prior. Over the longer term, for all
Note: I'm crossposting this because I often find myself referring to it in conversation. I often hear ideas or proposals that fail to account for the surprising amount of detail reality contains. Some of my own ideas and proposals fail for the same reason.
While the stereotype of people in EA as head-in-the-clouds philosophers doesn't fit most people I've met in the movement, we should still recognize those tendencies in ourselves when they arise, and remember how small details can multiply or reverse the impact we expect to have.
(Put another way, I think of this post as a clear introduction to crucial considerations.)
My favorite lines:
You might hope that these surprising details are irrelevant to your mission, but not so. Some of them will end up being key.
You might also hope that the important details will be obvious when you run into them, but not so. Such details aren’t automatically visible, even
We've all failed at times. It seems like the stakes are especially high for us as EAs because we're trying to make a difference in the world, and failure means not having as much impact as we could. At the same time, EA sometimes entails doing high-risk, high-value projects that have a 99% chance of having no impact but a 1% chance of making a huge difference. I'm curious to hear about your experiences with failure, how you've dealt with failure, and your suggestions for how EAs can deal with the possibility of failure.
I'm interested in questions of resource allocation within the community of people trying seriously to do good with their resources. The cleanest case is donor behaviour, and I'll focus on that, but I think it's relevant for thinking about other resources too (saliently, allocation of talent between projects, or for people considering whether to earn to give). I'm particularly interested to identify allocation/coordination mechanisms which result in globally good outcomes.
I think from a commonsense perspective, just finding people and projects that are doing good and giving them resources is a reasonable strategy. You should probably be somewhat responsive to how desperately they seem to need money.
The ideas of "effective altruism" might change your conception of what "doing good things" means (e.g. perhaps you now assess this in terms...
We hereby announce a new meta-EA institution - "Naming What We Can".
We believe in a world where every EA organization and any project has a beautifully crafted name. We believe in a world where great minds are free from the shackles of the agonizing need to name their own projects.
To name and rename every EA organization, project, thing, or person. To alleviate any suffering caused by name-selection decision paralysis
Using our superior humor and language articulation prowess, we will come up with names for stuff.
We are a bunch of revolutionaries who believe in the power of correct naming. We translated over a quintillion distinct words from English to Hebrew. Some of us have read all of Unsong. One of us even read the whole bible. We spent countless fortnights debating the ins and outs of our own org’s title - we Name What We Can.
We're here for...