That's great, but the less actively I'm involved in the process the more likely I am to just ignore it. That might just be me though.
This is great!! Pretty sure I'd be giving more if it felt more like a coordinated effort and less like I have to guess who needs the money this time.
I guess my only concern is: how to keep donors engaged with what's going on? It's not that I wouldn't trust the fund managers, it's more that I wouldn't trust myself to bother researching and contributing to discussions if donating became as convenient as choosing one box out of 4.
This, by the way, is what certificates of impact are for, although it's not a practical suggestion right now because they've only been implemented at the toy level.
The idea is to create a system where your comparative advantage, in terms of knowledge and skills, is decoupled from your value system. Two people can each work for whichever org most needs their skills, even if the other org better matches their values, and agree to swap impact with each other. (Plus the much more complex versions of that setup that would occur in real life.)
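A minimal sketch of that two-person swap, with hypothetical names, orgs and certificate labels (nothing here comes from an actual implementation):

```python
# Toy model of a certificate-of-impact swap. Alice's skills fit OrgX but her
# values favour OrgY; for Bob it's the reverse. Each earns a certificate where
# they work, then they trade, so each ends up holding the impact they value.
holdings = {"Alice": "certificate: OrgX's impact", "Bob": "certificate: OrgY's impact"}

def swap(holdings, a, b):
    """Exchange the certificates held by a and b."""
    holdings[a], holdings[b] = holdings[b], holdings[a]
    return holdings

swap(holdings, "Alice", "Bob")
print(holdings["Alice"])  # certificate: OrgY's impact
```

The real-life versions would of course involve partial swaps, prices and many parties, which is where the complexity comes in.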
Are you counting donations from people who aren't EAs, or who are only relatively loosely so?
Yes. Looking at the survey data was an attempt to deal with this.
I was also hesitant about CFAR, although for a slightly different reason - around half its revenue is from workshops, which looks more like people purchasing a service than altruism as such.
Good point regarding GPP: policy work is another of those grey areas between meta and non-meta.
Not sure about 80K: their list of career changes mostly looks like earning to give and working at EA orgs - I don't see big additional classes of "direct work" being influenced. It's possible people reading the website are changing their career plans in entirely diff...
I can't emphasize the exponential growth thing enough. A look at the next page on this forum shows CEA wanting to hire another 13 people. Meanwhile GiveWell were boasting of having grown to 18 full-time staff back in March; now they have 30.
But the direct charities are growing like crazy too! It all makes it very easy to be off by a factor of 2 (and maybe I am in my above reasoning) simply by using out of date figures. Anyone business-minded know about the sort of reasoning and heuristics to use under growth conditions?
I'm helping prepare a spreadsheet listing organizations and their budgets, which at some point will be turned into a pretty visualization...
Anyway, according to this sheet, meta budgets total around $4.2m (that's $2.1m GiveWell, $0.8m CEA and $0.8m CFAR, plus a bunch of little ones). That's more than "a couple", but direct charities' budgets total $52m so we're still shy of 10%.
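A quick sanity check of that arithmetic (the "smaller orgs" figure is just a filler to reach the ~$4.2m total quoted above):

```python
# Budget figures in $m, as quoted above; "smaller orgs" lumps the little ones together.
meta = {"GiveWell": 2.1, "CEA": 0.8, "CFAR": 0.8, "smaller orgs": 0.5}
direct_total = 52.0

meta_total = sum(meta.values())
share = meta_total / (meta_total + direct_total)
print(f"meta: ${meta_total:.1f}m, share of all budgets: {share:.1%}")  # still shy of 10%
```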
(Main caveats to this data: It's not all for exactly the same year, so anything which is taking off exponentially will skew it. Also I haven't checked the data particularl...
Let me know if you're expecting a surge of Facebook joins (as a result of the Doing Good Better book launch and EA Global) and want help messaging people.
I'm guessing that for these to work, the ownership of certificates should end up reflecting who actually had what impact. I can think of two cases where that might not be so.
Regret swapping:
So person A ends up owning a certificate for Y, and person B ends up owning a certificate for X, even though neither of them can really be said to have "caused" that particular impact....
I've just found out that Paul Christiano and Katja Grace are already buying certificates of impact.
Just one comment: the essay asks "Why doesn’t the Gates foundation just close the funding gap of AMF and SCI?" but doesn't seem to offer an answer. The closest is 3b/c, which suggests it's a coordination problem or donor's dilemma: everyone is expecting everyone else to fund these organizations.
If that's the case, the relevant question would seem to be: what does the Gates foundation want? If the EA community finds something that GF wants that we can potentially offer (such as new high-risk high-return charities doing something totally innovative), then we can potentially do a moral trade with them.
Oh one other thing - I think the trickiest part of this system will be verifying whether someone has actually donated to a charity at the time they said they did. Every charity does it a different way.
I'm interested in moving moral economics forward in a different way: by creating some kind of online "moral market" and seeing what happens.
There are two possible systems I could implement:
I'll describe the points-based system here, as it's the one I've thought through a bit more. I presume it theoretically diverges from a certificate of impact system, but I haven't thought through exactly how.
Users have points. The total number of poi...
I'm a little surprised by some of the other claims about what EAs are like, such as (quoting Singer): "they tend to view values like justice, freedom, equality, and knowledge not as good in themselves but good because of the positive effect they have on social welfare."
It may be true, but if so I need to do some updating. My own take is that those things are all inherently valuable, but (leaving aside far future and xrisk stuff), welfare is a better buy. I can't necessarily assume many people in EA agree with me though.
There's also some confusion...
There's another response that EAs could have to the priority/ultrapoverty strand, which is to bend their utility functions so that ultrapoverty is rated as even more bad, and improvements at the ultrapoverty end would be calculated as more important. Of course, however concave the utility function is, you can still construct a scenario where the people at the ultrapoverty end would be ignored.
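To make that concrete, here's a toy version using an isoelastic (concave) utility function; the consumption levels and the beneficiary multiplier are invented purely for illustration:

```python
# Isoelastic utility with eta > 1: very concave, so marginal dollars matter
# far more at low consumption levels. All figures are illustrative only.
def utility(consumption, eta=3.0):
    return consumption ** (1 - eta) / (1 - eta)

def marginal_gain(consumption):
    """Utility gained from one extra dollar of annual consumption."""
    return utility(consumption + 1) - utility(consumption)

gain_poor = marginal_gain(300)    # someone in ultrapoverty (~$300/yr)
gain_rich = marginal_gain(3000)   # someone ten times richer

# Concavity prioritises the poorest by a factor of roughly 1000 here...
print(gain_poor / gain_rich)
# ...but help enough of the better-off and their total still dominates,
# which is the "construct a scenario" point above.
print(10_000 * gain_rich > gain_poor)
```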
I think that the priority/ultrapoverty strand of this argument is one place where you can't ignore nonhuman animals. My intuition says that they're among the worst off, and relatively cheap to help.
My first thought on reading the "Two villages" thought experiment was that the village that was easier to help would be poorer, because of the decreasing marginal value of money. If this was so, you'd want to give all your money to the poorer one if your goal was to reduce "the influence of morally arbitrary factors on people's lives".
On the other hand that gets reversed if the poorer village is the one that's harder to help. In that case fairness arguments would still seem to favour putting all your money in one village, just the opposite one to what consequentialists would favour. (So this problem can't be completely separated from the ultrapoverty one.)
One thing I find interesting about all the thought experiments is that they assume a one-donor, many-recipient model. That is, the morality of each situation is analyzed as if a single agent is making the decision.
Reality is many donors, many recipients and I think this affects the analysis of the examples. Firstly because donors influence each others' behaviour, and secondly because moral goods may aggregate on the donor end even if they don't aggregate on the recipient end. I'll try and explain with some examples:
Two villages (a): each village currently ...
I have a minor philosophical nitpick.
No sane person would say, “Well, the risk of a nuclear meltdown at this reactor is only 1 in 1000…”
There are (checks Wikipedia) 400-ish nuclear reactors worldwide, which means that if every operator followed this reasoning, the chance of a meltdown somewhere would be pretty high.
Existential risks with low probabilities don't add up in the same way. It's my belief that the magnitude of a risk equals the badness times the probability (which for xrisk comes out to very, very bad) but not everyone might agree with me, and I'm not sure the nuclear reactor example would convince them.
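For what it's worth, the arithmetic behind the reactor point, assuming the 400 reactors fail independently:

```python
# If each of ~400 reactors accepts a 1-in-1000 meltdown risk (over some fixed
# period), the chance that at least one melts down is far from negligible.
p_single = 1 / 1000
n_reactors = 400
p_any = 1 - (1 - p_single) ** n_reactors
print(f"P(at least one meltdown): {p_any:.1%}")  # roughly a one-in-three chance
```

The disanalogy with x-risk is exactly that there's only one world, so there's no "across 400 reactors" aggregation to appeal to.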
some of the Gates Foundation work is higher impact than GiveWell top charities
Hasn't GiveWell also said that large orgs tend to do so many different things that some end up being effective and others not? Does this criticism apply to the Gates Foundation?
I've got 16 people on the list and nominally made 5 pairings. In a while I'll prod people to see if they're actually talking to each other.
I think you're imagining a scenario where every organization either:
One reason this could happen would be organizational: organizations lose their sense of direction or initiative, perhaps by becoming bloated on money or dragged away from their core purpose by pushy donors. This doesn't feel stable, as you can always start new organizations, but there may be a lag of a few years between noticing that existing orgs have become rubbish and getting new ones to do useful s...
What Role Do Small-to-Medium Donors Play In the Future of Effective Altruism?
I think this fits into a bigger picture. To punch above your weight in terms of impact, you need to know something (or have a skill) that most other people don't. Currently the thing you have to know is "there's this thing called EA and earning to give". As that meme spreads, you'd expect its impact to dwindle, assuming an upper bound on the total amount of good that can be done given current resources.
The number of earning-to-givers * average good done by earning to g...
Hi Anonymous,
Really sorry to hear that you feel like that. I'm glad you find writing about it therapeutic. One thing you can try - it's worked for me - is to write down a "toolbox" of things (such as writing) that allow you to feel better about yourself when you're feeling bad.
This could even include taking 1-2 hours to criticize yourself - if that's what works for you. But having other options might help. Writing them down somewhere visible can help too.
The reason I'm bringing this up is that - for me at least - the mindframe you describe isn't ...
I was reading The Phatic and the Anti-Inductive on Slate Star Codex.
Why's this relevant?
Birthday and Christmas charity fundraisers of course!
There is a sense in which the concept of a birthday fundraiser is anti-inductive - if they worked, and everyone realised they worked, then a lot more people would be doing them and they wouldn't work so well any more.
But actually running a fundraiser feels more like phatic communication. You're really communicating very little information about the charity you want people to give money to, but people seem to apprecia...
Yes - I clicked on "save and continue" and what I got was "submit". I'd better get back to work on it, I guess!
I'd suggest Global Catastrophic Risks as a good primer. (The essays aren't written by Bostrom; he co-edited the book)
I was googling "effective altruism arrogant" and it turned up a few links which I'm posting here so I don't lose them:
Thanks - I knew they were involved in the EA Summit but I didn't know they were the sole organizers. I also knew they weren't soliciting donations. I partially retract my earlier statement about them! (Also I hope I didn't cause anyone any offense - I've met them and they're super super nice and hardworking too)
Thanks - most of those names ring a bell but The Selfish Gene is the only one I've read. I guess some of the value of reading them is gone for me now that my mind is already changed? But I'll keep them in mind :-)
I don't know if this is relevant to the criticism theme, but I found I needed to take some of Hanson's ideas seriously before becoming involved in EA; his insistence on calling everything hypocrisy was a turn-off for me, though. Are there any resources on how we evolved to be a certain way (interested in self and immediate family, signalling, etc.) that frame this as good news, because once we know it we can do better?
However, I haven't seen a smart outside person spend a considerable amount of time evaluating and criticising effective altruism.
Would they do it if we paid them?
Definitely. Some of the team at least are EA insiders and lurking on this very forum, and they'll already know about TLYCS for sure.
Another criticism: the movement isn't as transparent as you might expect. (Remember, GiveWell was originally the Clear Fund - started up not necessarily because existing charitable foundations were doing the wrong thing, but because they were too secretive).
When compiling this table of orgs' budgets, I found that even simple financial information was difficult to obtain from organizations' websites. I realise I can just ask them - and I will - but I'm thinking about the underlying attitude. (As always, I may be being unfair).
Also, what Leverage Research are...
"Giles has passed on some thoughts from a friend" is one of the things cited, so if a particular criticism isn't listed we can assume it's because Ryan doesn't know about it, not that it's inherently too low status or something. I definitely want to hear what your friends have to say!
Great idea!
Does the pamphleting have to be done on Fridays, or can it be done on pseudo-random days? (I'm thinking about distinguishing the signal from the pamphlets from, e.g., people spending more time on the Internet during weekends. Pseudo-random spikes might require fancier math to pick out though, and of course you need to remember which days you handed out pamphlets!)
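Here's roughly what I have in mind, with entirely made-up pledge numbers (a real analysis would need a proper significance test):

```python
import random

random.seed(0)  # reproducible, invented data
days = list(range(60))
pamphlet_days = set(random.sample(days, 10))  # the pseudo-randomly chosen days

# Simulate daily pledge counts: background noise plus a bump on pamphlet days.
pledges = {d: random.randint(0, 3) + (3 if d in pamphlet_days else 0) for d in days}

avg_on = sum(pledges[d] for d in pamphlet_days) / len(pamphlet_days)
avg_off = sum(pledges[d] for d in days if d not in pamphlet_days) / (len(days) - len(pamphlet_days))
print(f"pamphlet days: {avg_on:.2f} pledges/day; other days: {avg_off:.2f}")
```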
Can you ask people, when they take the pledge, how they found out about TLYCS? (This will provide an under-estimate, but it can be used to sanity-check other estimates). (Also it's a bit a...
Here's the link to the Facebook group post in case people add criticisms there.
Glad you linked to Holden Karnofsky's MIRI post. Other possibly relevant posts from the GiveWell blog:
Why we can't take expected value estimates literally (even when they're unbiased) - I remember this causing a stir on LW
There are more on a similar philosophical slant (search for "explicit expected value") but the above seem the most criticismy.
Great topic!
I think you missed this one from Rhys Southan which is lukewarm about EA: Art is a waste of time says EA
I don't see the Schambra piece as particularly vitriolic.
I don't know where to find good outside critics, but I think there's still value in internal criticism, as well as doing a good job processing the criticism we have. (I was thinking of creating a wiki page for it, but haven't got around to it yet).
Some self-centered internal criticism; I don't know how much this resonates with other people:
Is it working now? I wondered why I wasn't getting more karma ;-)
Is anybody else having problems with the image upload feature of the forum?
there's going to be some optimal level of abstraction
I'm curious what optimally practical philosophy looks like. This chart from Diego Caleiro appears to show which philosophical considerations have actually changed what people are working on:
http://effective-altruism.com/ea/b2/open_thread_5/1fe
Also, I know that I'd really like an expected-utilons-per-dollar calculator for different organizations to help determine where to give money to, which surely involves a lot of philosophy.
As a separate point, I'm not sure what % of unrestricted donations to GiveWell go to its own operations as opposed to being granted to its recommended charities.
A Mindful Approach to Tackling those Yucky Tasks You’ve Been Putting Off
For many of us, procrastination is a problem. This can take many forms, but we’ll focus on relatively simple tasks that you’ve been putting off long-term.
Epistemic status: speculative, n=1 stuff.
Yucky Tasks
Yucky tasks may be thought of several ways:
The connection to EA?
EA i...