That's great, but the less actively I'm involved in the process the more likely I am to just ignore it. That might just be me though.
This is great!! Pretty sure I'd be giving more if it felt more like a coordinated effort and less like I have to guess who needs the money this time.
I guess my only concern is: how to keep donors engaged with what's going on? It's not that I wouldn't trust the fund managers, it's more that I wouldn't trust myself to bother researching and contributing to discussions if donating became as convenient as choosing one box out of 4.
This, by the way, is what certificates of impact are for - although it's not a practical suggestion right now, because they've only been implemented at the toy level.
The idea is to create a system where your comparative advantage, in terms of knowledge and skills, is decoupled from your value system. Two people can each work for whichever org most needs their skills, even when a different org better matches their values, and agree to swap impact with each other. (Plus the much more complex versions of that setup that would occur in real life.)
Are you counting donations from people who aren't EAs, or who are only relatively loosely so?
Yes. Looking at the survey data was an attempt to deal with this.
I was also hesitant about CFAR, although for a slightly different reason - around half its revenue is from workshops, which looks more like people purchasing a service than altruism as such.
Good point regarding GPP: policy work is another of those grey areas between meta and non-meta.
Not sure about 80K: their list of career changes mostly looks like earning to give and working at EA orgs - I don't see big additional classes of "direct work" being influenced. It's possible people reading the website are changing their career plans in entirely different directions, but I have my doubts.
Not sure what you mean by e.g.3.
I totally get the point regarding GWWC and future earnings, but I'm not sure how to account for it. GWWC do a plausible-looking analysis that suggests expected future donations are worth 10x total donations to date. But I'm not sure that we can "borrow from the future" in this way when doing metaness estimates, and if we do I think we'd need a much sharper future discounting function to account for exponential growth of the movement.
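To make the discounting point concrete, here's a toy sketch. The 10x multiplier is from GWWC's analysis; the discount rates and the 30-year horizon are my hypothetical illustrations, not GWWC's assumptions:

```python
# Toy model: present value of a stream of future pledge donations.
# Discount rate and horizon are hypothetical, for illustration only.
def discounted_multiplier(annual_rate: float, years: int) -> float:
    """Present value of $1/year for `years` years: sum of 1/(1+r)^t."""
    return sum(1 / (1 + annual_rate) ** t for t in range(1, years + 1))

# A mild 5% discount over 30 years values $1/yr at ~15.4x one year's giving;
# a sharper 20% discount cuts that to ~5x. The choice of discount function
# dominates any "borrowing from the future" estimate.
print(discounted_multiplier(0.05, 30))
print(discounted_multiplier(0.20, 30))
```

The point being: whether GWWC's 10x looks conservative or generous depends almost entirely on how sharply you discount, which is exactly why I'm wary of using it in metaness estimates.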
Good point regarding OPP: my direct-charity estimate only included the top recommended charities of GW, GWWC and ACE. The OPP grants come to an additional $7.8m in 2014 ("additional" because it isn't going to direct charities I've already counted, and isn't meta either).
Anyway, taking all this into consideration I get $3.2m meta, $62m non-meta for a ratio of 5%. (Plus $2.1 million in "grey area"). So we're getting close to agreement!
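Spelling out that arithmetic (a trivial sketch; either way of taking the ratio rounds to the ~5% quoted):

```python
# Figures ($m) from the estimates above; grey-area orgs kept separate.
meta = 3.2
non_meta = 62.0
grey = 2.1

print(f"meta / non-meta: {meta / non_meta:.1%}")               # ~5.2%
print(f"meta share of total: {meta / (meta + non_meta):.1%}")  # ~4.9%
```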
Some other caveats:
Regarding the survey, do you feel that it's biased specifically towards those who prefer meta, or just those who identify as EA?
I can't emphasize the exponential growth thing enough. A look at the next page on this forum shows CEA wanting to hire another 13 people. Meanwhile GiveWell were boasting of having grown to 18 full-time staff back in March; now they have 30.
But the direct charities are growing like crazy too! It all makes it very easy to be off by a factor of 2 (and maybe I am in my above reasoning) simply by using out of date figures. Anyone business-minded know about the sort of reasoning and heuristics to use under growth conditions?
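One crude heuristic: compound an out-of-date figure forward at an assumed growth rate. Everything here is a hypothetical illustration - the monthly rate is loosely anchored on GiveWell's staff going from 18 to 30, and I've assumed that took roughly 8 months:

```python
# Sketch: scale a stale budget figure to "now" under assumed exponential
# growth. The 8-month gap and staff-as-proxy-for-budget are assumptions.
def scale_to_now(old_value: float, months_old: float, monthly_growth: float) -> float:
    """Compound an old figure forward by months_old at the given monthly rate."""
    return old_value * (1 + monthly_growth) ** months_old

# 18 -> 30 staff over ~8 months implies roughly 6.6%/month growth.
monthly = (30 / 18) ** (1 / 8) - 1
print(f"implied monthly growth: {monthly:.1%}")
print(f"a $2.1m figure from 8 months ago scales to ~${scale_to_now(2.1, 8, monthly):.1f}m")
```

Which illustrates the factor-of-2 worry: at rates like these, a figure less than a year old can already be off by more than half.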
I'm helping prepare a spreadsheet listing organizations and their budgets, which at some point will be turned into a pretty visualization...
Anyway, according to this sheet, meta budgets total around $4.2m (that's $2.1m GiveWell, $0.8m CEA and $0.8m CFAR, plus a bunch of little ones). That's more than "a couple", but direct charities' budgets total $52m so we're still shy of 10%.
(Main caveats to this data: It's not all for exactly the same year, so anything which is taking off exponentially will skew it. Also I haven't checked the data particularly carefully).
I've also been counting x-risk organizations as not meta. That one's a bit ambiguous - on the one hand they do a lot of "priorities research and marketing", but on the other hand there isn't really an object-level tier of organizations beneath them that works in the same areas.
As to what self-identified effective altruists are up to: a quick look at the 2014 EA survey only yields the number of donations to each organization, not the amount of money... but if we go with that, 20% of the donations are to organizations I've counted as "meta".
So my working conclusion would be that if you favour a 50% split across the community, you're looking good for putting all your eggs in meta. If you favour a 10-20% split, you may need to look a bit more carefully.
A final note of caution: you can only push in one direction. If you favoured a 20% meta split, and (just suppose it turned out that) only 5% of donations in your reference class went to meta, it doesn't automatically mean that you should donate to meta. There might be some other category, e.g. direct animal welfare charities, which were also under-represented according to your ideal pie. It's then up to you to decide which needs increasing more urgently.
Multiple donors could form coalitions to fund a single donee
Or to fund multiple donees.
Let me know if you're expecting a surge of Facebook joins (as a result of the Doing Good Better book launch and EA Global) and want help messaging people.
A Mindful Approach to Tackling those Yucky Tasks You’ve Been Putting Off
For many of us, procrastination is a problem. This can take many forms, but we’ll focus on relatively simple tasks that you’ve been putting off long-term.
Epistemic status: speculative, n=1 stuff.
Yucky tasks may be thought of in several ways:
The connection to EA?
EA is not about following well-trodden paths. We’re all trying to do something different and new, and stepping out of comfort zones.
For some of us, we may be exceptionally talented or productive in some domains, but find some of the tasks elusive or hard to get a grip on.
So what happens?
Most commonly avoidance. This can go on until there’s some kind of shift: maybe we avoid something until it becomes super urgent, or maybe we just wait until our feelings around it become clearer.
Forcing ourselves to jump right in, tackling the task "forcefully" using all our available willpower. Though this can get the job done, it can be unpleasant and unsustainable - we'll remember all that negativity for the next time, and thus make the next task more difficult. It's especially disruptive when working with others.
What’s an alternative?
This talk is about discovering and mapping our mental landscapes surrounding a problem. Tasks, and their associated thoughts and emotions, can be mapped out in a rich web. Often, different sub-tasks will be associated with different emotions, and seeing this laid out can help with getting our emotional bearings, as well as practical problem-solving.
The result is unpacking a complex, muddied anxiety or resentment into something cleaner and truer. We’re still at early stages but we’re hoping to build this technique out into something robust that can help those of us in the EA movement overcome the blocks to personal effectiveness.
(I would like to be part of the late session)