I run the Centre for Exploratory Altruism Research (CEARCH), a cause prioritization research and grantmaking organization.
I don't have estimates for how the multiplier changes over time, though you would expect a decline, driven by the future pledging pool being less EA/zealous than earlier batches.
For the value of a *pledge* - based on analysis of the available data, donations don't appear to increase over time (for any given pledge batch), so after the relevant temporal discounts (inflation etc.), the value of a pledge is relatively front-loaded:
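As a rough illustration of that front-loading (toy discount rate and horizon only, not the actual figures from our model):

```python
# Toy illustration: a flat nominal donation stream, once discounted, is front-loaded
# in present-value terms. The discount rate and horizon are illustrative assumptions.
discount_rate = 0.04   # combined inflation and other temporal discounts (assumed)
horizon_years = 30     # assumed length of giving after the pledge

pv_by_year = [1.0 / (1.0 + discount_rate) ** t for t in range(horizon_years)]
first_decade_share = sum(pv_by_year[:10]) / sum(pv_by_year)
print(f"First 10 of {horizon_years} years: ~{first_decade_share:.0%} of total present value")
```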
Hi Nuno,
We report a crude version of uncertainty intervals at the end of the report (pg 28): taking the lower-bound estimates of all the important variables, the multiplier would be 0x, while taking the upper-bound estimates, it would be 100x.
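To give a sense of why the interval is that wide: the multiplier is essentially a product of several uncertain variables divided by costs, so the bounds compound. A toy version of the calculation, with hypothetical variable names and numbers (not the actual figures from the report):

```python
# Toy version of the crude bounds approach: use the pessimistic bound of every variable
# for the lower bound on the multiplier, and the optimistic bound of every variable for
# the upper bound. All variable names and numbers here are hypothetical.
bounds = {
    # variable: (pessimistic, central, optimistic)
    "donations_per_pledge": (0.0, 50_000.0, 150_000.0),  # lifetime USD per pledge
    "counterfactual_share": (0.0, 0.3, 0.6),             # share attributable to GWWC
    "misc_adjustments":     (0.5, 1.0, 1.5),             # net miscellaneous adjustments
}
cost_per_pledge = 1_000.0  # hypothetical cost to GWWC of generating one pledge (USD)

def multiplier(i):
    """Dollars moved per dollar spent, using the i-th bound of every variable."""
    moved = 1.0
    for low_mid_high in bounds.values():
        moved *= low_mid_high[i]
    return moved / cost_per_pledge

low, central, high = (multiplier(i) for i in range(3))
print(f"lower ~{low:.0f}x, central ~{central:.0f}x, upper ~{high:.0f}x")
```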
In terms of miscellaneous adjustments, we made an attempt to be comprehensive; for example, we adjust for (a) expected prioritization of pledges over donations by GWWC in the future, (b) company pledgers, (c) post-retirement donations, (d) spillover effects on non-pledge donations, (e) indirect impact on the effective giving (EG) ecosystem (e.g. EG incubation, the EG Summit), (f) impact on the talent pipeline, (g) decline in the counterfactual due to the growth of EA (i.e. more people are likely to hear of effective giving regardless of GWWC), and (h) reduced political donations. The challenge is that a lot of these variables lack the necessary data for quantification, and of course, there may be additional important considerations we've not factored in.
That said, I'm not sure if we would get a meaningful negative effect from people being less able to do ambitious things because of fewer savings - partly for effect size reasons (10% isn't much), and also you would theoretically have people motivated by E2G to do very ambitious for-profit stuff when they otherwise would have done something less impactful but more subjectively fulfilling (e.g. traditional nonprofit roles). It does feel like a just-so story either way, so I'm not certain if the best model would include such an adjustment in the absence of good data.
https://docs.google.com/spreadsheets/d/1MF9bAdISMOMV_aOok9LMyKbxDEpOsvZ9VO8AfwsS6_o/
Probably majority AI, given the organizations being donated to and the distribution of funding. This contrasts with the non-GWWC EG organizations in Europe, where I believe there is a much greater focus on climate, mainly to meet donors where they are.
Hi Nicolaj,
Thanks for sharing! That's really interesting. Couple of thoughts:
(1) For our part, CEARCH uses n=1 when modelling the value of income doublings, because we've tended to prioritize health interventions, where the health benefits tend to swamp the economic benefits anyway (and we've tended to prioritize health interventions because of the heuristic that NCDs are a big and growing problem which policy can cheaply combat at scale, whereas poverty, by the nature of economic growth, is declining over time).
(2) The exception is when modelling the counterfactual value of government spending: a successful policy advocacy intervention redirects government funds, and this has to be factored in, albeit at a discount to EA spending and while taking into account country wealth (https://docs.google.com/spreadsheets/d/1io-4XboFR4BkrKXgfmZHQrlg8MA4Yo_WLZ7Hp6I9Av4/edit?gid=0#gid=0).
There, the modelling is more precise, and we use n=1.26 as a baseline estimate, per Layard, Mayraz and Nickell's review of a couple of SWB surveys (https://www.sciencedirect.com/science/article/abs/pii/S0047272708000248). Would be interested in hearing how your team arrived at n=1.87 - I presume this is a transformation of an initial n=1 based on your temporal discounts?
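For concreteness, the arithmetic behind these n values looks roughly like this (a minimal sketch; the incomes below are made-up illustrative figures, not numbers from our models):

```python
# Isoelastic (CRRA) utility of consumption, with the eta values discussed above
# (eta = 1 and eta = 1.26). All incomes are hypothetical illustrative figures.
import math

def utility(c, eta):
    """Isoelastic utility of consumption c > 0; reduces to log utility as eta -> 1."""
    if abs(eta - 1.0) < 1e-9:
        return math.log(c)
    return (c ** (1.0 - eta) - 1.0) / (1.0 - eta)

def value_of_doubling(income, eta):
    """Welfare gain from doubling income under isoelastic utility."""
    return utility(2 * income, eta) - utility(income, eta)

for eta in (1.0, 1.26):
    # With eta = 1 (log utility), a doubling is worth ln(2) regardless of baseline income;
    # with eta > 1, doublings are worth more to poorer recipients.
    for income in (500, 5_000, 50_000):  # hypothetical annual incomes (USD)
        print(f"eta={eta:.2f}, income={income:>6}: doubling worth {value_of_doubling(income, eta):.3f}")

def dollar_weight(income, reference_income, eta=1.26):
    """Relative value of a marginal dollar at `income` vs at `reference_income`:
    marginal utility is c**(-eta), so the ratio is (reference_income / income) ** eta."""
    return (reference_income / income) ** eta

# E.g. the country-wealth adjustment: under eta = 1.26, a marginal dollar at a $2,000/yr
# income is worth roughly 10**1.26 ~= 18x a marginal dollar at a $20,000/yr income.
print(f"{dollar_weight(2_000, 20_000):.1f}x")
```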
Cheers,
Joel
It's true that people with abhorrent views in one area might have interesting or valuable things to say in other areas - Richard Hanania, for example, has made insightful criticisms of the modern American right.
However, if you platform/include people with abhorrent views (e.g. "human biodiversity", the polite euphemism for the fundamentally racist view that some racial groups have lower IQs than others - a view held by a number of Manifest speakers), you run into the following problem: the bad chases out the good.
The net effect of inviting in people with abhorrent views is that it turns off most decent people, either because they morally object to associating with such abhorrent views, or because they just don't want the controversy. You end up with a community with an even smaller percentage of decent people and a higher proportion of bigots and cranks, which in turn turns off even more decent people, and so on and so forth. Scott Alexander himself says it best in his article on witches:
> The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.
At the end of the day, platforming anyone whatsoever will leave you only with people rejected by polite society, and being open to all ideas will leave you with only the crank ones.
Generally, they have a combination of the following characteristics: (a) a direct understanding of what their own grantmaking organization is doing and why, (b) deep knowledge of the object-level issue (e.g. what GHD/animal welfare/longtermist projects to fund), and (c) extensive knowledge of the overall meta landscape (e.g. what other important people/organizations there are, the background history of EA funding up to a decade in the past, etc.).
As for how the expected donations generated by a dollar evolve over time (ignoring discounts), the available evidence suggests that the stream is flat (so the graph is just a horizontal line terminating around 30 years later). There's a lot of uncertainty, not least about how long the giving lasts, given that we can only observe a little more than a decade of giving at this point.
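Since the stream is roughly flat, the duration assumption ends up being one of the main drivers of total (discounted) value; a quick sensitivity check, with an illustrative discount rate rather than the actual figures:

```python
# Toy sensitivity check: total discounted value of a flat $1/year donation stream,
# under different assumed giving horizons. The 4% discount rate is an assumption.
discount_rate = 0.04
for horizon in (10, 20, 30, 40):
    total = sum(1.0 / (1.0 + discount_rate) ** t for t in range(horizon))
    print(f"{horizon}-year horizon: ~${total:.1f} of present value per $1/year pledged")
```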