For posterity, the only data I've seen on this question suggests that this has not played out the way the OP and many others (myself included) might have expected. The Economist ran an article* which links to this paper**. In short, cities with protests did not record discernible COVID case growth, at least as of a few weeks later. Moreover, quoting the paper (italics in original):
"Second, where there are social distancing effects, they only appear to materialize after the onset of the protests. Specifically, after the outbreak of an urban protest, we find, on average, an increase in stay-at-home behaviors in the primary county encompassing the city. That overall social distancing behavior increases after the mass protests is notable, as this finding contrasts with the general secular decline in sheltering-at-home taking place across the sample period (see Appendix Figure 6). Our findings suggest that any direct decrease in social distancing among the subset of the population participating in the protests is more than offset by increasing social distancing behavior among others who may choose to shelter-at-home and circumvent public places while the protests are underway."
In other words, it seems that protestors being outside was more than offset by other people avoiding the protests and staying home.
Pablo already replied, but FWIW I had the same irritation (and similarly had all posts pointed out to me by someone else after complaining to them about it). I think in my case the original assumption was that 'latest posts' meant what it sounds like, and on discovering that it didn't, I (lazily) assumed there wasn't a way to get what I wanted.
I don't have a constructive suggestion for a better name though.
I agree with this. I would have assumed they would do (i), and other responses from people who actually read the paper make me think it might effectively be (iii). I don't think it's (ii).
>If a climate change intervention has a cost-effectiveness of $417 / X per tonne of CO2 averted, then it is X times as effective as cash-transfers.
Wait a second.
I'm very confused by this sentence. Suppose, for the sake of argument, that all the impacts of emitting a tonne of CO2 are on people about as rich as present-day Americans, i.e. emitting a tonne of CO2 now causes people of that level of wealth to lose $417 at some point in the future. There is then no income adjustment necessary (I assume everything is being converted to something like present-day USD for present-day Americans, but I'm not actually sure and following the links didn't shed any light), so the post-income-adjustment number is still $417. Also suppose for the sake of argument that we can prevent this for $100.
This seems clearly worse than cash transfers to me under usual assumptions about log income being a reasonable approximation to wellbeing (as described in your first appendix), since we are effectively getting a 4.17x multiplier rather than a 50-100x multiplier. Yet the equation in the quote claims it is 4.17x more effective than cash transfers*.
What am I missing?
*Mathematically, I think the equation works iff the cash transfers in question are to people of comparable wealth to whatever baseline is being used to come up with the $417 figure. So if the baseline is modern-day Americans, that equation calculates how much better it is to avert CO2 emissions than to transfer cash to modern-day Americans.
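To make the arithmetic above concrete, here is a toy calculation with made-up numbers (a 60,000/600 income ratio standing in for the usual ~100x gap between US incomes and cash-transfer recipients; the $100 cost per tonne is the hypothetical from my example, not a real figure). It assumes log-income utility, so the marginal value of $1 scales as 1/income:

```python
# Hypothetical numbers, assuming log-income utility (marginal value of $1 ~ 1/income).
rich_income = 60_000    # those bearing the climate damage, ~present-day US income
poor_income = 600       # hypothetical cash-transfer recipients, ~100x poorer

damage_per_tonne = 417  # $ lost by rich-income people per tonne of CO2
cost_per_tonne = 100    # hypothetical cost of averting one tonne

# Benefit per dollar spent, measured in "dollars delivered to rich-income people":
intervention_multiplier = damage_per_tonne / cost_per_tonne  # 4.17
cash_transfer_multiplier = rich_income / poor_income         # 100.0

# The quoted equation reports only the first number, so it implicitly compares
# against transfers to people as rich as those bearing the damage.
print(intervention_multiplier)                              # 4.17
print(cash_transfer_multiplier)                             # 100.0
print(intervention_multiplier / cash_transfer_multiplier)   # ~0.042
```

On these (illustrative) assumptions the intervention delivers ~4% of the value of cash transfers to the poor per dollar, rather than being 4.17x better as the quoted equation would suggest.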
Quick note on the 'bunching' hypothesis. While that particular post and suggestion is mostly an artefact of the US tax code and would lead to years that look like 20%/0%/20%/0%/etc., there's a similar-looking thing that can happen for non-US GWWC members, namely that their tax year often won't align with the calendar year (e.g. UK is 6th April - 5th April, Australia is 1st July - 30th June I believe).
In these cases I would expect compliant pledge takers to focus on hitting 10% in their local tax year. When the EA survey asks about calendar years, the effect will be that the average for that group is around 10%, but the percentage given in any single calendar year can range anywhere from 0% to 20% (if ~10% is being given), more often looking like 13% one calendar year, 8% the next, 11% the year after that, etc. In other words, they will appear to be meeting the pledge around 50% of the time in your data, yet the pledge is being kept by all such members continuously through that period. Eyeballing your 2017 graph of the actual distributions of percentages given, there are a lot of people in the 8-10% range, who are the main candidates for this.
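The calendar-vs-tax-year effect above can be sketched with a toy simulation (all parameters made up: a single pledge-keeper on the UK tax year who gives exactly 10% of a $50k income each tax year, split into a few lump gifts at random dates):

```python
import random

random.seed(0)

income = 50_000
tax_years = range(2013, 2018)

# Hypothetical pledge-keeper: donates exactly 10% of income per UK tax year
# (6 April - 5 April), split into 1-4 lump gifts at random dates.
donations = []  # (calendar_year, amount)
for tax_year_start in tax_years:
    n_gifts = random.randint(1, 4)
    for _ in range(n_gifts):
        # Day 0 = 6 April; days 0-269 fall in tax_year_start's calendar year,
        # days 270-364 (6 Apr -> 31 Dec is 270 days) fall in the next one.
        day = random.randrange(365)
        calendar_year = tax_year_start if day < 270 else tax_year_start + 1
        donations.append((calendar_year, 0.10 * income / n_gifts))

# Percentage of income donated per *calendar* year, as a survey would record it.
by_calendar = {}
for year, amount in donations:
    by_calendar[year] = by_calendar.get(year, 0) + amount

for year in sorted(by_calendar):
    print(year, f"{100 * by_calendar[year] / income:.0f}%")
```

Each run gives calendar-year percentages scattered between 0% and 20% that average out to 10% over the whole period, even though the simulated member hits 10% in every single tax year.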
Since both most US members and most non-US members have good reasons to not hit 10% in every calendar year, the number I find most compelling is the one in the bunching section that averages 2015 and 2016 donations (and finds 69% compliance when doing so). But that number suffers from not knowing if those people were actually GWWC members in 2015. It just knows they were members when they took the survey in 2017. GWWC had large growth around that time, so that's a thorny issue. Then the 2018 survey solves the 'when did they join' problem but can't handle any level of donations not exactly aligning with the 2017 calendar year.
Thinking all this over, my best guess would be that 73% of the GWWC members in this EA survey sample are compliant with the pledge, with extremely wide error bars (90% confidence interval: 45% - 88%). I like Jeff's suggestion below as a way to start reducing those error bars.
Fair enough. I remain in almost-total agreement, so I guess I'll just have to try and keep an eye out for what you describe. But based on what I've seen within EA, which is evidently very different to what you've seen, I'm more worried about little-to-zero quantification than excessive quantification.
I'm feeling confused.
I basically agree with this entire post. Over many years of conversations with GiveWell staff or former staff, I can't readily recall speaking to anyone affiliated with GiveWell who I'd expect to substantively disagree with the suggestions in this post. But you obviously feel that some (reasonably large?) group of people disagrees with some (reasonably large?) part of your post. I understand a reluctance to give names, but focusing on GiveWell specifically, since much of their thinking on these matters is public record here, can you identify what specifically in that post or the linked extra reading you disagree with? Or are you talking about EAs-not-at-GiveWell? Or do you think GiveWell's blog posts are reasonable but their internal decision-making process nonetheless commits the errors they warn against? Or some possibility I'm not considering?
I particularly note that your first suggestion to 'entertain multiple models' sounds extremely similar to 'cluster thinking' as described and advocated-for here, and the other suggestions also don't sound like things I would expect GiveWell to disagree with. This leaves me at a bit of a loss as to what you would like to see change, and how you would like to see it change.
>Also, not to mention all the career paths that aren't earning to give or "work in an EA org"
While I share your concern about the way earning to give is portrayed, I think this issue might be even more pressing.
I agree with this summary. Thanks Peter, and sorry for the wordiness Milan; that comment ended up being more of a stream of consciousness than I'd intended.