GiveIndia says donations from India or the US are tax-deductible.
Milaap says donations come with tax benefits, but I couldn't find a more specific statement, so I'm guessing that's India-only?
Does anyone know a way to donate with a tax deduction from other jurisdictions? If the 0.75x - 2x estimate is accurate, it seems like for some donors that could make the difference.
(Siobhan's comment elsewhere here suggests that Canadian donors might want to talk to RCForward about this).
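To illustrate why deductibility could tip the balance, here's a rough sketch of the arithmetic (the 30% marginal rate is an invented example; real rates and rules vary by jurisdiction):

```python
# Rough effect of tax deductibility on the net cost of a donation.
# The 30% marginal rate is a made-up example; real rules vary.

def net_cost(donation, marginal_rate, deductible):
    """Out-of-pocket cost of a donation, assuming the deduction simply
    reduces taxable income at the donor's marginal rate."""
    return donation - (donation * marginal_rate if deductible else 0)

donation = 1000
print(net_cost(donation, 0.30, deductible=True))   # -> 700.0
print(net_cost(donation, 0.30, deductible=False))  # -> 1000
# Equivalently, a deductible donor can give ~1.43x as much for the
# same out-of-pocket cost:
print(round(donation / net_cost(donation, 0.30, True), 2))  # -> 1.43
```

So at a 30% marginal rate, deductibility alone changes the cost-effectiveness per out-of-pocket dollar by a factor of about 1.43, comparable in size to the 0.75x - 2x range.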
You've previously spoken about the need to reach "existential security" -- in order to believe the future is long and large, we need to believe that existential risk per year will eventually drop very close to zero. What are the best reasons for believing this can happen, and how convincing do they seem to you? Do you think that working on existential risk reduction or longtermist ideas would still be worthwhile for someone who believed existential security was very unlikely?
It seems plausible that reasonable people might disagree on whether student groups would, on the whole, benefit from conforming more or less closely to the EA consensus. One person's "value drift" might be another person's "conceptual innovation / development".
On balance I find it more likely that an EA group would be co-opted in the way you describe than that an EA group would hold back from doing something effective out of worry that it was too "off-brand", but the latter seems worth mentioning as a possibility.
I think this post doesn't explicitly recognize what is (to me) an important upside of doing this, one that applies to anything other people aren't doing: potential information value.
This post exists because people tried something different and were thoughtful about the results, and now many other people in similar situations can potentially benefit from knowing how it went. Conversely, if you try it and it goes badly, you can write a post about the difficulties you encountered so that other people can better anticipate and avoid them.
By contrast, naming your group Effective Altruism Erasmus wouldn't have led to any new insights about group naming.
Bluntly I think a prior of 98% is extremely unreasonable. I think that someone who had thoroughly studied the theory, all credible counterarguments against it, had long discussions about it with experts who disagreed, etc. could reasonably come to a belief that strong. An amateur who has undertaken a simplistic study of the basic elements of the situation can't IMO reasonably conclude that all the rest of that thought and debate would have a <2% chance of changing their mind.
Even in a field as empirically grounded and verifiable as physics, for much of its history the dominant theoretical framework has had significant omissions or blind spots that would occasionally lead to faulty results when applied to previously unknown areas. Economic theory is much less reliable. I think you're correct to highlight that economic data can be unreliable too, and it's certainly true that many people overestimate the size of Bayesian updates based on shaky data and should perhaps stick to their priors more. But let's not kid ourselves about how good our cutting-edge theoretical understanding is in fields like economics and medicine -- and let's not kid ourselves that nonspecialist amateurs can reach even that level of accuracy.
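To make concrete how strong a 98% prior is in Bayesian terms, here's a quick sketch (the 98% figure is from above; the evidence strengths are invented for illustration):

```python
def update(prior, bayes_factor):
    """Posterior probability after seeing evidence with the given Bayes
    factor (likelihood ratio favoring the hypothesis; <1 disconfirms)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# A 98% prior is 49:1 odds, so counterevidence must be ~49x more likely
# under the alternative just to pull you back to even odds:
print(round(update(0.98, 1 / 49), 3))  # -> 0.5
# Fairly strong disconfirming evidence (a 1:5 Bayes factor) barely
# dents it:
print(round(update(0.98, 1 / 5), 3))   # -> 0.907
```

In other words, holding 98% amounts to claiming that all the expert debate you haven't engaged with would collectively supply less than a 49:1 likelihood ratio against you.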
I agree with Halstead that this post seems to ignore the upsides of creating more humans. If you, like me, subscribe to a totalist population ethics, then each additional person who enjoys life, lives richly, loves, expresses themselves creatively, etc. -- all of these things make for a better world. (That said, I think that improving the lives of existing people is currently a better way to achieve that than creating more -- but I wouldn't say that creating more is wrong).
Moreover, I think this post misses the instrumental value of people, too. To understand the all-inclusive impact of an additional person on the environment, you surely have to also consider the chance that they become a climate researcher or activist, or a politician, or a worker in a related technical field; or even more indirectly, that they contribute to the social and economic environment that supports people who do those things. For sure, that social and economic environment supports climate damage as well, but deciding how these factors weigh up means (it seems to me) deciding whether human social and technological progress is good or bad for climate change, and that seems like a really tricky question, never mind all the other things it's good or bad for.
> The only place where births per woman are not close to 2 is sub-Saharan Africa. Thus, the only place where family planning could reduce emissions is sub-Saharan Africa, which is currently a tiny fraction of emissions.
This is not literally true: family planning can reduce emissions in the developed world if the desired births per woman is even lower than the actual births per woman. But I don't dispute the substance of the argument: it seems relatively difficult to claim that there's a big unmet need for contraceptives elsewhere, and that should determine what estimates we use for emissions.
I buy two of your examples: in the case of masks, it seems clear now that the experts were wrong before, and in "First doses first", you present some new evidence that the priors were right.
On nutrition and lockdowns, you haven't convinced me that the point of view you're defending isn't the one that deference would arrive at anyway: it seems to me like the expert consensus is that lockdowns work and most nutritional fads are ignorable.
On minimum wage and alcohol during pregnancy, you've presented a conflict between evidence and priors, but I don't feel like you resolved it: someone who believed the evidence proved the priors wrong won't find anything in your examples to change their mind. For drinking during pregnancy, I'm not even really convinced there is a conflict: I suspect the heart of the matter is what people mean by "safe", i.e. which risks or harms are small enough to ignore.
I think in general there are for sure some cases where priors should be given more weight than they're currently afforded. But it also seems like there are often cases where intuitions are bad, where "it's more complicated than that" tends to dominate, where there are always more considerations or open uncertainties than one can adequately navigate on priors alone. I don't think this post helps me understand how to distinguish between those cases.
I don't know if this meets all the details, but it seems like it might get there: Singapore restaurant will be the first ever to serve lab-grown chicken (for $23)
Hmm, I was going to mention mission hedging as the flipside of this, but then noticed the first reference I found was written by you :P
For other interested readers, mission hedging is where you do the opposite of this and invest in the thing you're trying to prevent: invest in tobacco companies as an anti-smoking campaigner, in the coal industry as a climate change campaigner, etc. The idea is that if those industries start doing really well for whatever reason, your investment rises too, giving you extra money to fund your countermeasures.
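The payoff logic can be sketched with a toy example (all numbers invented for illustration):

```python
# Toy comparison of divesting vs. mission hedging for a campaigner
# whose funding is most valuable when the industry they oppose booms.
# Portfolio sizes, returns, and weights are all made up.

def portfolio_value(initial, industry_return, weight_in_industry,
                    other_return=0.05):
    """Value after one period, with a fraction held in the target
    industry and the rest in an uncorrelated asset."""
    return initial * (weight_in_industry * (1 + industry_return)
                      + (1 - weight_in_industry) * (1 + other_return))

for scenario, industry_return in [("industry slumps", -0.30),
                                  ("industry booms", 0.40)]:
    divested = portfolio_value(100_000, industry_return, 0.0)
    hedged = portfolio_value(100_000, industry_return, 0.5)
    print(f"{scenario}: divested={divested:,.0f}, hedged={hedged:,.0f}")
```

In the slump the hedger ends up poorer, but the problem has partly solved itself; in the boom the hedger has extra money exactly when counter-campaigning funds are most needed. That asymmetry is the whole argument.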
I'm sure if I thought about it for a bit I could figure out when these two mutually contradictory strategies look better or worse than each other. But mostly I don't take either of them very seriously most of the time anyway :)