This is a special post for quick takes by Wei Dai. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
Missed opportunity for EA: I posted my coronavirus trade in part to build credibility/reputation, but someone should have done it on a larger scale, for example by taking out a full-page ad in the NY Times in the very early stages of the outbreak to warn the public. Then the next time EAs need to raise the alarm about something even bigger, they might be taken a lot more seriously. It's too late now for this outbreak, but keep this in mind for the future?
+1
Such a good point. "Courage of our convictions" and all that...
A post I wrote on LW that is also relevant to EA: What determines the balance between intelligence signaling and virtue signaling?
Someone who is vNM-rational, with a utility function that is partly altruistic and partly selfish, wouldn't give a fixed percentage of their income to charity (or commit to a lower bound on giving, like 10%). Such a person would dynamically adjust their relative spending on selfish interests and altruistic causes depending on empirical contingencies: spending more on altruistic causes when new evidence shows them to be more cost-effective than previously expected, and conversely spending less on them if they turn out to be less cost-effective than expected. (See Is the potential astronomical waste in our universe too small to care about? for a related idea.)
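As a rough sketch of why this follows (an illustrative toy model of my own, not anything specified in the post; it assumes log utility over selfish consumption and a linear altruistic term with weight α):

```latex
% Toy model (illustrative assumptions): income y is split into selfish
% consumption c and donations d = y - c; e is the estimated cost-effectiveness
% of the altruistic cause and \alpha the weight placed on altruism.
\[
  U(c) = \log(c) + \alpha\, e\, (y - c)
\]
% First-order condition: U'(c^*) = 1/c^* - \alpha e = 0, so
\[
  c^* = \frac{1}{\alpha e}, \qquad d^* = y - \frac{1}{\alpha e}
\]
% The optimal donation d^* rises when the effectiveness estimate e rises and
% falls when it falls, so the optimal giving fraction d^*/y is generally not
% a fixed percentage such as 10%.
```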
I think this means we have to find other ways of explaining/modeling charity giving, including the kind encouraged in the EA community.
As a specific case, counterfactual donation matches should cause you to donate more, too.
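Continuing the illustrative toy model sketched above (same assumptions), a 1:1 counterfactual match effectively doubles the impact of each donated dollar, which raises the optimal donation:

```latex
% With a 1:1 counterfactual match, each donated dollar buys 2e of impact,
% so e is replaced by 2e in the toy model above:
\[
  d^*_{\mathrm{matched}} = y - \frac{1}{2\alpha e} \;>\; y - \frac{1}{\alpha e} = d^*
\]
```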
It could be the case that people's utility functions are pretty sharp near X% of income, so that new information makes little difference. They're probably directly valuing giving X% of income, perhaps as a personal goal. Some might think that they are spending as much as they want on themselves, and the rest should go to charity.
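One way to make "pretty sharp near X% of income" concrete (again an illustrative assumption, not something stated in the comment): if the marginal value placed on giving drops steeply once donations pass the target fraction x of income y, the optimum stays pinned at that point across a wide range of effectiveness estimates:

```latex
% Kinked valuation of donating d out of income y, with target fraction x and
% a much smaller marginal weight \epsilon \ll \alpha beyond the target:
\[
  V(d) =
  \begin{cases}
    \alpha\, e\, d, & d \le x y \\
    \alpha\, e\, x y + \epsilon\, e\, (d - x y), & d > x y
  \end{cases}
\]
% Whenever the marginal utility of selfish consumption lies between
% \epsilon e and \alpha e, the optimum sits at the kink d^* = x y, so
% moderate updates to e leave the chosen giving fraction unchanged.
```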
https://slate.com/human-interest/2011/01/go-ahead-give-all-your-money-to-charity.html
Or maybe their utility functions just change with new information?