oh cool! (Also, I'm glad you proactively acknowledge e.g. the saturated fat point.)
Also, are there risks to over-reduction in salt intake?
Hmmmm. I'm suspicious because it doesn't make any sense for anyone else to decide what's best for me. (Sure, educate me instead, whatever.) (I'm particularly suspicious of this because of the discourse I've seen around proposed 'meat taxes', typically peddled by people who think the climate and nutritional (and ethical) effects are far worse than I think they are. So I'm worried about the same thing here.)
Couple things (I've only skimmed the post):
(I have more questions like this but I'll leave just these three for now.)
Are you thinking that the community should consider collectively insuring against the risk of a megadonor going underwater in the future?
Yeah, something like that. Just trying to think of a way to make a market out of due diligence.
Is there an insurance product that covers clawbacks?
Hm, how could this interact with hypothetical clawbacks?
The title of this made me think "ways to buy time [with money in your personal life]" rather than "ways to buy time [in AI safety]"
Animals do this intuitively:
Pigeons were presented with two buttons in a Skinner box, each of which led to varying rates of food reward. The pigeons tended to peck the button that yielded the greater food reward more often than the other button, and the ratio of their pecking rates on the two buttons matched the ratio of the reward rates on the two buttons.
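(The quoted result is Herrnstein's matching law. As a minimal sketch, not taken from any particular study, here's a simulation with made-up parameters: two buttons "arm" rewards at different rates, as in a concurrent variable-interval schedule, and a simple "melioration" learner drifts toward whichever button currently pays better per peck. The behavior ratio ends up matching the obtained-reward ratio.)

```python
import random

def simulate(arm_p=(0.15, 0.05), steps=200_000, seed=1):
    """Concurrent variable-interval schedule: each button independently
    'arms' a reward with probability arm_p[i] per step and holds it
    until the next peck on that button collects it. (Parameters are
    hypothetical, chosen only for illustration.)"""
    rng = random.Random(seed)
    armed = [False, False]
    p0 = 0.5                      # probability of pecking button 0
    r = [0.5, 0.5]                # running estimate of reward-per-peck
    pecks = [0, 0]
    rewards = [0, 0]
    for _ in range(steps):
        for i in (0, 1):          # rewards arm independently of behavior
            if not armed[i] and rng.random() < arm_p[i]:
                armed[i] = True
        i = 0 if rng.random() < p0 else 1
        pecks[i] += 1
        got = 1 if armed[i] else 0
        armed[i] = False if got else armed[i]   # collecting disarms
        rewards[i] += got
        r[i] += 0.02 * (got - r[i])             # update local-rate estimate
        # Melioration: drift toward the button with the higher local rate.
        p0 = min(0.95, max(0.05, p0 + 0.001 * (r[0] - r[1])))
    return pecks, rewards

pecks, rewards = simulate()
# Matching law: ratio of pecks tracks ratio of obtained rewards.
print("behavior ratio:", pecks[0] / pecks[1])
print("reward ratio:  ", rewards[0] / rewards[1])
```

Note that matching here means the *behavior* ratio equals the *obtained-reward* ratio, not the programmed arming-rate ratio; the learner only ever equalizes its local reward-per-peck estimates, and matching falls out of that.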
I'm glad you wrote this! I was worried about your previous post, and was thinking about writing something on this dimension myself.
It's funny: much of this could've been avoided by considering Chesterton's Fence and the EMH. ("If AGENCY were so good, why wouldn't everyone do it?")
Anyway, I'm now worried about e.g. high school summer camp programs that prioritize the development of unbalanced agency.
Thank you for writing about what pushed me away from the EA community: the pressure to make all else instrumental. Best of luck.