Bayes Shammai

> and unlike typical financial investments, reputational investments can go negative.

This is true, but the far more significant factor of this sort is that impact on the world itself can go negative. We had more at stake because we think defrauding customers does enormous harm to the world, and the whole point of investing in SBF was to create positive impact. The market for FTX/FTT doesn't price in negative impact on humankind.

There's some discussion of whether implementing impact certificate markets (which may be more of an academic curiosity at this point) would have a similar problem: translating a utility function that can go negative (impact on the world) into one with a lower bound of zero (financial payout) incentivizes negative-expected-value projects. As far as I can tell, cash prizes for positive-impact projects have the same fundamental problem, though I'd love to be corrected here if I'm missing something. One way around this would be to require a form of insurance (prior to entering impact markets, prize competitions, earning-to-give careers, AI-capabilities-research-in-the-interest-of-alignment, etc.), though I think there are a lot of practical and incentive-flavored barriers to such insurance emerging any time soon.
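To make the incentive problem concrete, here's a toy expected-value calculation (the numbers and variable names are my own illustration, not from any actual impact market): a project with negative expected impact can still have positive expected payout once rewards are floored at zero, because the downside falls on the world rather than on the project's founder.

```python
# Toy model: a risky project succeeds with probability p_good.
p_good = 0.5
impact_if_good = 10.0   # units of "impact on the world"
impact_if_bad = -30.0   # e.g. fraud, or capabilities acceleration

# The world's expectation counts the downside in full.
expected_impact = p_good * impact_if_good + (1 - p_good) * impact_if_bad

# A prize or impact certificate can't pay less than zero: if the project
# goes badly, the founder simply receives nothing.
expected_payout = (p_good * max(impact_if_good, 0)
                   + (1 - p_good) * max(impact_if_bad, 0))

print(expected_impact)  # -10.0: negative for the world in expectation
print(expected_payout)  # 5.0: positive for the founder in expectation
```

The gap between the two numbers is exactly the truncated downside, which is why a zero-floored reward scheme can subsidize projects the world would prefer not to exist.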

I'm curious whether there are other areas in EA where we systematically overlook the need for oversight against negative outcomes we care about, areas where markets, regulatory and legal systems, and social norms will be predictably insufficient watchdogs.