I previously estimated that 1-2% of YCombinator-backed companies with valuations over $100M had serious allegations of fraud.
While not all Giving Pledge signatories are entrepreneurs, a large fraction are, which makes this a reasonable reference class. (An even better reference class would be “non-signatory billionaires”, of course.)
My guess is that YCombinator-backed founders tend to be younger, with shorter careers than pledgees, and in part because of this will likely have had fewer run-ins with the law. I think a better reference class would be so...
You asked for examples of advocacy done well with respect to truth-seekingness/providing well-considered takes, and I provided examples.
I think much of the advocacy within EA is reasonably thoughtful and truth-seeking. Reasoning and uncertainties are often transparently communicated. Here are two examples based on my personal impressions:
I think EA should be more critical of its advocacy contingents, and that those involved in such efforts should set a higher bar for offering more thoughtful and considered takes.
Short slogans and emojis-in-profiles, such as those often used in ‘Pause AI’ advocacy, are IMO inadequate for the level of nuance required by complex topics like these. Falling short can burn the credibility and status of those involved in EA in the eyes of onlookers.
As someone who runs one of EA's advocacy contingents, I think the overall idea of more criticism is probably a good one (though I suspect I'll find it personally unpleasant when applied to things I work on), but I'd suggest a few nuances I think exist here:
One issue I have with this is that when someone calls this the 'default', I interpret them as implicitly making some prediction about the likelihood of such countermeasures not being taken. The issue then is that this is a very vague way to communicate one's beliefs. How likely does some outcome need to be for it to become the default? 90%? 70%? 50%? Something else?
The second concern is that it's improbable for minimal or no safety measures to be implemented, making it odd to set this as a key baseline scenario. This belief is supported by substantial e...
The term 'default' in discussions about AI risk (like 'doom is the default') strikes me as an unhelpful rhetorical move. It suggests an unlikely scenario where little-to-no measures are taken to address AI safety. Given the active research and the fact that alignment is likely to be crucial to unlocking the economic value from AI, this seems like a very unnatural baseline to frame discussions around.
I've updated the numbers based on today's predictions. Key updates:
I agree the victim-perpetrator framing is an important lens through which to view this saga. But I also think an investor-investee framing is another important one, a framing with different prescriptions for what lessons to take away and what to do next. The EA community staked easily a billion dollars' worth of its assets (in focus, time, reputation, etc.), and ended up losing it all. I think it's crucial to reflect on whether the extent of our due diligence and risk management was commensurate with the size of EA's bet.
One specific question I would want to raise is whether EA leaders involved with FTX were aware of or raised concerns about non-disclosed conflicts of interest between Alameda Research and FTX.
For example, I strongly suspect that EAs tied to FTX knew that SBF and Caroline (CEO of Alameda Research) were romantically involved (I strongly suspect this because I have personally heard Caroline talk about her romantic involvement with SBF in private conversations with several FTX fellows). Given the pre-existing concerns about the conflicts of interest between Al...
I believe that, even in the face of this particular disaster, who EAs are fucking is none of EA's business. There are very limited exceptions to this rule like "maybe don't fuck your direct report" or "if you're recommending somebody for a grant, whom you have fucked, you ought to disclose this fact to the grantor" or "Notice when somebody in a position of power seems to be leaving behind a long trail of unhappy people they've fucked", plus of course everything that shades over into harassment, assault, and exploitation - none of which are being sug...
Is the romantic relationship that big a deal? They were known to be friends and colleagues, and both were known to be involved with EA and the FTX Future Fund. I thought it was basically common knowledge that Alameda was deeply connected with FTX, as you show with those links - it just seems kind of obvious, with FTX being composed of former Alameda employees and the two sharing an office space or something like that.
This is insightful. Some quick responses:
I won the Stevenson prize (a prize given out at my faculty) for my performance in the MPhil in Economics. I gather Amartya Sen won the same prize some 64 years ago, which I think is pretty cool.
>I'm mostly just noting that "altruistic people don't commit crimes" doesn't seem like a likely hypothesis.
I think your data is evidence in favor of a far more interesting conclusion.