Dustin Moskovitz

Comments

I don't feel I have much to say about that, tbh, though I did talk about auditing financials here: https://forum.effectivealtruism.org/posts/eRyC6FtN7QEkDEwMD/should-we-audit-dustin-moskovitz?commentId=qEzHRDMqfR5fJngoo

If we have another major donor with a more mysterious financial background than mine, we should totally pressure them to undergo an audit!

That said, I'm not convinced the next scandal will look anything like that, and the real problem to me was the lack of smoking guns. It's very hard to remove someone from power without one, as we've recently observed with sama and continuously observe with Elon.

So the upshot is that I predict we will again fail to identify and correct potential scandals, and I'm not sure we should beat ourselves up about it as much as we do. My post was more meant to soften the ground for that likely outcome, so that we don't see it as a fatally damning tragedy when it happens, for EA or any other movement.

I understood what you meant before, but still see it as a bad analogy.

For context, I saw many rounds of funding as a board member at Vicarious, which was a pure lab for most of its life (it later attempted robotics, but that small revenue actually devalued it in the eyes of investors). There, what it took was someone getting excited about the story, plus smaller performance milestones along the way.

Again, why does it have to be X=$1B and probability 1?

It seems like if the $30M mattered, then the counterfactual is that they needed to be able to raise $30M at the end of their runway, at any valuation, rather than $1B, in order to bridge to the more impressive model. There should be a sizeable gap between those scenarios in what constitutes a sufficiently impressive model. In theory they also had "up to $1B" in grants from their original funders, including Elon, that should have been possible to draw on if needed.

How did you come to the conclusion that funding ML research is "pretty messy and unpredictable"? I've seen many ML companies funded over the years as straightforwardly as other tech startups, especially when the founders had strong professional backgrounds, as was clearly the case with OAI. Seems like an unnecessary assumption on top of other unnecessary assumptions.

Why do you believe that’s binary? (Vs. just less funding/a smaller valuation in the first round.)

Yes, that’s my position. My hope is that we actually slowed acceleration by participating, but I’m quite skeptical of the view that we added to it.

Unless it's a hostile situation (as might happen with public companies/activist investors), I don't think it's actually that costly. At the seed stage, it's just kind of normal to give board seats to major “investors”, and you want to have a good relationship with both your major investors and your board.

The attitude Sam had at the time was less "please make this grant so that we don't have to take a bad deal somewhere else, and we're willing to 'sell' you a board seat to close the deal" and more "hey would you like to join in on this? we'd love to have you. no worries if not."

I'm not sure what can be shared publicly for legal reasons, but I would note that in board dynamics generally it's pretty tough to clearly establish counterfactual influence. At a high level, Holden was holding space for safety and governance concerns and encouraging the rest of the leadership to spend time and energy thinking about them.

I believe the implicit premise of the question is something like "do those benefits outweigh the potential harms of the grant?" Personally, I see this as resting on a misunderstanding, i.e. the idea that OP helped OpenAI come into existence and it might not have happened otherwise. I've gone back and looked at some of the comms from around that time (2016), as well as debriefed with Holden, and I think the most likely counterfactual is that the time to the next fundraising (2019) and the creation of the for-profit entity would have been shortened (due to less financial runway). Another possibility is that the other funders from the first round would have made larger commitments. I give effectively 0% of the probability mass to OpenAI not starting up.
