Dustin Moskovitz

I've long taken for granted that I am not going to live in integrity with your values and the actions you think are best for the world. I'm only trying to get back into integrity with my own.

OP is not an abstraction, of course, and I hope you continue talking to the individuals you know and have known there.

The question is inseparable from the lack of other donors. Of course it is true right now, because OP has no one else to refer the grants to.

My hope is that having other donors for OP would genuinely create governance independence, since my apparent power comes from the lack of alternate funding sources*, not from structural control. Consequently, you and others lay blame on me even for the things we don't do. I would even be happy to leave the board, and happy to expand it to diminish my (non-controlling) vote further. I did not want to create a GVF hegemony any more than you wanted one to exist. (If the future is a bunch of different orgs, or some particular "pure" org, that's good by me too; I don't care about OP aggregating the donors if others don't see that as useful.)

But I do want agency over our grants. As much as the whole debate has been framed (by everyone else) as being about reputation risk, I care about where I believe my responsibility lies, and where the money comes from has mattered. I don't want to wake up anymore to somebody I personally loathe getting platformed, only to discover I paid for the platform. That fact matters to me.

* Notably just for the "weird" stuff. We do successfully partner with other donors now! I don't get in their way at all, as far as I know.

That could well be, but my experience was that having another foundation, like FTX, didn't insulate me from reputation risks either. I'm just another "adherent of SBF's worldview" to outsiders.

I'd like to see a future OP that is not synonymous with GVF, where we're just one of the important donors instead of THE important donor, and having a division of focus areas currently seems viable to me. If other donors don't agree, or if staff behave as if it isn't true, then of course it won't happen.

>> In my reading of the thread, you first said "yeah, basically I think a lot of these funding changes are based on reputational risk to me and to the broader EA movement."

I agree people are paraphrasing me like this. Let's go back to the quote I affirmed: "Separately, my guess is one of the key dimensions on which Dustin/Cari have strong opinions here are things that affect Dustin and Cari's public reputation in an adverse way, or are generally "weird" in a way that might impose more costs on Dustin and Cari."

I read the part after "or" as extending the frame beyond reputation risks, and I was pleased to see that and chose to engage with it. The example in my comment is not about reputation. Later comments from Oliver seem to imply he really did mean just PR risk, so I was wrong to affirm this.

If you look at my comments here and in my post, I've elaborated on other issues quite a few times, and people keep ignoring those comments and projecting "PR risk" onto everything. I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all, and I'm going to stop now. [Sorry I got frustrated; everyone is trying their best to do the most good here.] I would appreciate it if people did not paraphrase me from these comments and instead used actual quotes.

>> Let Open Philanthropy decide whether they think what we are doing helps with AI risk, or evaluate it yourself if you have the time.

Indeed, "if I have the time" is precisely the problem. I can't know everyone in this community, and I've disagreed with the specific outcomes on too many occasions to trust by default. We started by trying to take a scalpel to the problem, and I could not tie initial impressions at grant time to those outcomes well enough to feel that was a good solution. Empirically, I don't sufficiently trust OP's judgement either.

There is no objective "view from EA" that I'm standing against, much as people portray it that way here; just a complex jumble of opinions, path dependence, and personalities with all kinds of flaws.

>> Also, to be clear, my current (admittedly very limited sense) of your implementation, is that it is more of a blacklist than a simple redirecting of resources towards fewer priority areas.

So with that in mind, this is the statement that felt like an accusation of lying (not an accusation of a history of lying), and I think we have arrived at the reconciliation that doesn't involve lying: broad strokes were pragmatically needed in order to sufficiently reduce the priority areas that were causing issues. I can't know all our grantees, and my estimation is that I can't divorce myself from responsibility for them, reputationally or otherwise.

After much introspection, I came to the conclusion that I would rather leave potential value on the table than persist in that situation. I don't want to be responsible for that community anymore, even if it seems to have positive EV.

And to get a little meta, it seems worth pointing out that you could be taking this whole episode as an empirical update about how attractive these ideas and actions are to constituents you might care about, and instead your conclusion is "no, it is the constituents who are wrong!"

I'm not detailing specific decisions for the same reason I want to invest in fewer focus areas: additional information is used as additional attack surface area. The attitude in EA communities is "give an inch, fight a mile". So I'll choose to be less legible instead.

It is the case that we are reducing surface area. You have a low opinion of our integrity, but I don't think we have a history of lying, as you seem to be implying here. I'm trying to pick my battles more, since I feel we picked too many. In pulling back, we focused on the places in the intersection of low conviction and highest pain potential (again, beyond "reputational risks", which narrows the mind too much on what is going on here).

>> In general, I think people value intellectual integrity a lot, and value standing up for one's values. Building communities that can navigate extremely complicated domains requires people to be able to follow arguments to their conclusions wherever that may lead, which over the course of one's intellectual career practically always means many places that are socially shunned or taboo or reputationally costly in the way that seems to me to be at the core of these changes.

I agree with the spirit of the way this is written, but not with the way it is practiced. I wrote more about this here. If the rationality community wants carte blanche in how it spends money, it should align with funders who sincerely believe more in the specific implementation of this ideology (especially vis-à-vis decoupling). Over time, it seemed to become a kind of purity test to me, inviting the most fringe of opinion holders into the fold so long as they had at least one true+contrarian view; I am not pure enough to follow where you want to go, and prefer to focus on the true+contrarian views that I believe are most important.

My sense is that such alignment is achievable and will result in a more coherent and robust rationality community, which does not need to be inextricably linked to all the other work that OP and EA does.

I find the idea that Jaan/Vitalik/Jed would not be engaged in these initiatives if not for OP pretty counterintuitive (and perhaps more importantly, the idea that a different world could have created a much larger coalition), but I don't really have a good way of resolving that disconnect further. Evidently, our intuitions often lead to different conclusions.
