All of Milan Griffes's Comments + Replies

Given this, I'm curious how you yourself are making decisions about how to allocate your energy/effort.

4
Jim Buhler
I aim to contribute to efforts to:
1. Find alternative action-guidance for cases where standard consequentialism is silent on what we ought to do. In particular, I'm interested in finding something different from (or more specific than) Clifton's "Option 3", DiGiovanni's non-consequentialist altruism proposal, or Vinding's two proposals.
2. Reduce short-term animal suffering, or find out how to do so in robust ways, since I suspect most plausible solutions to 1 say it's quite a good thing to do, although maybe not the best. (We might also need to do 1 in order to do 2 better; sometimes cluelessness bites even if we ignore long-term consequences, e.g., the impact of fishing.)

It follows that we should question whether the five attributes Griffes identifies are even good attributes to begin with.

What alternative(s) do you prefer? 

1
Jim Buhler
Alternatives to your five attributes in your analogy? I don't think longtermists have identified any that is immune to the motivations for cluelessness. The best contender in people's minds might be "making sure the ship doesn't get destroyed," but I haven't encountered a convincing case for why we shouldn't be clueless about whether that's good (or bad) in the long run.[1] Then it's tempting to say "let's try to do research and be less clueless" (predictive power), but even predictive power might turn out bad for all we know (in a complex-cluelessness way, not mere uncertainty).

1. ^ Some may even argue that we are clueless about _how_ to save the ship in the long run. See, e.g., DiGiovanni (2025), Schwitzgebel (2024), and Friederich (2025, §5).

Oh, the lottery isn't fictional – we're executing on the plan as stated! 

I was referring to the big growth in MAUs that started before July 2022 and peaked before January 2023.

4
AnonymousEAForumAccount
I think this is likely due to the huge amount of publicity that surrounded the launch of What We Owe the Future feeding into a peak associated with the height of the FTX drama (MAU peaked in November 2022), which has then been followed by over two years of ~steady decline (presumably due to fallout from FTX). Note that the "steady and sizeable decline since FTX bankruptcy" pattern is also evident in EA Funds metrics. 

Why did MAUs spike in Q2 2022? Something around FTX?

2
Sarah Cheng 🔸
I think the dip in ~May 2022 is most likely simply a data issue, if that's what you're referring to. Probably the real usage data is smoother.

relatedly how paths towards realizing the Long Reflection are most likely totalitarian

Peter Thiel touches on this point in a recent interview where he argues against Bostrom's vulnerable world hypothesis. 

Positive knock-on effects from funding animal welfare are likely far greater than those from funding global health on the present margin. 

2
Michael St Jules 🔸
What knock-on effects do you have in mind?

There's a new paper on jhana (in Cerebral Cortex) out of Matthew Sacchet's Harvard Center: Fu Zun Yang et al. 2023 

Got it, thanks. I'm interested in the cattle analysis because cows yield ~4x more meat than pigs per slaughter, and could perform even better than that when factoring in cognition. 

This is beautiful, thank you for creating it. 

Did you look at cows as part of the analysis? 

6
Bob Fischer
We don't, I'm sorry to say. The numbers would be comparable to pigs, but because cows are farmed in such low numbers by comparison, we didn't prioritize them. I know we need to extend the analysis, given how many people have asked about cattle!

Apart from pivoting to “x-risk”, what else could we do?


Cultivate approaches to heal psychological wounds and get people above baseline on ability to coordinate and see clearly. 

CFAR was in the right direction goalwise (though its approach was obviously lacking). EA needs more efforts in that direction. 

When is the independent investigation expected to complete? 

9
David M
In the post Will said:

I wrote a thread with some reactions to this. 

(Overall I agree with Tyler's outlook and many aspects of his story resonate with my own.) 

(b) intriguing IMO and I want to hear more -- #10, #11, #16, #19

10. nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang 

See discussion in this thread 


11. EA correctly identifies improving institutional decision-making as important but hasn't yet grappled with the radical political implications of doing that 

This one feels like it requires substantial unpacking; I'll probably expand on it further at some point. 

Essentially the existing power structure is composed of organizations (mostly l... (read more)

But I have a feeling that the community is taking revenge on him for all the tension the recent events left. This is cruel. I'm honestly worried whether the guy is okay. I hope he is. 

The scapegoat mechanism comes to mind: 

The key to Girard's anthropological theory is what he calls the scapegoat mechanism. Just as desires tend to converge on the same object, violence tends to converge on the same victim. The violence of all against all gives way to the violence of all against one. When the crowd vents its violence on a common scapegoat, unity is restored. Sacrificial rites the world over are rooted in this mechanism.

I wrote in this direction a few years ago, and I'm very glad to see you clearly stating these points here. 

From What's the best structure for optimal allocation of EA capital? – 

So EA is currently in a regime wherein the large majority of capital flows from a single source, and capital allocation is set by a small number of decision-makers.

Rough estimate: if ~60% of Open Phil grantmaking decisions are attributable to Holden, then 47.2% of all EA capital allocation, or $157.4M, was decided by one individual in 2017. 2018 & 2019 will probably

... (read more)
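For reference, here is the arithmetic implied by the quoted estimate. The Open Phil total and the total EA capital figure below are back-calculated from the 60%, 47.2%, and $157.4M quoted above; they are my reconstruction, not numbers taken from the original post:

```latex
% Reconstruction from the quoted figures (assumptions, not sourced from the post):
%   amount decided by Holden in 2017            = $157.4M
%   assumed share of Open Phil decisions        = 60%
%   implied Open Phil grantmaking in 2017       = 157.4 / 0.60  ≈ $262M
%   implied total EA capital allocation in 2017 = 157.4 / 0.472 ≈ $333M
\[
\underbrace{0.60}_{\text{Holden's share}} \times \$262\text{M} \approx \$157.4\text{M},
\qquad
\frac{\$157.4\text{M}}{\$333\text{M}} \approx 47.2\%
\]
```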

... there is a lot we can actually do. We are currently working on it quite directly at Conjecture

I was hoping this post would explain how Conjecture sees its work as contributing to the overall AI alignment project, and was surprised to see that that topic isn't addressed at all. Could you speak to it?

Isn't the point of being placed on leave in a case like this to (temporarily) remove the trustee from their duties and responsibilities while the situation is investigated, as their ability to successfully execute on their duties and responsibilities has been called into question? 

(I'm not trying to antagonize here – I'm genuinely trying to understand the decision-making of EA leadership better as I think it's very important for us to be as transparent as possible in this moment given how it seems the opacity around past decision-making contributed to... (read more)

Thanks, Claire. Can you comment on why Nick Beckstead and Will MacAskill were recused rather than placed on leaves of absence? 

Our impression when we started to explore different options was that one can’t place a trustee on a leave of absence; it would conflict with their duties and responsibilities to the org, and so wasn’t a viable route.

Thanks, Nicole! It's helpful to hear updates like this from EA leadership in the midst of all these scandals. 

Can you comment on why Nick Beckstead was recused rather than placed on a leave of absence?

Thank you for a good description of what this feels like. But I have to ask… do you still "want to join that inner circle" after all this? Because this reads like your defense of using a burner account is that it preserves your chance to enter/remain in an inner ring which you believe to be deeply unethical.

Anonymity is not useful solely for preserving the option to join the critiqued group. It can also help buffer against reprisal from the critiqued group.  

See Ben Hoffman on this (a): 

"Ayn Rand is the only writer I've seen get both these ... (read more)

I don't think snark cuts against quality, and we come from a long lineage of it.

5
Lukas_Gloor
Which quality? I really liked the first part of your comment and even weakly upvoted it on both votes for that reason, but I feel like the second point has no substance. (Longtermist EA is about doing things that existing institutions are neglecting; not doing the work of existing institutions better.) 

It seems like we're talking past each other here, in part because, as you note, we're referring to different EA subpopulations: 

  1. Elite EAs who mentored SBF & incubated FTX
  2. Random/typical EAs who Cremer would hang out with at parties 
  3. EA grant recipients 

I don't really know who knew what when; most of my critical feeling is directed at folks in category (1). Out of everyone we've mentioned here (EA or not), they had the most exposure to and knowledge about (or at least opportunity to learn about) SBF & FTX's operations. 

I think we sho... (read more)

Wow "Asana Philanthropy Fund" makes the comparison so sharp. 

Thanks. I think Cowen's point is a mix of your (a) & (b). 

I think this mixture is concerning and should prompt reflection about some foundational issues.

... question in this space is whether EAs have allocated their attention wisely. The answer seems to be "mostly yes." In the case of FTX, heavyweights like Temasek, Sequoia Capital, and SoftBank, with billions on the line, did their due diligence but still missed what was happening. Expecting EAs to be better evaluators of FTX's health than established hedge funds is somewhat odd. 

Two things: 

  1. Sequoia et al. isn't a good benchmark – 

    (i) those funds were doing diligence in a very hot investing environment where there was a substantial tradeoff between dept
... (read more)
3
Lukas_Gloor
I think your comment would've been a lot stronger if you had left it at 1. Your second point seems a bit snarky. 
  1. a. Sequoia led FTX's round B in July 2021 and had notably more time to notice any irregularities than grant recipients. 
     b. I would expect the funds to have much better expertise in something like "evaluating the financial health of a company". 

    Also, it seems you are somewhat shifting the goalposts: Zoe's paragraph starts with "On Halloween this past year, I was hanging out with a few EAs." It is reasonable to assume the reader will interpret this as hanging out with basically random/typical EAs, and the argument should hold for those people. Your ar
... (read more)

I read Cremer as gesturing in these passages to the point Tyler Cowen made here (a): 

Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.

I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.  And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated).  When i

... (read more)

I previously addressed this here.

In particular, the shot at Cold Takes being "incomprehensible" didn't sit right with me - Holden's blog is a really clear presentation of the views of those concerned about the risk that misaligned AI poses to the long-run future, regardless of whether you agree with it or not.

Agree that her description of Holden's thing is uncharitable, though she might be describing the fact that he self-describes his vision of the future as 'radically unfamiliar... a future galaxy-wide civilization... seem[ing] too "wild" to take seriously... we live in a wild time, and should be ready for ... (read more)

So, out of your list of 5 organizations, 4 of them were really very much quite bad for the world, by my lights, and if you were to find yourself to be on track to having a similar balance of good and evil done in your life, I really would encourage you to stop and do something less impactful on the world. 

This view is myopic (doesn't consider the nth-order effects of the projects) and ahistorical (compares them to present-day moral standards rather than the counterfactuals of the time). 

Your previous post demonstrated much stronger reasons not to trust you than to distrust those you accused of being untrustworthy. 

... strikes me as "not nice" fwiw, though overall it's been cool to see how you've both engaged with this conversation.

Probably Good is a reasonable counterexample to my model here (though it's not really a direct competitor – they're aiming at a different audience and consulted with 80k on how to structure the project).  

It'll be interesting to see how its relationships with 80k and Open Phil develop as we enter a funding contraction. 

I'm curious to read some of the reasoning of those who disagreed with this, as I'm currently high-conviction on these recommendations but feel open to updating substantially (strong beliefs weakly held). 

If you or me or anyone else wanted to start our own organisation under a new brand with similar goals to CEA or GWWC I don't think anyone would try to stop us!

My model is that no one would try to formally stop this effort (i.e. via a lawsuit), though it would receive substantial pushback in the form of: 

  • Private communication discouraging the effort 
  • Organizers of the effort excluded and/or removed from coordinating fora, such as EA slack groups 
  • Public writing suggesting that the effort be rolled into the existing EA movement 
  • Attempts (by
... (read more)
2
ludwigbald
I'd disagree. Probably Good, a direct competitor to 80k, is overall supported by the community, though it gets less support than 80k. CEA goes out of their way to solicit competition in their new update. But probably a competitor to CEA would not end up being fiscally sponsored by EVF, and would receive less support than EVF. However, I think instead of starting new orgs, the EA community should first try to improve the ones we have today.

I don't follow what you're pointing to with "beholden to the will of every single participant in this community."

My point is that CEA was established as a centralizing organization to coordinate the actions and branding of the then-nascent EA community. 

Whereas Luke's phrasing suggests that CEA drove the creation of the EA community, i.e. CEA was created and then the community sprang up around it. 

CEA was set up before there was an EA movement (the term "effective altruism" was invented while setting up CEA to support GWWC/80,000 Hours).

The coinage of a name for a movement is different from the establishment of that movement. 

That's true, but before the brand "Effective Altruism" existed, there was no reason why starting an organisation using that name should have made the founders beholden to the will of every single participant in this community - you'd need to conjecture a pretty unreasonable amount of foresight and scheming to think that even back then the founders were trying to structure these orgs in a manner designed to maintain central control over the movement.

If you or me or anyone else wanted to start our own organisation under a new brand with similar goals to CEA or GWWC I don't think anyone would try to stop us!

Another conflict-of-interest vector is that EVF board members could influence funding to EVF sub-orgs via other positions they hold, e.g. Open Phil (where Claire Zabel works as a senior program officer) funds CEA (a sub-org of EVF, where Claire is a board member).  

I'm hesitant about this angle. It seems to be reasonably common for major funders to get a seat on the board of nonprofits they fund, in order to give them more influence. (There was some good discussion of this elsewhere on the Forum recently, but I can't find it right now.)

Ah ha: 

https://ev.org/charity (a)

Effective Ventures Foundation is governed by a board of five trustees (Will MacAskill, Nick Beckstead, Tasha McCauley, Owen Cotton-Barratt, and Claire Zabel) (the “Board”). The Board is responsible for overall management and oversight of the charity, and where appropriate it delegates some of its functions to sub-committees and directors within the charity.

Who sits on the board of the Effective Ventures Foundation? 

5
Milan Griffes
Ah ha:  https://ev.org/charity  (a) 

"What actions would you like to see from EA organizations or EA leadership in the next few months?" 

  • Pausing new grant investigations 
  • Pausing public outreach and other attempts to grow the movement 
  • Something approximating a formal truth & reconciliation process 
  • More inner work (therapy, meditation, movement practices, self-directed reflection, time in nature, pursuit of Aristotelian leisure especially by working with one's hands) 
2
Milan Griffes
I'm curious to read some of the reasoning of those who disagreed with this, as I'm currently high-conviction on these recommendations but feel open to updating substantially (strong beliefs weakly held). 

I pulled it down for a while, and just reposted it.

As Shakeel wrote here, the leaders of EA organizations can’t say a lot right now, and we know that’s really frustrating. 

** the leaders of EA organizations are deciding not to say a lot right now... 

This post just seems like a snarky way of saying you disagree with their decision, but without offering any actual arguments against.

Here are some jumping-off points for reflecting on how one might update their moral philosophy given what we know so far. 

From this July 2022 FactCheck article (a): 

Bankman-Fried has provided Protect Our Future PAC with the majority of its donations. The group has raised $28 million for the 2022 election cycle as of June 30, with $23 million from Bankman-Fried. Nishad Singh, who serves as head of engineering at FTX, has donated another $1 million.

As of July 21, the PAC has spent $21.3 million on independent expenditures, exclusively in Democratic primaries for House seats. 

This level of spending makes Protect Our Future PAC the third highest among outside spe

... (read more)