All of Milan_Griffes's Comments + Replies

There's a new paper on jhana (in Cerebral Cortex) out of Matthew Sacchet's Harvard Center: Fu Zun Yang et al. 2023 

Got it, thanks. I'm interested in the cattle analysis because cows yield ~4x more meat than pigs per slaughter, and could perform even better than that when factoring in cognition. 

This is beautiful, thank you for creating it. 

Did you look at cows as part of the analysis? 

6 · Bob Fischer · 7mo
We don't, I'm sorry to say. The numbers would be comparable to pigs, but because cows are farmed in such low numbers by comparison, we didn't prioritize them. I know we need to extend the analysis, given how many people have asked about cattle!

Apart from pivoting to “x-risk”, what else could we do?


Cultivate approaches to heal psychological wounds and get people above baseline on ability to coordinate and see clearly. 

CFAR was in the right direction goal-wise (though its approach was obviously lacking). EA needs more efforts in that direction. 

When is the independent investigation expected to complete? 

9 · David M · 1y
In the post Will said:

I wrote a thread with some reactions to this. 

(Overall I agree with Tyler's outlook and many aspects of his story resonate with my own.) 

(b) intriguing IMO and I want to hear more -- #10, #11, #16, #19

10. nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang 

See discussion in this thread 


11. EA correctly identifies improving institutional decision-making as important but hasn't yet grappled with the radical political implications of doing that 

This one feels like it requires substantial unpacking; I'll probably expand on it further at some point. 

Essentially the existing power structure is composed of organizations (mostly l... (read more)

But I have a feeling that the community is taking revenge on him for all the tension the recent events left. This is cruel. I'm honestly worried about whether the guy is OK. I hope he is. 

The scapegoat mechanism comes to mind: 

The key to Girard's anthropological theory is what he calls the scapegoat mechanism. Just as desires tend to converge on the same object, violence tends to converge on the same victim. The violence of all against all gives way to the violence of all against one. When the crowd vents its violence on a common scapegoat, unity is restored. Sacrificial rites the world over are rooted in this mechanism.

I wrote in this direction a few years ago, and I'm very glad to see you clearly stating these points here. 

From What's the best structure for optimal allocation of EA capital? – 

So EA is currently in a regime wherein the large majority of capital flows from a single source, and capital allocation is set by a small number of decision-makers.

Rough estimate: if ~60% of Open Phil's grantmaking decisions are attributable to Holden, then 47.2% of all EA capital allocation, or $157.4M, was decided by one individual in 2017. 2018 & 2019 will probably

... (read more)
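To make the arithmetic behind this rough estimate explicit, here is a minimal sketch that backs out the figures implied by the quoted numbers (the total-capital and Open Phil amounts below are reconstructions from the quote, not sourced data):

```python
# Back out the figures implied by the quoted estimate. All derived
# values are reconstructions from the quote, not sourced data.
holden_decided = 157.4e6      # $157.4M decided by one individual (quoted)
holden_share_of_ea = 0.472    # 47.2% of all EA capital allocation (quoted)
holden_share_of_op = 0.60     # ~60% of Open Phil grantmaking (quoted)

total_ea_capital = holden_decided / holden_share_of_ea       # ≈ $333.5M
open_phil_grants = holden_decided / holden_share_of_op       # ≈ $262.3M
open_phil_share_of_ea = open_phil_grants / total_ea_capital  # ≈ 78.7%

print(f"Implied total EA capital allocation (2017): ${total_ea_capital/1e6:.1f}M")
print(f"Implied Open Phil grantmaking (2017):       ${open_phil_grants/1e6:.1f}M")
print(f"Implied Open Phil share of EA capital:      {open_phil_share_of_ea:.1%}")
```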

... there is a lot we can actually do. We are currently working on it quite directly at Conjecture

I was hoping this post would explain how Conjecture sees its work as contributing to the overall AI alignment project, and was surprised to see that that topic isn't addressed at all. Could you speak to it?

Isn't the point of being placed on leave in a case like this to (temporarily) remove the trustee from their duties and responsibilities while the situation is investigated, as their ability to successfully execute on their duties and responsibilities has been called into question? 

(I'm not trying to antagonize here – I'm genuinely trying to understand the decision-making of EA leadership better as I think it's very important for us to be as transparent as possible in this moment given how it seems the opacity around past decision-making contributed to... (read more)

Thanks, Claire. Can you comment on why Nick Beckstead and Will MacAskill were recused rather than placed on leaves of absence? 

Our impression when we started to explore different options was that one can’t place a trustee on a leave of absence; it would conflict with their duties and responsibilities to the org, and so wasn’t a viable route.

Thanks, Nicole! It's helpful to hear updates like this from EA leadership in the midst of all these scandals. 

Can you comment on why Nick Beckstead was recused rather than placed on a leave of absence?

Thank you for a good description of what this feels like. But I have to ask… do you still “want to join that inner circle” after all this? Because this reads like your defense of using a burner account is that it preserves your chance to enter/remain in an inner ring which you believe to be deeply unethical.

Anonymity is not useful solely for preserving the option to join the critiqued group. It can also help buffer against reprisal from the critiqued group.  

See Ben Hoffman on this (a): 

"Ayn Rand is the only writer I've seen get both these ... (read more)

I don't think snark cuts against quality, and we come from a long lineage of it

5 · Lukas_Gloor · 1y
Which quality? I really liked the first part of your comment and even weakly upvoted it on both votes for that reason, but I feel like the second point has no substance. (Longtermist EA is about doing things that existing institutions are neglecting; not doing the work of existing institutions better.) 

It seems like we're talking past each other here, in part because as you note we're referring to different EA subpopulations: 

  1. Elite EAs who mentored SBF & incubated FTX
  2. Random/typical EAs who Cremer would hang out with at parties 
  3. EA grant recipients 

I don't really know who knew what when; most of my critical feeling is directed at folks in category (1). Out of everyone we've mentioned here (EA or not), they had the most exposure to and knowledge about (or at least opportunity to learn about) SBF & FTX's operations. 

I think we sho... (read more)

Wow "Asana Philanthropy Fund" makes the comparison so sharp. 

Thanks. I think Cowen's point is a mix of your (a) & (b). 

I think this mixture is concerning and should prompt reflection about some foundational issues.

... question in this space is whether EAs have allocated their attention wisely. The answer seems to be "mostly yes." In the case of FTX, heavyweights like Temasek, Sequoia Capital, and SoftBank, with billions on the line, did their due diligence but still missed what was happening. Expecting EAs to be better evaluators of FTX's health than established hedge funds is somewhat odd. 

Two things: 

  1. Sequoia et al. isn't a good benchmark – 

    (i) those funds were doing diligence in a very hot investing environment where there was a substantial tradeoff between dept
... (read more)
3 · Lukas_Gloor · 1y
I think your comment would've been a lot stronger if you had left it at 1. Your second point seems a bit snarky. 
  1. a. 
    Sequoia led FTX's Series B round in Jul 2021 and had notably more time to notice any irregularities than grant recipients. 

    b.
    I would expect the funds to have much better expertise in something like "evaluating the financial health of a company". 

    Also, it seems you are somewhat shifting the goalposts: Zoe's paragraph opens with "On Halloween this past year, I was hanging out with a few EAs." It is reasonable to assume the reader will interpret this as hanging out with basically random/typical EAs, and the argument should hold for these people. Your ar
... (read more)

I read Cremer as gesturing in these passages to the point Tyler Cowen made here (a): 

Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.

I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.  And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated).  When i

... (read more)

I previously addressed this here.

In particular, the shot at Cold Takes being "incomprehensible" didn't sit right with me - Holden's blog is a really clear presentation of the views of those concerned by the risk misaligned AI poses to the long-run future, regardless of whether you agree with it or not.

Agree that her description of Holden's thing is uncharitable, though she might be describing the fact that he self-describes his vision of the future as 'radically unfamiliar... a future galaxy-wide civilization... seem[ing] too "wild" to take seriously... we live in a wild time, and should be ready for ... (read more)

So, out of your list of 5 organizations, 4 of them were really very much quite bad for the world, by my lights, and if you were to find yourself to be on track to having a similar balance of good and evil done in your life, I really would encourage you to stop and do something less impactful on the world. 

This view is myopic (doesn't consider the nth-order effects of the projects) and ahistorical (compares them to present-day moral standards rather than the counterfactuals of the time). 

Your previous post demonstrated much stronger reasons to not trust you than those you accused of being untrustworthy. 

... strikes me as "not nice" fwiw, though overall it's been cool to see how you've both engaged with this conversation.

Probably Good is a reasonable counterexample to my model here (though it's not really a direct competitor – they're aiming at a different audience and consulted with 80k on how to structure the project).  

It'll be interesting to see how its relationships with 80k and Open Phil develop as we enter a funding contraction. 

I'm curious to read some of the reasoning of those who disagreed with this, as I'm currently high-conviction on these recommendations but feel open to updating substantially (strong beliefs weakly held). 

If you or me or anyone else wanted to start our own organisation under a new brand with similar goals to CEA or GWWC I don't think anyone would try to stop us!

My model is that no one would try to formally stop this effort (i.e. via a lawsuit), though it would receive substantial pushback in the form of: 

  • Private communication discouraging the effort 
  • Organizers of the effort excluded and/or removed from coordinating fora, such as EA slack groups 
  • Public writing suggesting that the effort be rolled into the existing EA movement 
  • Attempts (by
... (read more)
2 · ludwigbald · 1y
I'd disagree. Probably Good, a direct competitor to 80k, is overall supported by the community, though it gets less support than 80k. CEA goes out of its way to solicit competition in its new update. But a competitor to CEA would probably not end up being fiscally sponsored by EVF, and would receive less support than EVF. However, I think instead of starting new orgs, the EA community should first try to improve the ones we have today.

I don't follow what you're pointing to with "beholden to the will of every single participant in this community."

My point is that CEA was established as a centralizing organization to coordinate the actions and branding of the then-nascent EA community. 

Whereas Luke's phrasing suggests that CEA drove the creation of the EA community, i.e. CEA was created and then the community sprung up around it. 

CEA was set up before there was an EA movement (the term "effective altruism" was invented while setting up CEA to support GWWC/80,000 Hours).

The coinage of a name for a movement is different from the establishment of that movement. 

That's true, but before the brand "Effective Altruism" existed, there was no reason why starting an organisation using that name should have made the founders beholden to the will of every single participant in this community - you'd need to conjecture a pretty unreasonable amount of foresight and scheming to think that even back then the founders were trying to structure these orgs in a manner designed to maintain central control over the movement.

If you or me or anyone else wanted to start our own organisation under a new brand with similar goals to CEA or GWWC I don't think anyone would try to stop us!

Another conflict-of-interest vector is that EVF board members could influence funding to EVF sub-orgs via other positions they hold, e.g. Open Phil (where Claire Zabel works as a senior program officer) funds CEA (a sub-org of EVF, where Claire is a board member).  

I'm hesitant about this angle. It seems to be reasonably common for major funders to get a seat on the board of nonprofits they fund, in order to give them more influence. (There was some good discussion of this elsewhere on the Forum recently, but I can't find it right now.)

Ah ha: 

https://ev.org/charity (a)

Effective Ventures Foundation is governed by a board of five trustees (Will MacAskill, Nick Beckstead, Tasha McCauley, Owen Cotton-Barratt, and Claire Zabel) (the “Board”). The Board is responsible for overall management and oversight of the charity, and where appropriate it delegates some of its functions to sub-committees and directors within the charity.

Who sits on the board of the Effective Ventures Foundation? 


"What actions would you like to see from EA organizations or EA leadership in the next few months?" 

  • Pausing new grant investigations 
  • Pausing public outreach and other attempts to grow the movement 
  • Something approximating a formal truth & reconciliation process 
  • More inner work (therapy, meditation, movement practices, self-directed reflection, time in nature, pursuit of Aristotelian leisure especially by working with one's hands) 

I pulled it down for a while, and just reposted it

As Shakeel wrote here, the leaders of EA organizations can’t say a lot right now, and we know that’s really frustrating. 

** the leaders of EA organizations are deciding not to say a lot right now... 

This post just seems like a snarky way of saying you disagree with their decision, but without offering any actual arguments against.

And there are a lot of reasons to decide not to say a lot right now.

Here are some jumping-off points for reflecting on how one might update their moral philosophy given what we know so far. 

From this July 2022 FactCheck article (a): 

Bankman-Fried has provided Protect Our Future PAC with the majority of its donations. The group has raised $28 million for the 2022 election cycle as of June 30, with $23 million from Bankman-Fried. Nishad Singh, who serves as head of engineering at FTX, has donated another $1 million.

As of July 21, the PAC has spent $21.3 million on independent expenditures, exclusively in Democratic primaries for House seats. 

This level of spending makes Protect Our Future PAC the third highest among outside spe

... (read more)

I mean, my primary guess here is Carrick. I don't think there was anyone besides Carrick who "decided" to make the Carrick campaign happen. 

People other than Carrick decided to fund the campaign, which wouldn't have happened without funding. 

3 · Habryka · 1y
Hmm, I don't know whether it wouldn't have happened without EA funding, but seems pretty plausible to me. I think campaign donations are public so maybe we can just see very precisely who made this decision. I also think on the funding dimension a bunch of EA leaders encouraged others to donate to the Carrick campaign in what seemed to me to be somewhat too aggressive.  I do also think there was a separate pattern around the Carrick campaign where for a while people were really hesitant to say bad things about Carrick or politics-adjacent EA because it maybe would have hurt his election chances, and I think that was quite bad, and I pushed back a bunch of times on this, though the few times I did push back on it, it was quite well-received. 

Thanks for this comment. 

I'm more interested in reflecting on the foundational issues in EA-style thinking that contributed to the FTX debacle than in ascribing wrongdoing or immorality (though I agree that the whole episode should be thoroughly investigated). 

Examples of foundational issues: 

  • FTX was an explicitly maximalist project, and maximization is perilous 
  • Following a utilitarian logic, FTX/Alameda pursued a high-leverage strategy (Caroline on leverage);  the decision to pursue this strategy didn't account for the massive ex
... (read more)

The issue is, we had a lot more on the line than their investors did. 

Big +1 

FTX is like Enron exploding in the center of EA. 

Here are some excerpts from Sequoia Capital's profile on SBF (published September 2022, now pulled). 

On career choice: 

Not long before interning at Jane Street, SBF had a meeting with Will MacAskill, a young Oxford-educated philosopher who was then just completing his PhD. Over lunch at the Au Bon Pain outside Harvard Square, MacAskill laid out the principles of effective altruism (EA). The math, MacAskill argued, means that if one’s goal is to optimize one’s life for doing good, often most good can be done by choosing to make the most money poss

... (read more)

Some corrections of the Sequoia info:

  • I've never been a grad student.
  • I'm neither Japanese nor a Japanese citizen.
  • I ‘volunteered’ in the sense that people at Alameda reached out to me, I said ok and then got paid by the hour for my help.
  • ‘(obscure, rural)’ is an exaggeration. ‘provincial’ would be a more apt adjective for the location. The main bank we used was SMBC, the second-largest bank in Japan.
  • ‘for a fee’ sounds as if it was some sort of bribe to get them to do what we wanted. But we only paid the usual transaction fees and margin that any bank wo
... (read more)
8 · RobBensinger · 1y
Sounds right to me! I agree with Eliezer that a lot of EAs are over-blaming EA for the FTX implosion, based on the facts currently known. But the Scholomance case is obviously a lot weaker than the EA case in real life, and this is a great summary of why.

Definitely: you are obviously right and Eliezer obviously wrong about this, imho. 


BUT

I do think it is hindsight bias to some degree to think that "EA" as a collective or Will MacAskill as an individual are recorded as doing something wrong, in the sense of "predictably a bad idea", at any point in the passages you quote. (I know you didn't actually claim that!) It's not immoral to tell someone to found a business, so it's definitely not immoral to tell someone to found a business and give to charity. It's not immoral to help someone make a legal, non-scam... (read more)

If you say that your business model is to hold depositor funds 1:1 and earn money from fees, but in fact you sometimes earn money via making trades with depositor funds, then you would be misrepresenting your business model. 

71 · JackM · 1y

Sure, and what is your point?

My current best guess is that WM quite reasonably understood FTX to be a crypto exchange with a legitimate business model earning money from fees - just like the rest of the world also thought. The fact that FTX was making trades with depositor funds was very likely to be a closely kept secret that no one at FTX was likely to disclose to an outsider. Why the hell would they - it's pretty shady business!

Are you saying WM should have demanded to see proof that FTX's money was being earned legitimately, even if he didn't have any ... (read more)

I asked some further questions in this direction here

Can you give some context on why Lightcone accepted an FTX Future Fund grant (a) given your view of Sam's trustworthiness? 

So far I have been running on the policy that I will accept money from people who seem immoral to me, and indeed I preferred getting money from Sam instead of Open Philanthropy or other EA funders because I thought this would leave the other funders with more marginal resources that could be used to better ends (Edit: I also separately thought that FTX Foundation money would come with more freedom for Lightcone to pursue its aims independently, which I do think was a major consideration I don't want to elide).

To be clear, I think there is a reasonabl... (read more)

I think it's good practice to try to understand a project's business model and try to independently verify the implications of that model before joining the project. 

This seems to be “not even wrong” - FTX’s business model isn’t and never was in question. The issue is Sam committing fraud and misappropriating customer funds, and there being a total lack of internal controls at FTX that made this possible.

My understanding is that FTX's business model fairly straightforwardly made sense? It was an exchange, and there are many exchanges in the world that are successful and probably not fraudulent businesses (even in crypto - Binance, Coinbase, etc). As far as I can tell, the fraud was due to supporting specific failures of Alameda due to bad decisions, but wasn't inherent to FTX making any money at all?
