The recent FTX scandal has, I think, caused a major dent in the confidence many in the EA community have in our leadership. It seems increasingly less obvious to me that control of much of EA by a narrow group of funders and thought leaders is the best way for this community, full of smart and passionate people, to do good in the world. My assumption had been that we defer a lot of power (intellectual, social and financial) to a small group of broadly unaccountable, non-transparent people because they are uniquely good at making decisions, noticing risks to the EA enterprise and combatting them, and that this unique competence is what justifies the power structures we have in EA. A series of failures by the community this year, including the Carrick Flynn campaign and now the FTX scandal, has shattered my confidence in this group. I really think EA is amazing, and I am proud to be on the committee of EA Oxford (this post represents my own views), to have been a summer research fellow at CERI and to have spoken at EAGx Rotterdam; my confidence in the EA leadership, however, is exceptionally low, and I think having answers to some of these questions would be very useful.

An aside: maybe I’m wrong about power structures in EA being unaccountable, centralised and non-transparent. If so, the fact that they feel that way is itself a sign that something is going wrong.


Thus, I have a number of questions for the “leadership group” about how decisions are made in EA and the rationale behind them. This list is neither exhaustive nor meant as an attack; there may well be innocuous answers to many of these questions. Moreover, not all of them are linked to SBF and that scandal, and many probably have perfectly rational explanations.

Nonetheless, I think now is the appropriate time to ask difficult questions of the EA leadership, so this is simply my list of such questions. I apologise if anyone takes offence at any of them (I know it is a difficult time for everyone), as I am sure we are all trying our best. Nonetheless, I think we can only have as positive an impact as possible if we are willing to examine ourselves and see what we have done wrong.


 

  1. Who is invited to the coordination forum and who attends? What sort of decisions are made? How does the coordination forum impact the direction the community moves in? Who decides who goes to the coordination forum? How? What's the rationale for keeping the attendees of the coordination forum secret (or is it not purposeful)?
  2. Which senior decision makers in EA played a part in the decision to make the Carrick Flynn campaign happen? Did any express the desire for it not to? [The following question has been answered] Who signed off on the decision to make the campaign manager someone with no political experience? (Edit: I have now received information that the campaign did its own hiring of a campaign manager and had experienced consultants assist throughout the campaign. So whether I agree with this or not, the campaign manager question seems quite different from the issues I raise elsewhere in this post.)
  3. Why did Will MacAskill introduce Sam Bankman-Fried to Elon Musk with the intention of getting SBF to help Elon buy twitter? What was the rationale that this would have been a cost effective use of $8-15 Billion? Who else was consulted on this?
  4. Why did Will MacAskill choose not to take on board any of the suggestions Zoe Cremer set out when she met with him?
  5. Will MacAskill has expressed public discomfort with the degree of hero-worship towards him. What steps has he taken to reduce this? What plans have decision makers tried to enact to reduce the amount of hero worship in EA?
  6. The EA community prides itself on being an open forum for discussion without fear of reprisal for disagreement. A very large number of people in the community, however, do not feel that it is, and feel pressure to conform and not express their disagreement with the community, with senior leaders, or even with lower-level community builders. Have there been discussions within the community health team about how to deal with this? What approaches are they taking community-wide, rather than just dealing with ad hoc incidents?
  7. A number of people have expressed suspicion or worry that they have been rejected from grants because of publicly expressing disagreements with EA. Has this ever been part of the rationale for rejecting someone from a grant?
  8. FTX Future Fund decided to fund me on a project working on SRM and GCR, but refused to publicise it on their website. How many other projects were funded but not publicly disclosed? Why did they decide to not disclose such funding?
  9. What sort of coordination, if any, goes on around which EAs talk to the media, write highly publicised books, go in curricula etc? What is the decision making procedure like?
  10. The image, both internally and externally, of SBF was that he lived a frugal lifestyle, which it turns out was completely untrue (and not even a well-kept secret). Was this known when Rob Wiblin interviewed SBF on the 80,000 Hours podcast and held him up for his frugality?



I don't think I am a great representative of EA leadership, given my somewhat bumpy relationship with and feelings about a lot of EA stuff, but nevertheless I think I have a bunch of the answers you are looking for:

Who is invited to the coordination forum and who attends? What sort of decisions are made? How does the coordination forum impact the direction the community moves in? Who decides who goes to the coordination forum? How? What's the rationale for keeping the attendees of the coordination forum secret (or is it not purposeful)?

The Coordination Forum is a very loosely structured retreat that's been happening around once a year. At least the last two that I attended were structured completely as an unconference with no official agenda, and the attendees just figured out themselves who to talk to, and organically wrote memos and put sessions on a shared schedule. 

At least as far as I can tell, basically no decisions get made at Coordination Forum; its primary purpose is building trust and digging into gnarly disagreements between different people who are active in EA community building, and who seem to get along well with the others attending (with some bal…

I think it could be a cost-effective use of $3-10 billion (I don't know where you got the $8-15 billion from; the realistic amounts look closer to $3 billion). My guess is that it's not, but Twitter does seem to have a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, so if you had taken Sam's net worth at face value at the time, this didn't seem like a crazy idea to me.

The 15 billion figure comes from Will's text messages themselves (pages 6-7). Will sends Elon a text about how SBF could be interested in going in on Twitter; Elon Musk asks, "Does he have huge amounts of money?" and Will replies, "Depends on how you define "huge." He's worth $24B, and his early employees (with shared values) bump that up to $30B. I asked how much he could in principle contribute and he said: "~1-3 billion would be easy, 3-8 billion I could do, ~8-15b is maybe possible but would require financing"



Makes sense. I think I briefly saw that and interpreted the last section as basically saying "OK, more than $8b will be difficult", but the literal text does seem like it was trying to make $8b+ seem more plausible.

It seems weird to me that EAs would think going in with Musk on a Twitter deal would be worth $3-10 billion, let alone up to 15 (especially of money that at the time, in theory, would have been counterfactually spent on longtermist causes). Do you really believe this? I've never seen 'buying up social media companies' brought up as a cause area on the EA Forum, at EA events, in EA-related books or podcasts, or heard any of the leaders talk about it. I f…
S.E. Montgomery:
Yeah, there could be some public stuff about this and I'm just not aware of it. And sorry, I wasn't trying to say that people are only allowed to say that something 'makes sense' after having discussed the merits of it publicly. I was more trying to say that I would find it concerning for major spending decisions (billions of dollars in this case) to be made without any community consultation, only for people to justify it afterwards because at face value it "makes sense." I'm not saying that I don't see potential value in purchasing Twitter, but I don't think a huge decision like that should be justified based on quick, post-hoc judgements. If SBF wanted to buy Twitter for non-EA reasons, that's one thing, but if the idea here is that purchasing Twitter alongside Elon Musk is actually worth billions of dollars from an EA perspective, I would need to see way more analysis, much like the significant analysis that has been done for AI safety, biorisk, animal welfare, and global health and poverty. (We're a movement that prides itself on using evidence and reason to make the world better, after all.)

Thanks for clarifying that - that makes more sense to me, and I agree that there was little that should have been done at that specific point. The lead-up to getting to that point is much more important.

If SBF wanted to buy Twitter for non-EA reasons, that's one thing, but if the idea here is that purchasing Twitter alongside Elon Musk is actually worth billions of dollars from an EA perspective, I would need to see way more analysis, much like the significant analysis that has been done for AI safety, biorisk, animal welfare, and global health and poverty.

If you think investing in Twitter is close to neutral from an investment perspective (maybe reasonable at the time, definitely not by the time Musk was forced to close) then the opportunity cost isn't really billions of dollars. Possibly this would have been an example of marginal charity.

S.E. Montgomery:
I can see where you're coming from with this, and I think purely financially you're right: it doesn't make sense to think of it as billions of dollars 'down the drain.' However, if I were to do a full analysis of this (in the framing of this being a decision based on an EA perspective), I would want to ask some non-financial questions too, such as:

  • Does the EA movement want to be further associated with Elon Musk than we already are, including any changes he might want to make with Twitter? What are the risks involved? (Based on what we knew before the Twitter deal.)
  • Does the EA movement want to be in the business of purchasing social media platforms? (In the past, we have championed causes like global health and poverty, reducing existential risks, and animal welfare; this is quite a shift from those into a space that is more about power and politics [https://www.washingtonpost.com/technology/2022/06/02/musk-twitter-tesla-china-impact/], particularly given Musk's stated political views/aims leading up to this purchase.)
  • How might the EA movement shift because of this? (Some EAs may be on board; others may see it as quite surprising and not in line with their values.)
  • What were SBF's personal/business motivations for wanting to acquire Twitter, and how would those intersect with EA's vision for the platform?
  • What trade-offs would be made that would impact other cause areas?
Marcus Rademacher:
This is the bit I think was missed further up the thread. Regardless of whether buying a social media company could reasonably be considered EA, it's fairly clear that Elon Musk's goals both generally and with Twitter are not aligned with EA. MacAskill is allowed to do things that aren't EA-aligned, but it seems to me to be another case of poor judgement by him (in addition to his association with SBF).
Aleksi Maunu:
For what it's worth, connecting SBF and Musk might've been a time-sensitive situation for one reason or another. There would also still have been time to debate the investment in the larger community before the deal actually went through.
Aleks_K:
Seems quite implausible to me that this would have happened, and unclear whether it would have been good. (Assuming "larger EA community" implies more than private conversations between a few people.)

My reading (and of course I could be completely wrong) is that SBF wanted to invest in Twitter (he seems to have subsequently pitched the same deal through Michael Grimes), and Will was helping him out. I don't imagine Will felt it was any of his business to advise SBF as to whether or not this was a good move. And I imagine SBF expected the deal to make money, and therefore not to have any cost for his intended giving.

Part of the issue here is that people have been counting the bulk of SBF's net worth as "EA money". If you phrase the question as "Should EA invest in Twitter?" the answer is no. EA should probably also not invest in Robinhood or SRM. If SBF's assets truly were EA assets, we ought to have liquidated them long ago and either spent them or invested them reasonably. But they weren't.

It's hard to read the proposal as only being motivated by a good business investment, because Will says in his opening DM:

Sam Bankman-Fried has for a while been potentially interested in purchasing it and then making it better for the world.

[sorry for multiple comments, seems better to split out separate points]

jacquesthibs:
I feel like anyone reaching out to Elon could say "making it better for the world" because that's exactly what would resonate with Elon. It's probably what I'd say to get someone on my side and communicate I want to help them change the direction of Twitter and "make it better."
David Mears:
Will helping SBF out de facto makes it more likely to happen, so he should only do it if he thinks it's a good move.
RobBensinger:
I disagree with the implied principle. E.g., I think it's good for me to help animal welfare and global poverty EAs with their goals sometimes (when I'm in an unusually good position to help out), even though I think their time and money would be better spent on existential risk mitigation.
David Mears:
Agreed that a principle of 'only cooperate on goals you agree with' is too strong. On the object level: if MacAskill was personally neutral or skeptical on the question of whether SBF should buy Twitter, do you think he should have helped SBF out? When is cooperation inappropriate? Maybe when the outcome you're cooperating on is more consequential (in the bad direction, according to your own goals) than the expected gains from establishing reciprocity. This would have been the largest purchase in EA history, replacing much or most of FTXFF with "SBF owns part of Twitter". I think when the outcome is as consequential as that, we should hold cooperators responsible as if they were striving for the outcome, because the effects of helping SBF buy Twitter greatly outweigh the benefits from improving Will's relationship with SBF (which I model as already very good).
RobBensinger:
If Will had no reason to think SBF was a bad egg, then I'd guess he should have helped out even if he thought the thing was not the optimal use of Sam's money. (While also complaining that he thinks the investment is a bad idea.)
Emrik:
If Will thought SBF was a "bad egg", then it could be more important to establish influence with him, because you don't need to establish influence (as in 'willingness to cooperate') with someone who is entirely value-aligned with you.
David Mears:
You're right to say people had been assuming SBF's wealth belonged to EA: I had. In the legal sense it wasn't, and we paid a price for that. I think it was fair to argue that the wealth 'rightfully' belonged to the EA community, in the sense that SBF should defer to representatives of EA on how it should be used, and would be defecting by spending a few billion on personal interests. The reason for that kind of principle is to avoid a situation where EA is captured or unduly influenced by the idiosyncratic preferences of a couple of mega-donors.
Dancer:
Are you arguing that EA shouldn't associate with / accept money from mega-donors unless they give EA the entirety of their wealth?
David Mears:
The answer is different for each side of your slash. I see two kinds of relationships EA can have to megadonors:
  1. Uneasy, arms'-length, untrusting, but still taking their money.
  2. Friendly, valorizing, celebratory, going to the same parties, conditional on the donor ceding control of a significant fraction of their wealth to a donor-advised fund (rather than just pledging to give).
S.E. Montgomery:
I agree that it's possible SBF just wanted to invest in Twitter in a non-EA capacity. My comment was a response to Habryka's comment which said: If SBF did just want to invest in Twitter (as an investor/as a billionaire/as someone who is interested in global politics, and not from an EA perspective) and asked Will for help, that is a different story. If that's the case, Will could still have refused to introduce SBF to Elon, or pushed back against SBF wanting to buy Twitter in a friend/advisor capacity (SBF has clearly been heavily influenced by Will before), but maybe he didn't feel comfortable with doing either of those.

Investing in assets expected to appreciate can be a form of earning to give (not that Twitter would be a good investment, IMO). That's how Warren Buffett makes money, and probably nobody in EA has criticised him for it. Investing in a for-profit is very different from donating to something, and is guided by different principles, because you expect to (at least) get your money back and can invest it again or donate it later (this difference is one of the reasons microloans became so hugely popular for a while).

On the downside, concentrating assets (in any company, not just Twitter) is a bad financial strategy, but on the upside, having some influence at Twitter could be useful to promote things like moderation rules that improve the experience of users and increase the prevalence of genuine debate and other good things on the platform.

Hi Oli — I was very saddened to hear that you thought the most likely explanation for the discussion of frugality in my interview with Sam was that I was deliberately seeking to mislead the audience.

I had no intention to mislead people into thinking Sam was more frugal than he was. I simply believed the reporting I had read about him and he didn’t contradict me.

It’s only in recent weeks that I learned that some folks such as you thought the impression about his lifestyle was misleading, notwithstanding Sam's reference to 'nice apartments' in the interview:

"I don’t know, I kind of like nice apartments. ... I’m not really that much of a consumer exactly. It’s never been what’s important to me. And so I think overall a nice place is just about as far as it gets."

Unfortunately as far as I can remember nobody else reached out to me after the podcast to correct the record either.

In recent years, in pursuit of better work-life balance, I’ve been spending less time socialising with people involved in the EA community, and when I do, I discuss work with them much less than in the past. I also last visited the SF Bay Area way back in 2019 and am certainly not part of the 'crypto' social scene. That may help to explain why this issue never came up in casual conversation.

Inasmuch as the interview gave listeners a false impression about Sam I am sorry about that, because we of course aim for the podcast to be as informative and accurate as possible.

Gideon Futerman:
Hey Rob, thanks for your in-depth response to this question, by the way; it's really appreciated and exactly what I was looking for from this post! It is pretty strange that no one reached out to you in a professional capacity to correct this, but that certainly isn't your fault!
Habryka:
Makes sense; seems like a sad failure of communication :( Looks like on my side I had an illusion of transparency, which made me feel like you must very likely know about this, which made me expect that a conversation about it would end up more stressful than it probably would have been. I expected that even if you didn't do it intentionally (which I thought was plausible, but even at the time not very likely), I would still have had to deal with some subconscious or semi-intentional bias that would have made the conversation pretty difficult. I do now think it's very likely that the conversation would have just gone fine, and maybe would have successfully raised some flags.

I do wonder whether there was some way to catch this kind of thing. If the podcasts were reliably posted to the forum with transcripts (which I think would be a great idea anyway), perhaps there would have been a higher chance of someone leaving a comment pointing out the inconsistency (I think I, at least, would have been more likely to do that). My guess is there are also various other lessons to take away from this, and I am interested in more detail on what you and other people at 80k knew, but that doesn't seem necessary to go into right now. I appreciate you replying here.

Separately from the FTX issue, I'd be curious to hear you dissect which of Zoe's ideas you think are worth implementing, which would make things worse, and why.

 

My takes:

  • Set up whistleblower protection schemes for members of EA organisations => seems pretty good if there is a public commitment from an EA funder to something like "if you whistleblow, we'll cover your salary if you are fired while you search for another job" or something like that
  • Transparent listing of funding sources on each website of each institution => Seems good to keep track of who receives money from who
  • Detailed and comprehensive conflict of interest reporting in grant giving => My sense is that this is already handled sensibly enough, though I don't have great insight into grantgiving institutions
  • Within the next 5 years, each EA institution should reduce their reliance on EA funding sources by 50% => this seems bad for incentives and complicated to put into action
  • Within 5 years: EA funding decisions are made collectively => seems like it would increase friction and likely decrease the quality of the decisions, though I am willing to be proven wrong
  • No fireside chats at EAG with leaders. I…

I think I am across the board a bit more negative than this, but yeah, this assessment seems approximately correct to me. 

On the whistleblower protections: I think real whistleblower protection would be great, but setting it up is actually really hard, and it's very common in the real world that institutions like this end up as traps, net-negative and captured by bad actors in ways that strengthen the very problems they are trying to fix.

As examples: many university health departments are basically traps where, if you go to them, they expel you from the university because you outed yourself as not mentally stable. Many PR departments are traps that will report your complaints to management and identify you as a dissenter. Many regulatory bodies are weapons that bad actors use to build moats around their products (indeed, it looks like crypto regulatory bodies in the U.S. ended up being played by SBF, and were one of the main tools he used against his competitors). Many community dispute committees end up being misled and siding with perpetrators instead of victims (a lesson the rationality community learned from the Brent situation).

I think it's possible to set up good institutions like this, but rushing towards it is quite dangerous and in-expectation bad, and the details of how you do it really matter (and IMO it's better not to do anything here than to do it without trying exceptionally hard to make it go well).

It seems worth noting that UK employment law has provisions to protect whistleblowers and for this reason (if not others) all UK employers should have whistleblowing policies.  I tend to assume that EA orgs based in the UK are compliant with their obligations as employers and therefore do have such policies.  Some caution would be needed in setting up additional protections, e.g. since nobody should ever be fired for whistleblowing, why would you have a policy to support people who were?

In practice, I notice two problems.  Firstly, management (particularly in small organisations) frequently circumvent policies they experience as bureaucratic restrictions on their ability to manage.  Secondly, disgruntled employees seek ways to express what are really personal grievances as blowing the whistle.

Greg_Colbourn:
Not [https://forum.effectivealtruism.org/posts/efGNMe6uB87qXozXJ/ny-times-on-the-ftx-implosion-s-impact-on-ea?commentId=EdkRJxkq3otKXqFqd] always [https://forum.effectivealtruism.org/posts/WdeiPrwgqW2wHAxgT/a-personal-statement-on-ftx?commentId=byP6muwztkrwXBLMG] !

Which senior decision makers in EA played a part in the decision to make the Carrick Flynn campaign happen? Did any express the desire for it not to? Who signed off on the decision to make the campaign manager someone with no political experience?

I would add that SBF and people around him decided to invest a lot of resources into this. As far as I can tell, he didn't seem interested in people's thoughts on whether this was a good idea. Most EAs thought it wasn't wise to spend so much on the campaign.

I also just made an edit after reflecting a bit more on it and talking to some other people: 

[edit: On more reflection and talking to some more people, my guess is there was actually more social pressure involved here than this paragraph implies. I think it was closer to "a bunch of kind-of-but-not-very-influential people reached out to him and told him that they think it would be quite impactful and good for the world if he ran". My updated model is that Carrick really wasn't personally attracted to running for office, and the overall experience was not great for him.]

Strong upvote here. I really like how you calmly assessed each of these in a way that feels very honest and has an all-cards-on-the-table feel to it. Some may still have reservations about your comments, given that you seem to at least somewhat fit this picture of EA leadership, but that feels largely indicative of a general anger at the circumstances turned inwards towards EA, which seems rather unhealthy. I certainly appreciate the OP, as this does seem like a moment ripe for asking important questions that need answers, but don't forget that those in leadership are humans who make mistakes too, and are generally people who seem really committed to trying to do what everyone in EA is: make the world a better place.

I think it's right that those in leadership are humans who make mistakes, and I am sure they are generally committed to EA; in fact, many have been real inspirations to me. Nonetheless, as a movement we were founded on the idea that good intentions are not enough, and somewhere this seems to be getting lost. I have no pretensions that I would do a better job in leadership than these people; rather, I think the way EA concentrates power (formally, and even more so informally) in a relatively small and opaque leadership group seems problematic. To justify this, we would need these decision makers to be superhuman, like Plato's Philosopher King. But they are not; they are just human.

a few times when people asked me whether to volunteer for the Carrick campaign, I said that seemed overall bad for the world

Why?

A few things (I will reply in more detail in the morning once I have worked out how to link to specific parts of your text in my comment). These comments may appear a bit blunt, and I do apologise; they are blunt for clarity's sake rather than to imply aggressiveness or rudeness.

  • With regards to the Coordination Forum: even if no "official decisions" get made there, how much influence do you think it has over the overall direction of the movement? And why are the attendees not public? If the point is to build trust between community builders and to understand the core gnarly disagreements, why are the attendees and proceedings so secretive?
  • Your Carrick Flynn answer didn't really tell me which senior EA leaders, if any, encouraged Carrick to run or knew before he announced, which is something I think is important to know. It also doesn't explain the decision around the choice of campaign manager.
  • With regards to buying Twitter: whilst it is Will's right to do whatever he wants, it really does call into question whether it is correct for him to be the "leader of EA" (or for EA to have a de facto leader in such a way). If he has that role, surely he has certain…
Geuss:
Wait, what!? What's your source of information for that figure? I get hiring a research assistant or two, but $10m seems like two orders of magnitude too much. I can't even imagine how you would spend anywhere near that much on writing a book. Where did this money come from?

Definitely not 2 orders of magnitude too much.

The book was, in Will's words, "a decade of work", with a large number of people helping to write it and a moderately large team promoting it (who did an awesome job!). There were a lot of adverts for the book, certainly around London, and Will flew around the world to promote it. I would be hugely surprised if the budget was under $1 million (I know of projects run by undergraduates with budgets over a million!), and to be honest $10 million seems to me in the right ballpark. Things just cost a lot of money, and you don't promote a book for free!

Pablo:
The source appears to be Émile P. Torres. Gideon, could you confirm that this is the case? Also, could you clarify if you ever reached out to Will MacAskill to confirm the accuracy of this figure?
Gideon Futerman:
I've heard it from a number of people saying it quite casually, so I assumed it was correct, as it's the only figure I heard bandied around and I didn't hear opposition to it. I've just tried to confirm it and don't see it publicly, so it may be wrong. They may have heard it from Emile; I don't know. So take it with a hefty pinch of salt. I don't think I have the level of access to just randomly email Will MacAskill to confirm it, but if someone could, that would be great. FYI, I think it probably would have been a fantastic use of $10 million, which is why I also think it's quite plausible.
Pablo:
If you are unable to adduce any evidence for that particular figure, I think your reply should not be "take it with a hefty pinch of salt" but to either reach out to the person in a position to confirm or disconfirm it, or else issue a retraction.
Habryka:
I think a retraction would also be misleading (since I am worried it would indicate a disconfirmation). I think editing it to say that the number comes from unconfirmed rumours seems best to me. FWIW, a $10MM estimate seems in the right order of magnitude based on random things I heard, though I also don't have anything hard to go on (my guess is that it ended up less than $10MM, but I am something like 80% confident it was more than $1.5MM, though again, purely based on vague vibes I got from talking to some people in the vague vicinity of the marketing campaign).

Why would a retraction be misleading? A valid reason for retracting a statement is failure to verify it. There is no indication in these cases that the statement is false.

If someone can't provide any evidence for a claim that very likely traces back to Emile Torres, and they can't be bothered to send a one-line email to Will's team asking for confirmation, then it seems natural to ask this person to take back the claim. But I'm also okay with an edit to the original comment along the lines you suggest.

4 · Habryka · 9d
Huh, I definitely read strikethrough text by default as "disconfirmed". My guess is I would be happy to take a bet on this and ask random readers what they think the truth value of a struck-through claim like this is. But in any case, it seems we agree that an edit is appropriate.
1 · Gideon Futerman · 9d
Well, I have put an edit in there. As for saying I "can't be bothered to send a one-line email": I'm not a journalist and really didn't expect this post to blow up as much as it did. I am literally a 19-year-old kid and not sure that Will's team will respond to me, if I'm honest. Part of the hope for this post was to get some answers, which in some cases (i.e. Rob Wiblin, thanks!) I have got, but in others I haven't.
1 · Geuss · 10d
Honestly, I think it is fine to relay second-hand information, as long as it is minimally trustworthy - i.e., heard from multiple sources - and you clearly caveat it as such. This is a forum for casual conversation, not an academic journal or a court of law. In this case, too, we are dealing with a private matter that is arguably of some public interest to the movement. It would be great if these things were fully transparent in the first place, in which case we wouldn't have to depend on hearsay. With that said: now we have heard the figure of $10m, it would be nice to know what the real sum was.

EDIT: Having just read Torres' piece, Halstead's letter to the editor, and the editorial note quoting Will's response, there is no indication that anyone has disputed the $10m figure with which the piece began. Obviously that does not make it true, but it would seem to make it more likely to be true. One thing I had not realised, though, was that this money could have been used for the promotion of the book as well as its writing.
3 · ESRogs · 17d
Can you give an example (even a made up one) of the kind of thing you have in mind here? What kinds of things sound weird and cringy to someone operating within an EA framework, but are actually valuable from an EA perspective? (Like, play-pumps-but-they-actually-work-this-time? Or some kind of crypto thing that looks like a scam but isn't? Or... what?)

My claims evoke cringe from some readers on this forum, I believe, so I can supply some examples:

  1. epistemology
    • ignore subjective probabilities assigned to credences in favor of unweighted beliefs.
    • plan not with probabilistic forecasting but with deep uncertainty and contingency planning.
    • ignore existential risk forecasts in favor of seeking predictive indicators of threat scenarios.
    • dislike ambiguous pathways into the future.
    • beliefs filter and priorities sort.
    • cognitive aids help with memory, cognitive calculation, or representation problems.
    • cognitive aids do not help with the problem of motivated reasoning.
  2. environmental destruction
    • the major environmental crisis is population x resources > sustainable consumption (overshoot).
    • climate change is an existential threat that can now sustain itself with intrinsic feedbacks.
    • climate tipping elements will tip this century, other things equal, causing civilizational collapse.
    • the only technology suitable to save humanity from climate change, given no movement toward degrowth, is nanotechnological manufacturing.
    • nanotechnology is so hazardous that humanity would be better off extinct.
    • pursuit of renewable energy and electrificati
... (read more)
3 · acylhalide · 16d
I won't be debating all your claims here, but: IMO 1, 4, 5, 6 and 7 are worth discussing within the community. I don't know much about 3 or 8, so I won't comment. I think your views on 2 are wrong but still worth discussing.

On 6, I agree some EAs defer too much.

On 5, EAs don't discuss AI totalitarianism as much, largely because the AI risk community (mostly in the bay area) believes x-risk is a far bigger problem. But I'm not sold that x-risk conditional on AGI being deployed is >50%, and as long as that's the case there is value in discussing what happens if we get AGI without the x-risk. I also think it's fine to basically ignore the bay area community and start your approach to the problem from scratch; there's a lot of benefit to be had from thinking uncorrelated with them.

On 4, ethical debates are just hard in general; I'm not sold they ever reach a stationary point where everyone agrees. So yes, more discussion of other viewpoints would be good.

On 1, most EAs (and also myself) are partial to Bayesian epistemology, wherein you start with a prior distribution even when you have zero evidence, and update this distribution when you get more evidence. You could argue this is not a good way of handling Knightian uncertainty and that there are better ways; I would love to see that argument being made. There is a tiny bit of theoretical grounding for this approach, for instance see AIXI (a theoretical agent that does such reasoning with a Solomonoff prior) and infrabayesianism. But yeah, theoretical stuff may not apply as well to humans.

On 2, I think you're quite likely wrong that climate change has a significant probability of causing civilisational collapse. The total number of deaths from climate change will likely be under 100 million this century, although there are some tail scenarios (such as a nuclear war that can indirectly be connected with a climate event). Panic over climate change however is a common view among the public so its probably worth it for EA to publish more r
7 · Noah Scales · 16d
Oh, well thank you for suggesting that my cringy ideas are worth conversation within the community! That's very kind of you. Those ideas of mine were already discussed here, at least by me, and with some exceptions, have been met with indifference or a disagreement checkmark. That's OK with me. I was led here by a couple of Peter Singer's books and then by Galef's "Scout Mindset", by the way. I have revised her model of Scout vs Soldier, in my own mind, to encompass a broader category and additional partitions outside her model. In particular, when exploring an area of knowledge with others, we can perform in roles such as:

* Truth-building roles: mutual truth-seeking involving exchange of truthful information
  * scout (explores information and develops truthful information for themselves)
  * soldier (attacks and defends ideas in ways that self-convince of existing beliefs)
* Manipulative roles: at least one side seeking to manipulate the other without regard for the other's interests
  * salesperson (sells ideas and gathers information)
  * actor/actress (performs theatrics and optionally gathers information)

The Scout and Soldier model breaks down when people believe that:

* the truth is cheap and readily accessible, and so communication about important topics should serve other purposes than truth-building.
* everyone else is engaged in manipulating rather than truth-building, and so it's better to either withdraw or join everyone else in theatrics and sales.

One of several lessons I drew from Galef's excellent work was the contrast between those who are self-serving and those who are open to contradiction by better information. However, a salesperson can gather truthful information from you, like a scout, develop an excellent map of the territory, and then lie to your face about the territory, leaving you with a worse map than before. Persons in the role of actors can accomplish many different goals with their
4 · acylhalide · 16d
Not sure if you're being sarcastic or not, but I was being genuine! I also love your model of scout versus soldier. I agree it is possible to first honestly truth-seek and then use this knowledge to influence others in specific directions. If anything, that just seems like it could sometimes be the highest-impact thing to do, for better or for worse. And the influencing itself could be anything from gentle nudges to sales/marketing to outright deception and manipulation. I would also be interested in your climate change resources, whether or not others in EA are.
1 · Noah Scales · 16d
No, I was not being sarcastic, acylhalide. Thanks. You're interested in climate change resources from me? OK, when I have the opportunity, providing an outline of such resources to the community could be a productive thing to do. Thanks again!
3 · acylhalide · 15d
Thanks, will wait for it!
7 · Noah Scales · 15d
Oh, you know, you could help me by giving me a little feedback on what you think the community would either find most interesting or most beneficial. Here is a list of resource links that I am considering for the post:

1. assessment reports, special reports, and synthesis/summary reports from the IPCC.
2. papers that are noted by some climate scientists.
3. workshops that I have viewed online.
4. software available for modeling.
5. scientists working on relevant topics that I follow online.
6. books that I have read.
7. documentaries that I have viewed.
8. news articles that I have read.
9. reports put out by nonprofits.

The topics could cover:

1. climate change
2. pollution
3. agricultural practices
4. population changes
5. economics
6. politics
7. ecology

I would like to know what you would find interesting from the list of resource links and the list of topics; by number works well, or just say "all" for all of them or "any" if you have no preference. If there's any you would particularly discount, let me know, and offer your reasons, if you like. Also let me know what other topics or types of resources would interest you. If you cannot do any of this right now, that's OK. I am backed up with stuff to do, it will be a little while.

As far as resources that I have created, well:

* I have been messing around a bit with some simple climate models and RCP projection data to simulate changes from tipping elements that raise GHGs this century (for example, methane hydrate leaks).
* I have a basic understanding of the ideology and contexts that define those who favor environmental destruction as a necessary part of economic growth.
* I have a simplistic model of how humans can stay within an ecological niche, rather than create their own geologic epoch, as we have done.
* I have a historical account of climate change prevention efforts and their failures, but it has many gaps in it.
* I see success not as based on appro
3 · acylhalide · 15d
Thanks for the detailed reply! Mostly I'd be interested in climate change itself (1). And I'd be interested in a defence of the claim that it will cause civilisational collapse with high likelihood. For that, I'm not sure which of 1-9 have the best resources; you would know best! For instance, if you're (significantly) disagreeing with the amount of temperature rise mentioned in IPCC reports, that would require one kind of resources. If you think the temperature rise mentioned in the IPCC is roughly correct, for instance, but have a different understanding of how this would lead to civilisational collapse, that would require a different kind of resources. Identifying cruxes will help. I'd also be keen to know what pathway to civilisational collapse you're thinking about in general.
1 · Noah Scales · 14d
EDIT: You know what, acylhalide, I got a little impatient in this reply. Sorry. Let me get to work, and do my best given your previous response. Thanks. :)

---

Hm, well, there's a range of temperature rise mentioned in IPCC reports. You're discussing it as if there's one. There was one goal temperature rise, a rise of less than 1.5C GAST this century, but it's not plausible now. So I guess explaining why that is so is useful to you. When you say a different understanding of civilizational collapse, different than whose? Some scientists who helped create the IPCC report are worried about civilizational collapse, for example, Peter Carter. Are you interested in his opinions and scenario discussions? And there's several other climate scientists with similar scenario discussions, for example, about the fall of tipping elements in the climate system within the next 30-50 years. EDIT: Many climate scientists are going out of their way to underscore the plausible consequences of temperature rises greater than 2.0C GAST.

As far as what pathway I'm considering, I can explain that right now. A pathway where people deny the problem, assume that it is being fixed, or support solutions that were valid 20-30 years ago as still valid today. I'm not sure whether you consider anything outside of what is published as a consensus to be useful. I can argue the problem of civilizational collapse as either:

* predictable according to plausible scenarios of concern to (a large subgroup of) climate scientists
* predictable given contradictions in consensus reports such as the IPCC AR6
* predictable given consensus reports such as the IPCC AR6

The use of probabilities obscures the problem, by the way. What is your preference in that regard?
-7 · A.C.Skraeling · 17d
2 · Milan_Griffes · 17d
People other than Carrick decided to fund the campaign, which wouldn't have happened without funding.
3 · Habryka · 17d
Hmm, I don't know whether it wouldn't have happened without EA funding, but that seems pretty plausible to me. I think campaign donations are public, so maybe we can just see very precisely who made this decision. I also think that on the funding dimension, a bunch of EA leaders encouraged others to donate to the Carrick campaign in a way that seemed to me somewhat too aggressive. I do also think there was a separate pattern around the Carrick campaign where for a while people were really hesitant to say bad things about Carrick or politics-adjacent EA because it maybe would have hurt his election chances. I think that was quite bad, and I pushed back a bunch of times on this, though the few times I did push back, it was quite well-received.
5 · Milan_Griffes · 17d
From this July 2022 FactCheck article [https://www.factcheck.org/2022/07/protect-our-future-pac/] (archived [https://archive.ph/JMcle]): From a May 2022 NPR article [https://www.npr.org/2022/05/11/1097691538/bitter-feuds-and-crypto-ties-inside-one-of-the-most-expensive-democratic-primari] (archived [https://archive.ph/43Kyt]):

(This is an annoyed post. Having re-read it, I think it's mostly not mean, but please downvote it if you think it is mean and I'll delete it.)

I have a pretty negative reaction to this post, and a number of similar others in this vein. Maybe I should write a longer post on this, but my general observation is that many people have suddenly started looking for the "adults in the room", mostly so that they can say "why didn't the adults prevent this bad thing from happening?", and that they have decided that "EA Leadership" are the adults. 

But I'm not sure "EA Leadership" is really a thing, since EA is a movement of all kinds of people doing all kinds of things, and so "EA Leadership" fails to identify specific people who actually have any responsibility towards you. The result is that these kinds of questions end up either being vague or suggesting some kind of mysterious shadowy council of "EA Leaders" who are secretly doing naughty things.

It gets worse! When people do look for an identifiable figure to blame, the only person who looks vaguely like a leader is Will, so they pick on him. But Will is not the CEO of EA! He's a philosopher who writes books about EA and has received ... (read more)

If I want EA to become less decentralized and have some sort of internal political system, what can I do?

I have zero power or status or ability to influence people outside of persuasive argumentation. On the other hand, MacAskill and co. have a huge ability to do so.

The idea that we can't blame the high-status people in this community because they aren't de jure leaders, when it's incredibly likely they are the only people who could facilitate a system in which there are de jure leaders, seems misguided. I'm not especially interested in assigning blame, but when you ask who could make significant changes to the culture or structure of EA, I do think the answer falls on the thought leaders, even if they don't have official positions.

1 · Michael_PJ · 16d
I don't think de jure leaders for the movement as a whole are possible or desirable, to be clear. Our current model to my mind looks like a highly polycentric community with many robust local groups and organizations. Those organizations often have de jure leaders. But then in the wider community, people are simply influential for informal reasons. I think that's fine (and indeed pretty decentralised!). I'm not sure what specific problems you have with it? Which of the recent problems stemmed from centralized decision-making, rather than from individuals or organizations making decentralized decisions that you just disagree with?

I don't agree with this. IMO significant changes to culture or structure in communities rarely come from high-status people and usually come from lots of people in the community. You have the power of persuasive argumentation (which I also think is about as much power as most people have, and quite effective in EA): go forth and argue for what you want!
4 · Charlie_Guthmann · 15d
To be clear, I wasn't necessarily advocating for political organization or centralization, but I disagree that the lack of centralization is an excuse for the thought leaders when they could create centralization if they wanted to. It basically serves as a get-out-of-jail-free card for anything they do, since they have de facto control but can always lean back on not having official leadership positions. For the most part, the other comments better explain what I meant.
6 · Michael_PJ · 14d
I think a significant point of disagreement here is to what degree we see some people as having de facto control or not. As you've probably realised, my view of the EA community is as broadly lacking in coordination or control, but with a few influential actors. Maybe I'm just wrong, though.
2 · Charlie_Guthmann · 13d
Yea, I agree that is the main crux of our disagreement. I guess a lot of it comes down to what it means for someone to have (de facto) control. Ultimately we are just setting some arbitrary threshold for what control means. I don't think it matters that much to iron out whether certain people have "control" or not, but it would probably be useful to think about it in more numerical terms relative to some sort of median EA. Some metrics to use:

1. Ability to set the internal discourse (e.g. karma/attention multiplier on forum posts compared to a baseline EA)
2. Ability to set external discourse (e.g. who is going on high-viewership media stuff)
3. Control of the movement of money
4. Control of organizational direction for EA orgs
7 · Michael_PJ · 13d
I think this would be a huge improvement in the discourse. Focussing on specific activities or behaviours that we can agree on, rather than vaguer terms like "control", would probably help a lot. Examples of arguments in that vein that I would probably like a lot more:

* "CEA shouldn't have a comms arm"
* "There should be more organizations running EA conferences"
* "EA Forum moderators should have more power versus CEA and be user-appointed"
* "People should not hold positions in more than one funding body"
* etc.

I think that in a relevant sense, there is an EA Leadership, even if EA isn't an organisation. E.g. CEA/EV has been set up to have a central place in the community, and runs many coordinating functions, including the EA Forum, EA Global, the community health team, etc. Plus it publishes much of the key content. I think this comment overstates how decentralised the EA community is (for better or worse).

7 · Michael_PJ · 16d
I think a crucial difference is whether you perceive the activities as offering a service or as taking responsibility for the provision of that service. E.g. I view the CEA community health team as offering "hey, we'd like to help keep the community healthy". In that context it doesn't make that much sense to be annoyed that they haven't solved the problem of "people feeling uncomfortable posting on the forum" - they're out there trying to do something useful; they haven't promised to fix everything.

As it happens, I don't think EA is that centralised. But perhaps that's a red herring, and the real question is whether people think that some EA orgs or people have responsibility for certain community-wide things.

CEA/EV can prevent people from coming to the most important in-person meetups (EAG) and from participating in the most important EA online space (the EA Forum). In that sense, they're not just offering services, but have a lot of power. (That power also manifests itself in many other ways, including ways that are more directly relevant to the subject of the post.) And with that power comes responsibility.

3 · Michael_PJ · 16d
Yes, I agree that CEA has a responsibility to not abuse the social power that comes from controlling important spaces. I don't agree that they have a general responsibility for membership of the community or something.

I don't think it's mean, and I don't think you should delete it (and clearly many others think it's a good comment). However, I strongly disagree with the claim that EA leadership isn't really a thing. I'll also aim to explain why asking questions directed at "EA leadership" seems reasonable to me, even if it may not to you.

But I'm not sure "EA Leadership" is really a thing

The coordination forum literally used to be called the "leaders forum". The description of the first coordination forum was literally "leaders and experienced staff from established EA organizations". The Centre for Effective Altruism organizes events called "Effective Altruism Global" and has the ability to prevent people from attending community events, or to very strongly recommend that organizers not allow them in.
 

When people do look for an identifiable figure to blame, the only person who looks vaguely like a leader is Will, so they pick on him. But Will is not the CEO of EA! He's a philosopher who writes books about EA and has received a bunch of funding to do PR stuff.

If you have spent millions of dollars on a PR campaign for your book and are seen as the public face of EA, people who self-identify as EA a... (read more)

5 · Michael_PJ · 14d
Thanks for this excellent comment. I'm not going to respond more, since I'm not sure what I think any more, but I just wanted to clarify one thing. I'm sorry about that! That wasn't my intention: I was trying to present the idea of the "adults" as hypothetical serious beings in comparison to whom we are like children. I don't mean to imply that the people doing work in EA are not serious or competent, but I do think it's wrong and unfair to think that they are at some ideal level of seriousness or competence (which few if any people can live up to, and shouldn't be expected to without consent and serious vetting).
1 · pseudonym · 2d
No need to apologize! Thought I'd share this in case it's a meaningful update: https://forum.effectivealtruism.org/posts/oosCitFzBup2P3etg/insider-ea-content-in-gideon-lewis-kraus-s-recent-new-yorker

I think in some important cases there really are leaders, or at least people in positions of extreme responsibility, who could've done more. In terms of letting SBF stay in the EA community after the Alameda incident in 2018, that seems like it might've been a failure of information sharing (e.g.), if not an outright failure of, e.g., the Community Health team at CEA. If it was largely just a failure of information sharing, then that in turn could be a failure of EA culture (too much deference, worrying about prestige and PR, and Ra), for which thought leaders could be in part responsible. (To be clear, I'm not saying I would've done any better if I were in such a position of responsibility, or a thought leader. And maybe no one could reasonably have been expected to do better, given all the tradeoffs involved.)

6 · Michael_PJ · 16d
Who are these people? What makes them so responsible? Did they agree to that, or did we just kind of decide we want someone to be responsible and they're there? Have we considered that maybe nobody is responsible here?

Is "not letting someone stay in the EA community" an action that people can take? The most serious such incidents that I know of a) came after multiple documented examples of serious wrongdoing, and b) amounted to being banned from the EA Forum and EA conferences (i.e. venues controlled by a specific org, CEA) for a while. SBF didn't post on the EA Forum or go to EA conferences. So what, specifically, do you think people should have done? "Someone should have done something" is not IMO a helpful thing to say. I strongly endorse https://forum.effectivealtruism.org/posts/aHPhh6GjHtTBhe7cX/proposals-for-reform-should-come-with-detailed-stories

Who are these people? What makes them so responsible? Did they agree to that or did we just kind of decide we want someone to be responsible and they're there? Have we considered that maybe nobody is responsible here?

People in charge of granting $100Ms-$Bs of EA money. See my link to: Why didn't the FTX Foundation secure its bag?

SBF didn't post on the EA forum or go to EA conferences. So what, specifically, do you think people should have done?

Disowned him (publicly). Not lauded him as a paragon of virtue in earning to give. Not invited him to speak at EA conferences. (As I say, I get that there might've been a failure of communication amongst people in the know, but it looks pretty bad that it was known to at least some influential people that Sam was not someone to be trusted.)

People in charge of granting $100Ms-$Bs of EA money.

Not laud him as a paragon of virtue in earning-to-give. Not invite him to speak at EA conferences.

The first group of people are not the people who took the latter group of actions.

I'm being picky here, but my point is that people are being very woolly about this idea of "EA Leadership". The FTX Foundation team and the 80k team are different people, not arms of the amorphous "EA Leadership". So maybe the FTX Foundation team shouldn't have lauded SBF - but they didn't; that was someone else.

This is again where being specific matters. "The FTX Foundation team should have done more due diligence before agreeing to work with SBF" is at least a reasonable, specific, criticism that relates to the specific responsibilities those people might have. "Why did EA Leadership not Do Something?" is not.

3 · Greg_Colbourn · 14d
Yes, the (former) Future Fund team are specific people. Regarding the happenings in 2018 around Alameda, it's hard to know who the specific people are, because we haven't heard much about who knew what. It seems reasonable to suppose that people at CEA (perhaps including the executives) knew about it (given SBF and Tara Mac Aulay both worked there prior to Alameda), but it's also possible that, due to fear of reprisals or possible NDAs, no one in any position of responsibility knew about it.
0 · Guy Raveh · 13d
"EA leadership" is a set of very specific people - those who control the money, and those who control the brand. That means the boards of OpenPhil and EV, and the Future Fund team when that was still a thing. If CEA and 80k have their own boards (I think they don't?), then they too.

"The image, both internally and externally, of SBF was that he lived a frugal lifestyle, which it turns out was completely untrue (and not majorly secret). Was this known when Rob Wiblin interviewed SBF on the 80000 Hours podcast and held up SBF for his frugality?"

Thanks for the question Gideon, I'll just respond to this question directed at me personally.

When preparing for the interview I read about his frugal lifestyle in multiple media profiles of Sam and sadly simply accepted it at face value. One that has stuck in my mind up until now was this video that features Sam and the Toyota Corolla that he (supposedly) drove.

I can't recall anyone telling me that that was not the case, even after the interview went out, so I still would have assumed it was true two weeks ago.

2 · Gideon Futerman · 9d
Thanks for this reply, Rob. I do think it's pretty strange that no one in the know came forward to tell you or 80K, even in a professional capacity, but that's not really your fault!

The assumption I had is we defer a lot of power, both intellectual, social and financial, to a small group of broadly unaccountable, non-transparent people on the assumption they are uniquely good at making decisions, noticing risks to the EA enterprise and combatting them, and that this unique competence is what justifies the power structures we have in EA.

Is this actually true right now? People donating to EA Funds seem like an example of deferring financial decisions, but I don't have data on how much EAs donate to the Funds vs. decide themselves where to donate. Or do you mean decisions like relying on GiveWell recommendations as an example of 'deferring financial power'?

I am also not sure how the EA Community compares to other movements. Is your claim that EA is worse at this than comparable movements or that we should hold ourselves to a higher standard?

I have mixed feelings about your post overall. If people defer decision-making power to "the leadership" then it's good to ask these questions. But mostly I see individuals making decisions for themselves. If others think the decisions are bad, they don't have to admire "the leadership" for it.

The vast bulk of funds in EA (OpenPhil and, until last week, FTX Future Fund) are controlled by very few people (financial). As is admission to EA Global (social). Intellectual direction is more open with e.g. the EA Forum, but things like big book projects and their promotion (The Precipice, WWOTF) are pretty centralised, as is media engagement in general.

The FTX Future Fund had a large regranter program. They didn't fully let regranters do whatever they wanted with funds, but I think it's incorrect to say that it's controlled by very few people.

Ultimately the Future Fund had veto power over regranters (even those with their own pots), [edit:] so I think it's inaccurate to say that the regranters had control of the funds (influence, sure; but not control).

I'm somewhat perturbed by the ratio of karma on these comments (esp. agree karma, although the sample size is low - mine had only 1 vote on agreement and 5 votes on karma at the time of writing this comment)[1]. We've just found out that we've in general been way too trusting as a community, and could do with more oversight etc. (although I guess it's open to discussion how much decentralisation of decision making is ideal; see below). The fact that regranters could influence the Future Fund on their grantmaking was great, but we shouldn't confuse that with actual control. What ultimately matters is what is true from a mechanistic legal perspective - where the buck actually stops, and who is actually in charge of authorising grants. For the Future Fund, that was 5 people (who presumably in turn could still have been vetoed by the 4 on the board).

The next step for a regranting program in terms of actually distributing control would be to actually give the regranters the money, to do whatever they saw fit with it. I can imagine many people screaming in horror at the thought, especially those in central positions who think that they are the best experts on avoiding the un... (read more)

9 · Charlie_Guthmann · 17d
A regranter reached out to me, and I immediately got the vibe that they were stressed about proposing grants that might be accepted, basically just optimizing for whatever they perceived the team was most likely to approve. Again, I only talked to one person, but if regranters were just pitching ideas to the team to be processed much like general applications, then the regranter program serves more as a marketing tool to increase applicants, and a slight filter against awful applications, than as a real change in who holds the power. I would be very interested to see data on how many regrants were given versus how many were suggested, compared to the normal funds.

I was a regranter. I did not have my own pot, but could make recommendations for grants. 52% of my regrants (11/21) were approved (32% by $ value). I understand that those with their own pots allocated to them had a lower bar for acceptance so probably had a better success rate for approvals.

I've been trying to find people willing and able to write quality books and have had a hard time finding anyone. "Doing Doing Good Better Better" seems like one of the highest-EV projects, and EA Funds (during my tenure) received basically no book proposals, as far as I can remember. I'd love to help throw a lot of resources behind an upcoming book project by someone competent who isn't established in the community yet.

Jonas Vollmer (karma 4, 15d)
I wrote a quick shortform post. [https://forum.effectivealtruism.org/posts/EFpKcbaZNyZNk4qWD/?commentId=EqoriNov7opYxKWt6]
Making this account (karma 1, 15d)
Yes, this is brilliant.

Even the forum is organised so as to promote posts from people with large networks of high-upvoted people, which de facto means that the core network of people pretty much get auto-highlighted for posting their shopping list.

Charlie_Guthmann (karma 5, 17d)
Yeah, I'm not really sure why the default isn't democratic voting, with the option to toggle karma-weighted voting if you want.

A series of failure by the community this year, including the Carrick Flynn campaign and now the FTX scandal has shattered my confidence in this group.

I'm really surprised anyone is even super confident that the Carrick Flynn campaign made major mistakes (or was a major mistake to attempt), much less that anyone thinks of the campaign as "a confidence-shattering failure" about EA as a whole. I feel like I must be missing something very basic that's in other people's models. Or maybe a lot of people were just very emotionally invested in that primary race?

There are probably things that could have been done better in the campaign, especially with the benefit of hindsight and experience. But getting a member of a weird new niche of academic philosophy elected to the US House of Representatives isn't the sort of thing I expect to have a >50% success rate, even if we try our hardest. And Flynn did pretty well in the polls, and would have won the primary if he'd peeled off ~5500 votes (9% of all votes cast) from Salinas.

That's a good enough showing that I expect there are a lot of nearby worlds where Flynn wins, and I'd happily give it another attempt if I could travel back in time, even ... (read more)

On Flynn Campaign: I don't know if it's "a catastrophe" but I think it is maybe an example of overconfidence and naivete. As someone who has worked on campaigns and follows politics, I thought the campaign had a pretty low chance of success because of the fundamentals (and asked about it at the time) and that other races would have been better to donate to (either state house races to build the bench or congressional candidates with better odds like Maxwell Frost, a local activist who ran for the open seat previously held by Val Demings, listed pandemic prevention as a priority, and won. Then again, Maxwell raised a ton of money, more than all the other candidates combined, so maybe he didn't need those funds as much as other candidates). Salinas was a popular, progressive, woman of color with local party support who already represented much of the district at the state level and helped draw the new one. So, it seemed pretty unlikely to me that she would lose to someone who had not lived in the state for years, did not have strong local connections, and had never run a campaign before, even with a massive money advantage. And from what I understand, the people in the district were ... (read more)

RyanCarey (karma 5, 15d)
In the first example, you complain that EA neglected typical experts and "EA would have benefited from relying on more outside experts" but in the second example, you say that EA "prides itself on NOT just doing what everyone else does but using reason and evidence to be more effective", so should have realised the possible failure of FTX. These complaints seem exactly opposite to one another, so any actual errors made must be more subtle.
Peter (karma 4, 15d)
Actually, they are the same type of error. EA prides itself on using evidence and reason rather than taking the assessments of others at face value. So "others did not sufficiently rely on experts who could have gathered better evidence to vet FTX" is less compelling to me as an after-the-fact explanation for why EA as a whole did not do so either. I think probably just no one really thought much about the possibility, and looking for this kind of social proof helps us feel less bad.
A.C.Skraeling (karma 3, 15d)
The campaign team flew EA community organisers from across the world to knock on doors, and ended up paying over a thousand dollars per vote. This happened in the USA, which has a political system tailored to facilitate the purchasing of elections. It was bad.
RobBensinger (karma 5, 15d)
Would you consider $1000 per vote worthwhile if it resulted in Carrick winning? Also, if EA has a similar opportunity in the future, in an election with a similar number of voters (around 60,000), what's the maximum number of dollars per vote that you'd consider justifiable? As for the claim that the US political system is tailored to facilitate the purchasing of elections: is that true? This is not my area of expertise, but my sense was that "buying elections" is often impossible or inordinately expensive, outside of races against nobodies with very little money. (I've heard ad spending worked really well in competitive races in the recent midterms, but this is noteworthy exactly because it's somewhat unusual.)
John G. Halstead (karma 4, 13d)
Isn't the point with the Carrick thing not only that it failed, but that we shouldn't have been doing that kind of thing at all? It seemed like a pretty big break from previous approaches, which were to stay out of politics.

Not saying I disagree with this, but it may be worth noting that "democracy" as an alternative didn't exactly do great either -- Stuart Buck wrote this comment, and it got downvoted enough that he deleted it.

Indeed. I am actually inclined to agree that more democracy in distributing funds and making community decisions is safer overall and prevents bad tail risks, and I think Zoe Cremer's suggestions should be taken seriously. But let's remember that democracy in recent years has given us Modi, Bolsonaro, Trump, Duterte and Berlusconi as leaders of countries with millions of citizens, on the basis of millions of votes, and that Hitler did pretty well in early-1930s German elections. Democracy is not just "not infallible"; it has led to plausibly bad decisions about who should lead countries (as one example) on many occasions. (That might be a bit politicized for some people, but I feel personally confident all those leaders were knowably bad.)

Gideon Futerman (karma 2, 17d)
This post is merely asking questions of those currently in power, not saying any specific form of greater internal democracy is a good thing (I know you acknowledge that the post is doing this as well, but thought I would reiterate :-)!). Moreover, because of the karma system, the EA Forum is hardly democratic either!
John_Maxwell (karma 6, 17d)
Fair enough! You're correct that the EA Forum isn't as democratic as "one person one vote". However, it is one of the more democratic institutions in EA, so provides evidence re: whether moving in a more democratic direction would've helped. I'd be interested if people can link any FTX criticism on reddit [https://www.reddit.com/r/EffectiveAltruism/]/Facebook prior to the recent crisis to see how that went. In any case, "one person one vote" is tricky for EA because it's unclear who counts as a "citizen". If we start deciding grant applications on the basis of reddit upvotes or Facebook likes, that creates a cash incentive for vote brigades.
Charlie_Guthmann (karma 3, 17d)
You can see who likes things on Facebook, and Reddit isn't especially widely used. You can actually see democratic voting on the tree of tags [https://forum.effectivealtruism.org/posts/ivGJPv6fKCLWGgcgy/new-tool-for-exploring-ea-forum-and-lesswrong-tree-of-tags] (weird that I can't find the same option for the forum itself...), but you still run into the issue that people might upvote/downvote posts that already have more upvotes in general.
Achim (karma 1, 17d)
I think most democratic systems don't work that way - it's not that people vote on every single decision; democratic systems are usually representative democracies where people can try to convince others that they would be responsible policymakers, and where these policymakers then are subject to accountability and checks and balances. Of course, in an unrestricted democracy you could also elect people who would then become dictators, but that just says that you also need democrats for a democracy, and that you may first need fundamental decisions about structures.

I think we EAs need to increasingly prioritize speaking up about concerns like the ones Habryka mentioned.

Even when positive in-group feelings, the fear of ostracism, and uncertainty/risk aversion internally influence us not to bring up these concerns, we should fight back against this urge, because the concerns, if true, will likely grow larger and larger until they blow up.

There is very high EV in course correction before the catastrophic failure point.

I have slightly edited the post, just to clarify some things I ought to have clarified originally.

Not every question I pose is related to SBF etc.; they are just questions I think the EA leadership at large should answer. I am sure there are rational responses to many of them, and insofar as they are interpreted as an "attack", I do apologise. Moreover, the lines of questioning are plausibly inconsistent with one another, as some point towards less centralisation and some towards more.

I’ll speak to question 6, since I am on the community health team, and in particular was hired in large part to work on community epistemics, but am only speaking to the work I’ve done rather than the whole team since I’m newish to the team. (Haven’t done tons of work on this yet, and my initial experiments and forays have been pretty varied, since the epistemics space is really large)

Tl;dr I think this matters, in and of itself it hasn’t been the top thing on my list, adjacent/related things have been high priority.

(Other CEA teams online (via the forum),... (read more)

In what sense does EA have something like a leadership?

There is no official overarching EA organisation. Strictly speaking, EA is just a collection of people who all individually do whatever they want. Some of these people have chosen to set up various orgs that do various things.

But in a less formal but still very real way, EA is very hierarchical. There is a lot of concentration of power. 

  1. Some of this is based on status and trust. Some people and orgs have built up a reputation which grants them a lot of soft power within the EA network.
... (read more)

Thanks for this post - I think a lot of people have these questions and it's good to have common knowledge of that. I work on the community health team, and one of my areas is community epistemics so I have a lot of thoughts about question 6 and plan to come back to this when things are a little less frenetic.

FTX Future Fund decided to fund me on a project working on SRM and GCR, but refused to publicise it on their website. How many other projects were funded but not publicly disclosed? Why did they decide to not disclose such funding?


Did you receive the grant directly or as part of their regranting program?

Gideon Futerman (karma 4, 17d)
I received the grant directly; they approached me (I never applied for it, nor had I ever applied for any EA funding before they approached me). I have always been open about receiving their funding, because I think openness about funding sources, and about the degree of influence those sources have over a project, is important. However, they decided not to publish this on the Future Fund website.
Chris Leong (karma 2, 17d)
Hmm… Interesting, are you sure you weren’t referred through a regranter?
Gideon Futerman (karma 1, 17d)
That I have no idea about; not as far as I can tell from the information I was given, but I don't know for sure.
1David Mears17d
The FTXFF site does publish (a subset of) its re-grants, as well as its grants.