
Intro

I have had concerns about EA leadership communication for as long as I've known about EA, and the FTX meltdown has persuaded me that I should have taken them more seriously.

(By leadership I mean well-known public-facing EAs and organizations.)

This post attempts to explain why I'm concerned by listing some of the experiences that have made me uncomfortable. tl;dr: EA leadership has a history of being selective in the information they share in a way that increases their appeal, and this raises doubts for me over what was and wasn't known about FTX.

I do not think I'm sharing any major transgressions here, and I suspect some readers will find all these points pretty minor. 

I'm sharing them anyway because I'm increasingly involved in EA (two EAGs and exploring funding for an EA non-profit) and I've lost confidence in the leadership of the movement. A reflection on why I've lost confidence and what would help me regain it seems like useful feedback, and it may also resonate with others.

i.e. This is intended to be a personal account of why I'm lacking confidence, not an argument for why you, the reader, should also lack confidence.

2014: 80K promotional event in Oxford

I really wish I could find/remember more concrete information on this event, and if anyone recognizes what I'm talking about and has access to the original promotional material then please share it.

In 2014 I was an undergraduate at Oxford and had a vague awareness of EA and 80,000 hours as orgs that cared about highly data-driven charitable interventions. At the time this was not something that interested me, I was really focussed on art!

I saw a flyer for an event with a title something like 'How to be an artist and help improve the world!' I don't remember any mention of 80K or EA, and the impression it left on me was 'this is an event on how to be a less pretentious version of Bono from U2'. (I'm happy to walk all of this back if someone from 80K still has the flyer somewhere and can share it, but this is at least the impression it left on me.)

So I went to the event, and it was an 80K event with Ben Todd and Will MacAskill. The keynote speaker was an art dealer (I cannot remember his name) who talked about his own career, donating large chunks of his income, and encouraging others to do the same. He also did a stump speech for 80K and announced ~£180K of donations he was making to the org.

This was a great event with a great speaker! It was also not remotely the event I had signed up for. Talking to Ben after the event didn't help: his answers to my questions felt similar to the marketing for the event itself, i.e. say what you need to say to get me in the door. (Two rough questions I remember: Q: Is your approach utilitarian? A: It's utilitarian flavoured. Q: What would you say to someone who e.g. really cares about art and doesn't want to earn to give? A: Will is actually a great example of someone I think shouldn't earn to give (he intended to at the time) as we need him doing philosophical analysis of the best ways to donate instead.)

This all left me highly suspicious of EA, and as a result I didn't pay much attention to them after that for years. I started engaging again in 2017, and more deeply in 2021, when I figured everyone involved had been young, they had only been mildly dishonest (if I was even remembering things correctly), and I should just give them a pass.

Philosophy, but also not Philosophy?: Underemphasizing risk on the 80K website

My undergraduate degree was in philosophy, and when I started thinking about EA involvement more seriously I took a look at global priorities research. It was one of five top-recommended career paths on 80K's website and required researchers in philosophy. 80K website at time of writing:

In general, for foundational global priorities research the best graduate subject is an economics PhD. The next most useful subject is philosophy

(https://80000hours.org/problem-profiles/global-priorities-research/)

This article contrasts sharply with the 80K page on philosophy:

the academic job market for philosophy is extremely challenging. Moreover, the career capital you acquire working toward a career in philosophy isn’t particularly transferable. For these reasons we currently believe that, for the large majority of people who are considering it, pursuing philosophy professionally is unlikely to be the best choice.

(https://80000hours.org/career-reviews/philosophy-academia/)

It seems like there are significant risks to pursuing further study in philosophy that 80K are well aware of, and it does not look great that they mention them in the context of general philosophical research (that they presumably don't care about their readers pursuing) but omit them when discussing a career path they are eager for their readers to pursue. Spending 7 years getting a philosophy PhD because you want to research global priorities and then failing to find a position (the overwhelmingly likely outcome) does not sound like much fun. 

This is a particularly clear example of a more general experience I've had with 80K material, namely being encouraged to make major life choices without an adequate treatment of the risks involved. I think readers deserve this information upfront.

Public Interviews (where is the AI?)

If you talk about EA's priorities in 2022 and fail to mention AI, I do not think you are giving an accurate representation of EA's priorities in 2022. But I've seen prominent TV and radio interviews this year where AI isn't mentioned at all, I assume because interviewees are worried it won't appeal to viewers/listeners.

Here is Ben Todd on a show titled 'What’s the best job to do good?' from the BBC: https://www.bbc.co.uk/programmes/m000ystj. (Will MacAskill was also on the Daily Show recently, https://www.youtube.com/watch?v=Lm3LjX3WhUI, though they since seem to have pulled the video, perhaps due to FTX).

I think EA's answer to 'What’s the best job to do good?' is, all other things being equal, AI Safety and Biorisk work. But Ben barely mentions biorisk, and doesn't mention AI at all. I was really uncomfortable listening to this, and I think most listeners encountering EA for the first time could justifiably feel bait-and-switched if they took a look at 80K's website after listening to the show.

Recent Internet Takes

A lot of stuff has come out in the wake of FTX that wasn't publicly discussed in EA but seems like it should have been.

I was pretty alarmed by this thread from Kerry Vaughan which touches on Ben Delo, a major EA donor prior to SBF who pled guilty to "willfully failing to establish, implement, and maintain an anti-money laundering ('AML') program at BitMEX": https://twitter.com/KerryLVaughan/status/1591508697372663810. The implication here is that Ben Delo's involvement with EA just quietly stopped being talked about without any kind of public reflection on what could be done better moving forwards.

I was also surprised to see that Will MacAskill is in a Signal chat with Elon Musk, in which he previously tried to connect SBF with Elon to fund Elon's Twitter acquisition: https://twitter.com/MattBinder/status/1591091481309491200. Not only does this strike me as a strange potential use of significant financial resources, it raises questions about Will's unadvertised relationship with a controversial public figure, and one who founded a wildly successful AI Capabilities Research Lab. Furthermore, Will's tweet here about WWOTF implied to me that he didn't know Elon personally: https://twitter.com/willmacaskill/status/1554378994765574144. It turns out he did; the above text messages were sent several months prior to that tweet.

Edit: I think I messed the above paragraph up. I'm leaving it in so the comments make sense, but thanks to Rob Bensinger for calling it out, and see my subsequent comment here.

Finally, Rhodri Davies recently wrote a blogpost (this was actually prior to the FTX scandal) titled 'Why am I not an Effective Altruist?' including the text below:

And there are even whistle blowers accounts of the inner workings of the EA community, with rumours of secret Google docs and WhatsApp groups in which the leaders of the movement discuss how to position themselves and how to hide their more controversial views or make them seem palatable. I have no idea how much of this is true, and how much is overblown conspiracy theory, but it certainly doesn’t make the whole thing feel any less cult-like.

https://whyphilanthropymatters.com/article/why-am-i-not-an-effective-altruist/

I have zero evidence for or against this happening, but it unfortunately fits the pattern of my prior experience with EA leadership communications.

Conclusion

Nobody made a single false statement in any of the examples I've given above, but they are all cases in which I have felt personally misled by omission. These examples range from cases that could be honest mistakes (the 80K careers page example) to ones where the omissions seem pretty intentional (the 'art' event, Ben Delo).

My suggestion to any public-facing EAs: don't deliberately do this, and if you do this by mistake, take it seriously and course-correct. Failing to share information because you suspect it will make me less supportive or more critical of your views, decisions, or actions smells of overconfidence and makes you difficult to trust, and this has regularly happened to me in my engagement with EA. Otherwise, well, I'll probably still stick around because EA contains plenty of great people, but I'll have to be much more careful about who I collaborate with, and I won't be able to endorse or trust EA's public figures.

Comments

I agree with the central thrust of this post, and I'm really grateful that you made it. This might be the single biggest thing I want to change about EA leaders' behavior. And relatedly, I think "be more candid, and less nervous about PR risks" is probably the biggest thing I want to change about rank-and-file EAs' behavior. Not because the risks are nonexistent, but because trying hard to avoid the risks via not-super-honest tactics tends to cause more harm than benefit. It's the wrong general policy and mindset.

Q: Is your approach utilitarian? A: It's utilitarian flavoured.

This seems like an unusually good answer to me! I'm impressed, and this updates me positively about Ben Todd's honesty and precision in answering questions like these.

I think a good description of EA is "the approach that behaves sort of like utilitarianism, when decisions are sufficiently high-stakes and there aren't ethical injunctions in play". I don't think utilitarianism is true, and it's obvious that many EAs aren't utilitarians, and obvious that utilitarianism isn't required for working on EA cause areas, or for being quantitative, systematic, and rigorous in your moral reasoning, etc. Yet it's remarkable how often our prescriptions look like the prescriptions of utilitarianism anyway.

I don't know of any better compact way of describing EA's moral perspective than "we endorse role-playing utilitarianism (at least when the stakes are high and there aren't relevant deontology-ish prohibitions)". And I think it's good and wholesome when EAs don't try to distance themselves from utilitarianism (given how useful it is as a way of summarizing a ton of different moral views we tend to endorse), but also don't oversimplify our relationship to utilitarianism.

it raises questions about Will's unadvertised relationship with a controversial public figure, and one who founded a wildly successful AI Capabilities Research Lab.

I agree that it was a terrible idea to found OpenAI, and reflects very poorly on Musk (especially given the stated reasoning at the time).

I think it's an awful idea to require every EA who's ever sent text messages to someone in Musk's reference class (or talked to him at a party, etc.) to publicly disclose the fact that they chatted. I don't see the point -- is the idea that talking to Elon Musk somehow taints you as a person?

Various MIRI staff have had conversations with Elon Musk in the past, and the idea that this fact is scandalous just sounds silly to me. I'd be more scandalized if EAs didn't talk to people like Musk, given the opportunity. (Or Bill Gates, or Demis Hassabis, or Barack Obama, etc.)

On some level I just think your whole framing here -- 'oh no, an EA talked to a Controversial Public Figure!' -- is misguided. "Controversial" is a statement about what's popular, not about what's true or good. I think that the impulse to avoid interacting in any way with people who seem Controversial is the same impulse that's behind the misguided behavior the rest of your post is talking about. It's the mindset of cancel culture, of guilt-by-association, of 'there's something unwholesome about talking to the Other Side at all, even to try to convince them to come around to doing the right thing'.

If we think that someone is doing the Wrong Thing, then by default we should talk to them and try to convince them to do things differently. EAs should primarily just be advocating for what they think is true and good, in a clear and honest voice, not playing the Six Degrees of PR Contagion game.

Furthermore, Will's tweet here about WWOTF implied to me that he didn't know Elon personally: https://twitter.com/willmacaskill/status/1554378994765574144.

Which part implied that to you? I don't see Will lying about this, and I don't see how it matters for the thread whether Will and Elon ever send each other text messages, or whether Will tries to get Elon's buy-in on a project.

AFAIK lots of EAs have tried to get Elon to help with (or ditch!) various projects over the years, though I'm unimpressed with the results.

I was pretty alarmed by this thread from Kerry Vaughan which touches on Ben Delo, a major EA donor prior to SBF with a fraud conviction: https://twitter.com/KerryLVaughan/status/1591508697372663810. The implication here is that Ben Delo's involvement with EA just quietly stopped being talked about without any kind of public reflection on what could be done better moving forwards. 

I'd still like more detail about what actually happened there, before I assume Kerry's account is correct. Various other recent Kerry-claims have turned out to be false or exaggerated, though I can't say I'm surprised if EAs responded super weirdly to the Ben Delo thing.

Furthermore, Will's tweet here about WWOTF implied to me that he didn't know Elon personally: https://twitter.com/willmacaskill/status/1554378994765574144

--

Which part implied that to you?

Presumably the part where Will says "So, er, it seems that Elon Musk just tweeted about What We Owe The Future. Crazy times!" I agree that this is not something you'd say about someone you knew as well as the Signal messages show Will knew Elon, unless you were trying to obscure this fact.

Elon was keynote speaker at EA Global 2015, so would have known Will since then.

Ah, I guess you're saying that the "Crazy times!" part sounds starstruck and has a vibe of "this just occurred out of the blue", and it would be weird to sound starstruck and astounded if Elon's someone you talk to all the time and are good friends with?

I agree that would be a bit weird, though the Will-text-messages I saw didn't cause me to think Will and Elon are that close, just that they've exchanged words and contact info at all. (Maybe I missed some text messages that do suggest a closer relationship?)

Upvoted. I think these are all fair points. 

I agree that 'utilitarian-flavoured' isn't an inherently bad answer from Ben. My internal reaction at the time, perhaps due to how the night had been marketed, was something like 'ah he doesn't want to scare me off if I'm a Kantian or something', and this probably wasn't a charitable interpretation.

On the Elon stuff, I agree that talking to Elon is not something that should require reporting. I think the shock for me was that I saw Will's tweet in August, which, as wock agreed, implied to me they didn't know each other, so when I saw the Signal conversation I felt misled and started wondering how close they actually were. That said, I had no idea Elon was an EAG keynote speaker, which is obviously public knowledge and makes the whole thing a lot less suspicious. If I were to write this again I would also remove the word 'controversial' and the remark that I think Elon's done harm re: AI, as I agree they're not relevant to the point I'm trying to make.

EAs and Musk have lots of connections/interactions -- e.g., Musk is thanked in the acknowledgments of Bostrom's 2014 book Superintelligence for providing feedback on the draft of the book. Musk attended FLI's Jan 2015 Puerto Rico conference. Tegmark apparently argues with Musk about AI a bunch at parties. Various Open Phil staff were on the board of OpenAI at the same time as Musk, before Musk's departure. Etc.

I am broadly glad more concerns are coming out and more information is being shared. But I think that the following attitude makes things worse rather than better:

with rumours of secret Google docs and WhatsApp groups in which the leaders of the movement discuss how to position themselves and how to hide their more controversial views or make them seem palatable

Every organization and movement has private docs and group chats. That's normal and good for the same reason it is normal and good for people to have private thoughts. Banning organizations from having private discussions makes everyone dumber and less competent.

This description could be anything from "muahaha, how do we trick the rubes into funding our math contest" to "what among our interests is most relevant to this group?" We might make guesses from their public actions, but we could make those anyway. The fact that an org doesn't publish every single document isn't nefarious.

I argue that EA orgs should self-censor / do PR spin less, but I agree that private docs and chats are good.

I want to argue my case to EA orgs and then have them feel free to debate the pros and cons of candor in private! I don't want them to feel pressured to hide or distort their thought process even in within-org conversations; it has to be OK to float impolitic ideas, so they can be genuinely evaluated. Heck, it has to be OK to defend both "be less honest" and "be more honest" positions, so that both positions can get a fair hearing.

Yeah that's a good distinction- even if a decision should very clearly be public, it doesn't automatically follow that the decision making process should be.

I agree that private docs and group chats are totally fine and normal. The bit that concerns me is 'discuss how to position themselves and how to hide their more controversial views or make them seem palatable', which seems a problematic thing for leaders to be doing in private. (Just to reiterate I have zero evidence for or against this happening though.)

I think it's good to discuss those topics internally at all, though I agree with you that EAs should generally stop hiding their controversial views (at least insofar as these are important for making decisions about EA-related topics), and I think we should be more cautious about optimizing for palatability (exactly because it can be hard to do this much without misleading people).

Hey, Arden from 80k here -

It'd take more looking into stuff/thinking to talk about the other points, but I wanted to comment on something quickly: thank you for pointing out that the philosophy PhD career profile and the competitiveness of the field weren't sufficiently highlighted on the GPR problem profile. We've now added a note about it in the "How to enter" section.

I wrote the career review when I'd first started at 80k, and for me it was just an oversight not to link to it and its points more prominently on the GPR problem profile.

Nice! I should have mentioned somewhere: the 80K website is huge and has tons of articles on partly-overlapping topics, written over many years by a bunch of different people. If there's an inconsistency, my first guess would have been that one of the articles is out of date, or that they're just different perspectives at 80K that no one noticed needed to be brought into contact to hash out who's right.

Thanks Arden! I should probably have said it explicitly in the post, but I have benefited a huge amount from the work you folks do, and although I obviously have criticisms, I think 80K's impact is highly net-positive.

That's kind of you to say : )

Respectfully, I have to disagree that most of these examples are any reason to distrust communications from EA. Someone has already addressed Ben Todd's answer on whether EA is utilitarian (saying it's utilitarian-ish is the most accurate answer, not deceptive), so I'll comment on the career advice you saw:

> Philosophy, but also not Philosophy?
>
> I took a look at global priorities research. It was one of five top-recommended career paths on 80K's website and required researchers in philosophy. 80K website at time of writing:
>
> > In general, for foundational global priorities research the best graduate subject is an economics PhD. The next most useful subject is philosophy
>
> This article contrasts sharply with the 80K page on philosophy:
>
> > the academic job market for philosophy is extremely challenging. Moreover, the career capital you acquire working toward a career in philosophy isn’t particularly transferable. For these reasons we currently believe that, for the large majority of people who are considering it, pursuing philosophy professionally is unlikely to be the best choice.
>
> It seems like there are significant risks to pursuing further study in philosophy that 80K are well aware of, and it does not look great that they mention them in the context of general philosophical research (that they presumably don't care about their readers pursuing) but omit them when discussing a career path they are eager for their readers to pursue. Spending 7 years getting a philosophy PhD because you want to research global priorities and then failing to find a position (the overwhelmingly likely outcome) does not sound like much fun.
>
> This is a particularly clear example of a more general experience I've had with 80K material, namely being encouraged to make major life choices without an adequate treatment of the risks involved. I think readers deserve this information upfront.

They aren't being dishonest here, they're answering two different questions. The first page says that the best background for global priorities research, one of their most-recommended career options, is economics followed by philosophy. The second page, on philosophy as a career path, correctly points out that the job market for philosophy is very challenging. They're not telling lots of people they should go into philosophy in the hopes that some of them will then do global priorities research. They're saying you should not do philosophy, but if you did, then global priorities research is a highly valuable thing your background would be suitable for, which I'd say are good recommendations all around.

I think you're correct that they aren't being dishonest, but I disagree that the discrepancy is because 'they're answering two different questions'. 

If 80K's opinion is that a Philosophy PhD is probably a bad idea for most people, I would still expect that to show up in the Global Priorities information. For example, I don't see any reason they couldn't write something like this:

In general, for foundational global priorities research the best graduate subject is an economics PhD. The next most useful subject is philosophy ... but the academic job market for philosophy is extremely challenging, and the career capital you acquire working toward a career in philosophy isn’t particularly transferable. For these reasons, we strongly recommend approaching GPR via economics instead of philosophy unless you are a particularly gifted philosopher and comfortable with a high risk of failure...

Maybe I'm nitpicking, as you say it is mentioned on the 'philosophy academia' page. I was trying to draw attention to a general discomfort I have with the site that it seems to underemphasise risk of failure, but perhaps I need to find a better example!

Here are the less contentious parts, I hope?

"Ben Delo's involvement with EA just quietly stopped being talked about without any kind of public reflection on what could be done better moving forwards."

"Failing to share information because you suspect it will make me less supportive or more critical of your views, decisions, or actions smells of overconfidence and makes you difficult to trust, and this has regularly happened to me in my engagement with EA."

Yes, exactly. Thank you! EA Berkeley had to remove their leader just two years ago, for reasons that none of the membership there is willing to even mention - which makes it sound particularly bad, and means that 'the fact that EA is keeping that bad stuff hidden' is even worse.

Similarly, EA Berkeley members were targeted by a higher-up for blacklisting, and mentioned as much in emails to me, only to go silent on the matter until I brought up the blacklisting as an issue on their Slack. At that point, they mentioned that "we've been in private talks with the Blacklister, asking them to stop their behavior" - nothing public until absolutely necessary.

The EA houses in Berkeley, which are a magnet for EA Berkeley campus members to move into (most residents are post-grads who were in EA Berkeley prior to graduation and moving into the EA house), had repeatedly splurged unnecessarily, and when I pointed this out, the near-universal response on the EA Berkeley Slack was 'well, that's them, not us. We're not responsible for anyone else in our org if they're committing petty fraud.' The Slack poster Charles He even suggested that I be banned from their Slack, for 'disrupting' things by bringing up their bad behavior!

EA definitely has a brand they're protecting, and other posters seem to be bumping into other icky spots under the surface, too! (https://forum.effectivealtruism.org/posts/eoLwR3y2gcZ8wgECc/hubris-and-coldness-within-ea-my-experience) & "Power dynamics: What procedures exist for protecting parties in asymmetric power relationships? Are there adequate opportunities for anonymous complaints or concerns to be raised? How are high-status individuals held accountable in the event of wrongdoing?" from (https://forum.effectivealtruism.org/posts/sEpWkCvvJfoEbhnsd/the-ftx-crisis-highlights-a-deeper-cultural-problem-within)

Further: when I posted new ideas on this forum, I was repeatedly strawmanned by EA members until other members eventually pointed out that I was being strawmanned, and the strawmanners never admitted it or apologized; they just downvoted every comment I made, as a team. EA protects the trolls who downvote-mafia and misrepresent, while looking for reasons to exclude 'non-aligned views'.

Strongly upvoted because I don't think this post deserves the downvotes.

A trick: You can say things like "I'm going to tell you an oversimplified story, but we can dig into the details later if you want" or even "Here's one perspective of what EA is which I hope will provide a good entry point. My perspective is different, but I can tell you about it later".

Really interesting post! Thanks for sharing, I was wondering what the university pitches were like. And I had no clue about Ben Delo. 

The BitMEX indictment is here: https://www.justice.gov/usao-sdny/pr/founders-and-executives-shore-cryptocurrency-derivatives-exchange-charged-violation#_ftn1

The government's indictment shouldn't be treated as the "truth," of course, but the facts are damning and extremely shady - these guys were claiming they were operating a foreign entity for non-US citizens (to trade crummy crypto derivatives), but they were actually doing it from an office in Manhattan and selling to people in the US and helping them conceal it. They also made sure that BitMEX skirted the rules to prevent money laundering, which is, of course, a huge portion of crypto transactions.

So yeah. Hard to argue it's just some minor compliance issue. At the sentencing the defense lawyers argued that it "wasn't as bad as Silk Road" - which, big whoop. The defendants pleaded guilty, the company was fined hundreds of millions of dollars, and Delo had to pay a $10 million fine, which is substantial. 

Is it the worst thing ever? No, but this guy clearly plays fast and loose with ethics and does a super shady kind of business.

I wish I lived in a society where this question was not necessary, but: was this a "victimless crime"? If not, who were the victims and what did they lose?

I don't think it was a victimless crime. I guess you may have different ideas about white collar crime and money laundering. But you'll have to read the indictment to form your own opinion. You can also read the civil suit filed by investors who allege they were screwed out of millions: https://t.co/SKI7JXPVFM

When they tried to get their money back, Delo taunted them with a meme about being incorporated in the Seychelles (despite, again, actually being based in the US doing business with US customers). Really does not seem like an upstanding guy.

But check them out for yourself.

If it's as the plaintiffs represent, I agree that's pretty damning.  Is it known, aside from the complaint itself, that the plaintiffs are telling the truth and the whole truth?  Don't suppose you have a link to the meme taunt?

Yes, it's in the complaint; here's a screenshot (where the CEO also talks about it being easier to bribe officials in the Seychelles with "a coconut"): https://i.imgur.com/qd6X4LC.png

I don't know anything about the plaintiffs, but I assume BitMEX's lawyers certainly thought a jury would find them credible, and that's why they decided to settle for $44M.

Okay; I agree then that it's reasonable to say of Ben Delo that Hayes and cofounders were accused of trying to defraud two early investors, that Ben Delo is accused of taunting them with a meme, and that they settled out of court.

I do note that this is pretty different from what Vaughan was previously accusing Delo of, which sounded pretty plausibly like a "victimless crime".

Again, that's just one of the civil suits. They settled that civil suit, were found liable in another, and Delo also pleaded guilty to federal charges after BitMEX was used to launder stolen crypto.

Not good! And if that's the person you've been publicly celebrating for contributing to your org, it's not permissible to sweep it under the rug afterwards just because it's not as high profile as something like, say, SBF's fraud.

I don't, in fact, take federal charges like that seriously - I view it as a case of living in a world with bad laws and processes - but I do take seriously the notion of betraying an investor's investment and trust.

(the plaintiffs in that suit ended up getting $44 million, by the way)
