Several media outlets recently reported that Will MacAskill acted as a liaison between Sam Bankman-Fried and Elon Musk, trying to set them up to discuss a joint Twitter deal. The story gained traction after Musk's texts were released as part of court proceedings related to the deal. The texts can be viewed here; the discussion between Will and Elon runs across a few pages, starting on page 87.

In no particular order, after a quick Google search, the outlets that ran the story are:

Some users have already provided additional sources and raised good concerns about Will's involvement in comments here and here. In light of what is currently going on with SBF, FTX, Elon Musk, and Twitter, I think this involvement warrants a response from Will (distinct from his general response to the FTX debacle) - a response both to the community and to the media at large. The story is already spreading across media big and small, serious and tabloid, and a reaction is very much needed.
 

Comments

I'm not sure I understand why you think this requires a response. I don't think the texts here were shady or wrong at all. Musk was clearly looking for people to buy Twitter with him, and Will happened to be a mutual contact. Trying to put them in touch seems pretty reasonable to me.

This may not be what you're intending (and is pretty understandable), but I want to be pretty careful about generalising from what's going on now to assuming that anything involving SBF or FTX is shady until proven otherwise.

I think seeing it as "just putting two people in touch" is too narrow. It's about the judgement involved in deciding whether to get involved in a highly controversial commercial deal which was expected to significantly influence discourse norms, and therefore polarisation, in years to come. As far as I can tell, EA overall and Will specifically do not have skills / knowhow in this domain.

Introducing Elon to Sam is not just like making a casual introduction; if everything SBF was doing was based on EA, then this feels like EA wading in on the future of Twitter via the influence of SBF's money.

Introducing Elon to Holden because he wanted to learn more about charity evaluation? Absolutely - that's EA's bread and butter and where we have skills and credibility. But on this commercial deal and subsequent running of Twitter? Not within anyone's toolbox from what I can tell.

I'd like to know the thinking behind this move by Will and anyone else involved. For my part, I think this was unwise and should have had more consultation around it.

I would consider disavowing the community if people start to get more involved in: 1) big potentially world-changing decisions which - to me - it looks like they don't have the wider knowledge or skillset to take on well, or 2) incredibly controversial projects like the Twitter acquisition, and doing so through covert back-channels with limited consultation.

Strong upvote. It's definitely more than "just putting two people in touch." Will and SBF have known each other for 9 years, and Will has been quite instrumental in SBF's career trajectory - first introducing him to the principles of effective altruism, then motivating SBF to 'earn to give.' I imagine many of their conversations have centred around making effective career/donation/spending decisions. 

It seems likely that SBF talked to Will about his intention to buy Twitter/get involved in the Twitter deal, at the very least asking Will to make the introduction between him (SBF) and Elon Musk. At the time, it seemed like SBF wanted to give most of his money to effective charities/longtermist causes, so it could be argued that the up to $15 billion used to buy Twitter would have been money that otherwise would have gone to effective charities/longtermist causes. Given the controversy surrounding the Twitter deal, Elon Musk, and the intersection with politics, it also strikes me as a pretty big decision for SBF to be involved with. Musk had publicly talked about, among other things, letting Donald Trump back on Twitter and being a 'free speech absolutist.' These are values that I, as a self-identified EA, don't share, and I would be extremely concerned if, in a world where the FTX scandal didn't happen, the biggest funder of the EA movement had become involved in the shitshow that has been Twitter since Musk acquired it. (It seems like the only reason this didn't happen was because SBF set off Elon Musk's "bullshit meter," but I digress.) 

It's hard to say how big of a role Will played here - it's possible that SBF had narrowed in on buying Twitter and couldn't be convinced to spend the money on other things (eg. effective charities), or that Will thought buying Twitter was actually a good use of money and was thus happy to make the introduction, or maybe that he didn't view his role here as a big deal (maybe SBF could have asked someone else to introduce him to Musk if Will declined). Will hasn't commented on this, so we don't know. The only reason the text messages between Will and Elon Musk became public was because Twitter filed a lawsuit against Musk.

 As the commenter above said, I would consider disavowing the community if leaders start to get involved in big, potentially world-changing decisions/incredibly controversial projects with little consultation with the wider community. 

Attempting to frame this as MacAskill simply putting them in touch is fairly astonishing. From the texts alone:

MacAskill laid out monetary amounts for Bankman-Fried's potential investment in Twitter.

He’s worth $24B, and his early employees (with shared values) bump that to $30B. I asked about how much he could in principle contribute and he said: “~$1-3b would be easy ~3-8b I could do ~$8-15b is maybe possible but would require financing.”

Then, when Musk explicitly asked if he would vouch for Sam Bankman-Fried, MacAskill vouched for him enthusiastically, linking the endorsement explicitly to SBF's dedication to longtermism.

You vouch for him?

Very much so! Very dedicated to making the long-term future of humanity go well.

I do not understand how this can be characterised as just putting them in touch.

I do not understand how this can be characterised as just putting them in touch.

You're definitely right that it can't. I imagine that many people (myself included) didn't see these particular parts of the text message conversations. 

FWIW, I think the talk about money was presumably just Will taking Sam's word for it when he shouldn't have, so there's a sense in which the messages you're quoting illustrate one mistake and not two.

That said, it's a big one: if you vouch for someone and then this happens (what we've seen over the past couple of weeks).

Fwiw I would still be interested in a response from Will, but I don't think he should feel obliged to give one.

Hi, thank you for bringing this up. I didn't want to generalise that everything with FTX is shady; however, it appears that media and people outside of EA might get that impression anyway, given that coverage features Will's name rather prominently next to SBF's. The general public will not know the minute details, and there's a risk of a growing association - that EAs are part of a (crypto) billionaires' club, which has a lot of bad publicity already. Leaving this uncommented exposes EA to further allegations of being a Silicon Valley cult, etc.

Also, I agree with the questions raised in the comments I bring up and link in my post: perhaps dealing with Elon Musk and SBF should have been handled another way, scrutinized more; maybe the rationale behind it should have been made public, etc.

Moreover, Musk's Twitter buyout and his actions at Twitter have also received a lot of criticism - so there's this side of the story as well. Perhaps this is even more damning: it is reasonable to assume that Will didn't know about the inner workings of FTX, but there were publicly known red flags about Musk's intentions with Twitter (loosening moderation, etc.), and trying to help him with that is a bad look.
 

What I think was shady here:

  • Why would Will want SBF to buy Twitter, and think it worth billions? Apart from thinking it was a great business investment, a strong contender for the reason is ‘propaganda for our values’. That’s not very integrity-like. (If anyone can fill in the gaps there, please do.) It's hard to read the proposal as only being motivated by investing, because Will says in his opening DM: "Sam Bankman-Fried has for a while been potentially interested in purchasing it and then making it better for the world"
  • It’s an example of how EA was too trusting of SBF
  • Seems like poor judgement given the price tag
  • A general sense that I would be ashamed for this to leak if I were Will (I had this sense before recent revelations about SBF).[1]

So I would very much appreciate an explanation by Will of what his motive was here, and who he consulted on this monumental decision. If nothing else, it would model transparency and accountability.


    1. I should have been more public about my feelings at the time, but didn't out of, I guess, cowardice and not wanting to tarnish EA's reputation - which is a dishonourable impulse ↩︎

Feels a bit weird to me that you are speaking about "EA" doing something here, as it seems pretty clear that this was Will acting in a personal capacity.

(This is in no way trying to defend his actions, but I think it's an important difference. )

Edit: This comment referred to an earlier version of David's comment that talked about EA wanting to buy Twitter, etc.

  I’ve edited the comment now. I agree that Will's actions are not EA's actions, and I phrased it weirdly.

I was assuming that any reason Will might have wanted SBF to buy Twitter would be justified in terms of benefit to EA.

So to be clearer, the question to Will would be "Why would it be in the interest of EA for you to facilitate someone close to EA to buy Twitter?”

I think it'd be easy to come up with highly impactful things to do with free rein over Twitter? Like, even before I've thought about it, there should be a high prior on useful patterns. Brainstorming:

  1. Experiment with giving users control over recommender algorithms, and/or designing them to be in the long-term interests of the users themselves (because you're ok with foregoing some profit in order not to aggressively hijack people's attention) - a rough re-ranking sketch follows this list
    1. Optimising the algorithms for showing users what they reflectively prefer (eg. what do I want to want to see on my Twitter feed?)[1]
    2. Optimising algorithms for making people kinder (eg. downweighting views that come from bandwagony effects and toxoplasma), but still allowing users to opt-out or opt-in, and clearly guiding them on how to do so.
  2. Trust networks
    1. Liquid democracy-like transitive trust systems (eg. here, here) - a transitive-trust sketch follows this list
      1. I can see several potential benefits to this, but most of the considerations are unknown to me, which just means that there could still be massive value that I haven't seen yet.
      2. This could be used to overcome Vingean deference limits and allow for hiring more competent people more reliably than academic credentials (I realise I'm not explaining this, I'm just pointing to the existence of ideas enabled with Twitter)
      3. This could also be a way to "vote" for political candidates or decision-makers in general too, or be used as a trust metric to find out whether you want to vote for particular candidates in the first place.
  3. Platform to arrange vote swapping and similar, allowing for better compromises and reducing hostile zero-sum voting tendencies.
  4. Platform for highly visible public assurance contracts (eg. here); could potentially be great for cooperation between powerful actors or large groups of people.
    1. This also enables more visibility for views that are held back by pluralistic ignorance. This could be both good and bad, depending on the view (eg. both "it's ok to be gay" and "it's not ok to be gay" can be held back by pluralistic ignorance).
  5. Could also be used to coordinate actions in a crisis
    1. eg. the next pandemic is about to hit, and it's a thousand times more dangerous than covid, and no one realises because it's still early on the exponential curve. Now you utilise your power to influence people to take it seriously. You stop caring about whether this will be called "propaganda" because what matters isn't how nice you'll look to the newspapers, what matters is saving people's lives.
  6. Something-something nudging idk.
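
A couple of these are concrete enough to sketch in code. First, for item 1: a minimal, hypothetical sketch of re-ranking a feed by users' reflective preferences rather than raw engagement, with a penalty for bandwagon-style spread. All field names (engagement, reflective_rating, velocity) and weights are illustrative assumptions, not anything Twitter actually exposes:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float          # clicks/likes per impression (what feeds optimise today)
    reflective_rating: float   # hypothetical "glad I saw this" survey score in [0, 1]
    velocity: float            # how bandwagon-like the spread looks, in [0, 1]

def feed_score(post, reflective_weight=0.7, bandwagon_penalty=0.5):
    # Blend raw engagement with reflective preference, then discount
    # posts whose spread looks like a pile-on. Weights are made up.
    base = ((1 - reflective_weight) * post.engagement
            + reflective_weight * post.reflective_rating)
    return base * (1 - bandwagon_penalty * post.velocity)

posts = [
    Post("outrage bait",   engagement=0.9, reflective_rating=0.2, velocity=0.8),
    Post("long explainer", engagement=0.3, reflective_rating=0.9, velocity=0.1),
]
for p in sorted(posts, key=feed_score, reverse=True):
    print(f"{feed_score(p):.2f}  {p.text}")
```

Second, for item 2.1: a minimal sketch of a liquid-democracy-style transitive trust metric, treating endorsements as a weighted directed graph and propagating trust to a fixed point, PageRank-style. The example graph, damping factor, and iteration count are again made-up assumptions for illustration:

```python
def transitive_trust(edges, damping=0.85, iterations=50):
    """edges maps user -> list of (trusted_user, weight); weights per user sum to <= 1."""
    users = set(edges) | {v for outs in edges.values() for v, _ in outs}
    score = {u: 1.0 / len(users) for u in users}  # uniform prior
    for _ in range(iterations):
        nxt = {u: (1 - damping) / len(users) for u in users}
        for u, outs in edges.items():
            for v, w in outs:
                nxt[v] += damping * score[u] * w   # pass trust along endorsements
        for u in users:
            if not edges.get(u):                   # users who endorse no one
                nxt[u] += damping * score[u]       # keep their undistributed share
        score = nxt
    return score

trust_edges = {
    "alice": [("bob", 0.7), ("carol", 0.3)],
    "bob":   [("carol", 1.0)],
    "carol": [("alice", 0.5)],
}
for user, s in sorted(transitive_trust(trust_edges).items()):
    print(f"{user}: {s:.3f}")
```

These are obviously toys; the point is just that the ideas above bottom out in fairly ordinary ranking and graph machinery, not magic.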

Mostly, even if I thought Sam was in the wrong for considering a deal with Elon, I find it strange to cast a negative light on Will for putting them in touch. That seems awfully transitive. I think judgments based on transitive associations are dangerous, especially given incomplete information. Sam/Will probably thought much longer on this than I have, so I don't think I can justifiably fault their judgment even if I had no ideas on how to use Twitter myself.

  1. ^

    This idea was originally from a post by Paul Christiano some years ago where he urged FB to adopt an algorithm like this, but I can't seem to find it right now.

Very good comment. I now think that buying Twitter could make sense. (Partly also because I realised that if Twitter is an investment that makes you money, any impact on top is kind of costless. It's not the case that the motivation had to be either 'buy Twitter to make it better by our lights' or 'make more money'.)

I still endorse judging people for helping someone do something you think might be bad - or, as you call it, for transitive associations (see my comment here for more detail).

I also still think it would be a good move for Will to explain what was going on, for the sake of modelling transparency.

+1 - when I saw those messages, I felt uncomfortable about them, because I couldn't imagine any good reason for EAs to want to buy Twitter, and because it hadn't been discussed publicly at all, as far as I saw. So if SBF had bought Twitter - with or without Will's advice/support/backing - with the money that he said he was planning to use for EA causes, that would have seemed inappropriately unilateral to me (even if the recent fraud stuff hadn't happened).

Why would Will want SBF to buy Twitter, and think it worth billions? Apart from thinking it was a great business investment, a strong contender for the reason is ‘propaganda for our values’.

I don't consider this a plausible motivation at all. Even assuming Will's judgment wasn't great here, he's clearly smart enough to know that it would make an incredibly bad impression to run a site this large, associated with free speech (or lack thereof), and turn it into something that's ideologically biased. You'd have to be incredibly naive to think that this has a shot at going well (and that's almost the bigger issue than it being "not very high-integrity like"). In any case, I think it's much more likely Will wanted to raise the world's sanity waterline by improving public discourse norms. (Emrik made a comment with some example ideas.) (Edit: Just saw you already replied to Emrik and changed your mind somewhat, very cool!)

Another great point - transparency. I want Will to be transparent about his involvement: given his reach and influence on EA, he should be able to explain his thought process and what the objective of his endorsement was.

More and more media outlets are reporting [...]

I think the use of present tense here is a bit misleading, since almost all of these articles are from 5 or 6 weeks ago. 

Thank you, point taken - corrected it. I had a mental map of this as an ongoing process reaching from as far back as a couple of weeks ago until now. But it can be seen another way.

However, there is a question of why this wasn't discussed on the forum earlier, when the first reporting came in. It's not a good sign if such stories about Will are in the media and nobody in EA notices (myself included).

You may want to update your post to include a link to the court document with the conversations. 

The document is linked only in a couple of the articles published in the news, but it is much easier to find if you include it in the post.

Will MacAskill's conversation starts on page 87. This was a very fast review on my end; I may have missed something else before page 87, so if you want to be completely sure, please have a second look.

https://assets.bwbx.io/documents/users/iqjWHBFdfxIU/ro.xehDmXvHk/v0

 

Thank you, updated.

Here's an early selection of some of the texts. Sorry for the poor formatting. I am struggling to understand why MacAskill was acting like this:

https://i.imgur.com/xUkkUxE.png
https://i.imgur.com/2YrjPpB.png
https://i.imgur.com/re1ivbJ.png

Wow, thanks for posting these!

To me, it doesn't seem that hard to understand how someone could act like that. It seems to be a common mistake: trusting someone (Sam) too much, thinking he's a great entrepreneur (and therefore connecting him), and vouching for him. I'm not saying that it's excusable, just that it's a common type of mistake for someone with bad people judgment.

howdoyousay? made a good comment, admittedly:

As far as I can tell, EA overall and Will specifically do not have skills / knowhow in this domain.

And neither did Sam. But, if you think Sam's a great entrepreneur, maybe it doesn't take all that much knowhow to think it's worth a shot? I mean, Musk clearly hadn't thought about this idea too much either, and you only have to improve on the counterfactual where Musk does things alone! 

I don't dislike the idea of reforming Twitter. Imagine if Sam were who Will thought he was and actually had 8 billion liquid that was his to spend (and not pledged to other EA causes or to anyone else). Then why not let him try it? (No need to make it EA-branded, and no need for Will's involvement - and therefore EA involvement - in the initial connection to become a topic in the media.) 

I'm not saying Will did nothing wrong. I think the mistake of vouching is inexcusable (despite being understandable), and if I were Will I'd strongly consider forever forfeiting my vote on anything people-judgment-related. I'm just saying the Twitter thing per se doesn't seem obviously insane to me. Public discourse is broken in a way that makes me very pessimistic about the future in general, so thinking "maybe it would be better if Elon Musk wasn't in charge of improving it alone, maybe it would be good if someone really thoughtful with an EA outlook played a role in this" seems totally reasonable.

Miguel

This issue can either make or break the EA movement...

I hope Will will be able to justify his involvement in that exchange of messages with Elon...

[anonymous]

Hi Miguel, I broadly agree with AbigailT's comment. Could you expand why you think this is a bad thing? From my perspective, it just seems like he put 2 people in touch, and that Elon's purchase of Twitter was not immoral. 

By the way, sorry you got downvoted so much. I suspect it's because your comment was (1) strong/accusatory, (2) didn't have justification, (3) and not obviously true. 

Thank you, Pranay, for checking on me and on why my message was framed as requiring justification from Will.

My understanding is that if you are to vouch for someone over some deal, at the very least you should be knowledgeable about that person's character. So the understanding I took from reading the exchange of messages was that Will was certifying for SBF to a degree, implying he knew him well enough for a sum of money that is not at all small. Of course, there are also blind spots in people's characters - narcissistic personality disorder or psychopathy can literally hide someone's true identity behind fictitious masks - which might also be the case here, given that fraud was committed by SBF.

I see Will as the biggest figure in spreading what the EA community is, and the reason why I'm here in the first place - I hope he can share his opinion on the matter, as his actions can really impact the movement at scale.
