S.E. Montgomery

Community Director @ Effective Altruism New Zealand
Joined March 2022



I'm currently based in Wellington, New Zealand. I have a background in policy/politics, including working as a policy advisor and campaigning for 2 candidates in the 2015 Canadian election. I'm primarily interested in the intersection of longtermism, policy/politics, and community building. 

How others can help me

I am open to opportunities in the above spaces, and am always keen to hear from community builders, particularly those located in Australasia. 

How I can help others

Reach out to me if you have questions about EA community building. 


My reading (and of course I could be completely wrong) is that SBF wanted to invest in Twitter (he seems to have subsequently pitched the same deal through Michael Grimes), and Will was helping him out. I don't imagine Will felt it was any of his business to advise SBF as to whether or not this was a good move. And I imagine SBF expected the deal to make money, and therefore not to have any cost for his intended giving.

I agree that it's possible SBF just wanted to invest in Twitter in a non-EA capacity. My comment was a response to Habryka's comment which said: 

I think it could be a cost-effective use of $3-10 billion (I don't know where you got the $8-15 billion from, looks like the realistic amounts were closer to 3 billion). My guess is it's not, but like, Twitter does sure seem like it has a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, and so at least to me I think if you had taken Sam's net-worth at face-value at the time, this didn't seem like a crazy idea to me. 

If SBF did just want to invest in Twitter (as an investor/as a billionaire/as someone who is interested in global politics, and not from an EA perspective) and asked Will for help, that is a different story. If that's the case, Will could still have refused to introduce SBF to Elon, or pushed back against SBF wanting to buy Twitter in a friend/advisor capacity (SBF has clearly been heavily influenced by Will before), but maybe he didn't feel comfortable with doing either of those. 

I can see where you're coming from with this, and I think purely financially you're right, it doesn't make sense to think of it as billions of dollars 'down the drain.' 

However, if I were to do a full analysis of this (in the framing of this being a decision based on an EA perspective), I would want to ask some non-financial questions too, such as:

  • Does the EA movement want to be further associated with Elon Musk than we already are, including any changes he might want to make with Twitter? What are the risks involved? (based on what we knew before the Twitter deal) 
  • Does the EA movement want to be in the business of purchasing social media platforms? (In the past, we have championed causes like global health and poverty, reducing existential risks, and animal welfare - this is quite a shift from those into a space that is more about power and politics, particularly given Musk's stated political views/aims leading up to this purchase)
  • How might the EA movement shift because of this? (Some EAs may be on board, others may see it as quite surprising and not in line with their values.)
  • What were SBF's personal/business motivations for wanting to acquire Twitter, and how would those intersect with EA's vision for the platform? 
  • What trade-offs would be made that would impact other cause areas? 

My guess is there must be some public stuff about this, though it wouldn't surprise me if no one had made a coherent writeup of it on the internet (I also strongly reject the frame that people are only allowed to say that something 'makes sense' after having discussed the merits of it publicly. I have all kinds of crazy schemes for stuff that I think in-expectation beats GiveWell's last dollar, and I haven't written up anything close to a quarter of them, and likely never will).

Yeah, there could be some public stuff about this and I'm just not aware of it. And sorry, I wasn't trying to say that people are only allowed to say that something 'makes sense' after having discussed the merits of it publicly. I was more trying to say that I would find it concerning for major spending decisions (billions of dollars in this case) to be made without any community consultation, only for people to justify them afterwards because at face value they "make sense." I'm not saying that I don't see potential value in purchasing Twitter, but I don't think a huge decision like that should be justified based on quick, post-hoc judgements. If SBF wanted to buy Twitter for non-EA reasons, that's one thing, but if the idea here is that purchasing Twitter alongside Elon Musk is actually worth billions of dollars from an EA perspective, I would need to see way more analysis, much like the significant analysis that has been done for AI safety, biorisk, animal welfare, and global health and poverty. (We're a movement that prides itself on using evidence and reason to make the world better, after all.)

Oh, to be clear, I think Will fucked up pretty badly here. I just don't think any policy that tries to prevent even very influential and trusted people in EA from talking to other people in private about their honest judgement of other people is possibly a good idea. I think you should totally see this as a mistake and update downwards on Will (as well as EA's willingness to have him be as close as possible to a leader as we have), but I think from an institutional perspective there is little that should have been done at this point (i.e. all the mistakes were made much earlier, in how Will ended up in a bad epistemic state, and maybe the way we delegate leadership in the first place). 

Thanks for clarifying that - that makes more sense to me, and I agree that there was little that should have been done at that specific point. The lead-up to getting to that point is much more important. 

I think it could be a cost-effective use of $3-10 billion (I don't know where you got the $8-15 billion from, looks like the realistic amounts were closer to 3 billion). My guess is it's not, but like, Twitter does sure seem like it has a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, and so at least to me I think if you had taken Sam's net-worth at face-value at the time, this didn't seem like a crazy idea to me. 

The $15 billion figure comes from Will's text messages themselves (pages 6-7). Will sends Elon a text about how SBF could be interested in going in on Twitter, then Elon Musk asks, "Does he have huge amounts of money?" and Will replies, "Depends on how you define 'huge.' He's worth $24B, and his early employees (with shared values) bump that up to $30B. I asked how much he could in principle contribute and he said: '~1-3 billion would be easy, 3-8 billion I could do, ~8-15b is maybe possible but would require financing.'"

It seems weird to me that EAs would think going in with Musk on a Twitter deal would be worth $3-10 billion, let alone up to 15 (especially of money that at the time, in theory, would have been counterfactually spent on longtermist causes). Do you really believe this? I've never seen 'buying up social media companies' as a cause area brought up on the EA forum, at EA events, in EA-related books, podcasts, or heard any of the leaders talk about it. I find it concerning that some of us are willing to say "this makes sense" without, to my knowledge, ever having discussed the merits of it. 

I don't know why Will vouched so hard for Sam though, that seems like a straightforward mistake to me. I think it's likely Will did not consult anyone else, as like, it's his right as a private individual talking to other private individuals. 

I don't agree with this framing. This wasn't just a private individual talking to another private individual. It was Will MacAskill (whose words, beliefs, and actions are heavily tied to the EA community as a whole) trying to connect SBF (at the time one of the largest funders in EA) and Elon Musk to go in on buying Twitter together, which could have had pretty large implications for the EA community as a whole. Of course it's his right to have private conversations with others, and he doesn't have to consult anyone on the decisions he makes, but the framing here is dismissive of this being a big deal when, as another user points out, it could easily have been the most consequential thing EAs have ever done. I'm not saying Will needs to make perfect decisions, but I want to push back against this idea of him operating in just a private capacity here. 

In terms of people coming away from the post thinking that polyamory = bad, I guess I have faith in people's ability on this forum to separate a bad experience with a community from an entire community as a whole. (Maybe not everyone holds this same faith.)

The post was written by one person, and it was their experience, but I expect by now most EAs have run into polyamorous people in their lives (especially considering that EAs on average tend to be young, male, non-religious, privileged, and more likely to attend elite universities where polyamory/discussions about polyamory might be more common), and those experiences speak for themselves. For example, I personally have met lots of polyamorous people in my life, and I've seen everything from perfectly healthy, well-functioning relationships to completely toxic relationships (just like monogamous relationships). So when I engaged with the post, I was thinking, "This person had a bad experience with the poly community, and it sounds terrible. I know from my own experiences that polyamorous relationships can be healthy, but unfortunately that's not what this person experienced."

I'm persuaded by your analogy to race, and overall I don't want the EA community to perpetuate harmful stereotypes about any group, including polyamorous people. I think my main conflict here is I also want a world where women feel okay talking about their experiences without holding the added worry that they might not word things in exactly the right way, or that some people might push back against them when they open up (and I think you would probably agree with this). 

I'm conflicted here. I completely agree with you that shitting on others' morally-neutral choices is not ideal, but I don't think anyone was coming away from reading that post thinking that polyamory = bad. I would hope that the people on this forum can engage thoughtfully with the post and decide for themselves what they agree/disagree with. 

If someone had a bad experience with a man, and in the process of talking about it said something like, "all men suck and are immoral," I just don't think that is the right time or place to get into an argument with them about how they are wrong. It may not even have been coming from a place of "I actually 100% believe this" - it may just have been something thought/written in the heat of the moment while recounting their negative experiences. Again, there's no "perfect victim" who is going to say things in a way you 100% agree with all the time, but IMO the forum for disagreeing with them does not need to be while they are recounting their negative experience. 

Great post! I agree with a commenter above who says that "The problem is not a lack of ideas that needs to be rectified by brainstorming - we have the information already. The problem seems to be that no one wants to act on this information." That being said, I have a few thoughts: 

Regarding a code of conduct at events, I'm hesitant to make hard and fast rules here. I think the reality around situations such as asking people out/hitting on people, etc., is that some people are better at reading situations than others. For example, I know couples who have started dating after meeting each other at my local EA group's events, and I don't think anyone would see an issue with that. The issue comes in when someone asks someone out/hits on someone and makes the other person uncomfortable in the process. That being said, not asking people out during 1:1s seems like a good norm (I'm surprised I even need to say this, to be frank), as does not touching someone unless you have explicitly asked for their consent to do so (this can apply even to something like hugs), and not making comments on someone's appearance/facial features/body. 

In terms of power structures/conflicts of interest, I would love to see us borrow more from other organisations that have good guidelines around this. I can't think of any specific ones right now, but I know from my time working in government that there are specific processes to be followed around conflicts of interest, including consensual workplace relationships. I'm sure others can chime in with organisations that do this well. 

In terms of hiring, I like what Rethink Priorities is doing. They attempt to anonymise parts of applications where possible, and ask people not to submit photos alongside their CVs. I think more could be done to encourage partially blind hiring/funding processes. For example, an employer/funder could write their first impression of someone's application without seeing any identifying information (e.g. name, age, gender, ethnicity), then form a second impression after. I'm conscious that names are quite important in EA and that this could add more work for already busy grant-making organisations, but maybe there is a way to do this that would minimise additional work while also helping reduce unconscious bias. 

I would also love to see more writing/information/opinions come from the top-down. For example, people who have a big voice in effective altruism could write about this more often and make suggestions for what organisations and local groups can do. We already see this a bit from CEA, but it would be great to see it from other EA orgs and thought leaders. Sometimes I get a sense that people who are higher-up in the movement don't care about this that much, and I would love to be proven wrong.

Lastly, when it comes to engaging with posts on the forum about this topic, I was disappointed to recently see a post of someone writing about their experiences in the EA NYC community be met with a lot of people who commented disagreeing with how the post was written/how polyamorous men were generally characterised in the post. I think we should establish a norm around validating people when they have bad experiences, pointing them to the community health team, and taking steps to do better. There is no "perfect victim" - we need to acknowledge that sometimes people will have bad experiences with the community and will also hold opinions we disagree with. When they bring up their bad experience, it's not the time to say, "not all men are like this" or "I disagree with how you went about bringing this up." 

Strong upvote. It's definitely more than "just putting two people in touch." Will and SBF have known each other for 9 years, and Will has been quite instrumental in SBF's career trajectory - first introducing him to the principles of effective altruism, then motivating SBF to 'earn to give.' I imagine many of their conversations have centred around making effective career/donation/spending decisions. 

It seems likely that SBF talked to Will about his intention to buy Twitter/get involved in the Twitter deal, at the very least asking Will to make the introduction between him (SBF) and Elon Musk. At the time, it seemed like SBF wanted to give most of his money to effective charities/longtermist causes, so it could be argued that spending up to $15 billion to buy Twitter would divert money that otherwise would have gone to those causes. Given the controversy surrounding the Twitter deal, Elon Musk, and the intersection with politics, it also strikes me as a pretty big decision for SBF to be involved with. Musk had publicly talked about, among other things, letting Donald Trump back on Twitter and being a 'free speech absolutist.' These are values that I, as a self-identified EA, don't share, and I would be extremely concerned if (in a world where the FTX scandal didn't happen) the biggest funder of the EA movement had become involved in the shitshow that has been Twitter since Musk acquired it. (It seems like the only reason this didn't happen was because SBF set off Elon Musk's "bullshit meter," but I digress.) 

It's hard to say how big of a role Will played here - it's possible that SBF had zeroed in on buying Twitter and couldn't be convinced to spend the money on other things (e.g. effective charities), or that Will thought buying Twitter was actually a good use of money and was thus happy to make the introduction, or maybe that he didn't view his role here as a big deal (perhaps SBF could have asked someone else to introduce him to Musk if Will declined). Will hasn't commented on this, so we don't know. The only reason the text messages between Will and Elon Musk became public was because Twitter filed a lawsuit against Musk.

As the commenter above said, I would consider disavowing the community if leaders start to get involved in big, potentially world-changing decisions/incredibly controversial projects with little consultation with the wider community. 

I'm not sure I agree with this. I agree that compassion is a good default, but I think that compassion needs to be extended to all the people who have been impacted by the FTX crisis, which will include many people in the 'Dank EA Memes' Facebook group. Humour can be a coping mechanism which will make some people feel better about bad situations:

"As predicted, individuals with a high sense of humor cognitively appraised less stress in the previous month than individuals with a low sense of humor and reported less current anxiety despite experiencing a similar number of everyday problems in the previous two months as those with a low sense of humor. These results support the view that humor positively affects the appraisal of stressful events and attenuates the negative affective response, and related to humor producing a cognitive affective shift and reduction in physiological arousal (Kuiper et al. 1993; Kuiper et al. 1995; Martin et al. 1993).

Maybe there is a way to use humour in a way that feels kinder, but I've personally yet to see anything since the FTX crisis started that could be defined as "compassionate" but also that made me laugh as much as those memes did.

Thanks for your response. On reflection, I don't think I said what I was trying to say very well in the paragraph you quoted, and I agree with what you've said.

My intent was not to suggest that Will or other FTX Future Fund advisors were directly involved (or that it's reasonable to think so), but rather that there may have been things the advisors chose to ignore, such as Kerry's mention of Sam's unethical behaviour in the past. Thus, we might think either that Sam was incredibly charismatic and good at hiding things, or that there actually were some warning signs and those involved with him showed poor judgement of his character (or some mix of both). 
