
EDIT 17 Nov 2022: Retracted, because someone reminded me that "both" is not merely an option, but one with at least some precedent. Oops.

The following is just here for historical purposes now:


Context: In a recent interview with Kelsey Piper, Sam Bankman-Fried was asked if his "ethics stuff" was a front for something else:

[Kelsey:] you were really good at talking about ethics, for someone who kind of saw it all as a game with winners and losers

[SBF:] ya ... I had to be it's what reputations are made of, to some extent I feel bad for those who get fucked by it by this dumb game we woke westerners play where we say all the right shiboleths and so everyone likes us

One comment by Eli Barrish asked the question I'm now re-asking, to open a discussion:

The "ethics is a front" stuff: is SBF saying naive utilitarianism is true and his past messaging amounted to a noble lie? Or is he saying ethics in general (including his involvement in EA) was a front to "win" and make money? Sorry if this is super obvious, I just see people commenting with both interpretations. To me it seems like he's saying Option A (noble lie).

Let me be clear: this is an unusually important question that we should very much try to get an accurate, precise answer to.

EA as a movement is soul-searching right now, and we're trying to figure out how to prevent this, or something similar-but-worse, from happening again. We need to make changes, but which changes are still unknown.

To determine which changes to make, we need to figure out if this situation was: A. "naive utilitarian went too far", or B. "sociopath using EA to reputation-launder".

Both are extremely bad. But they require different corrections, lest we correct the wrong things (and/or neglect to correct the right things).

Note: I'm not using "sociopath" in the clinical sense (I'm not checking whether he meets that definition), but rather as the colloquial term for "someone who is chronically incapable of empathy / of caring about others at the level of 'feeling sad when they feel sad'".

4 Answers

I want to push back against the question itself: I think it might be a false dichotomy. I understand we like to put people into boxes, but things are likely more complex than that. For example, being a naive utilitarian and being a sociopath are not mutually exclusive, or he could be neither. I would like an honest discussion about what happened to consider these possibilities, too.

My thoughts on "both": in that case, I wonder if it's more like a merge, or more like a Jekyll/Hyde thing

I feel less strongly that this is an "unusually important question" that needs an accurate / precise answer.

Both A and B are bad scenarios that the EA movement should be more robust against, and it seems clear that regardless of which scenario (or some other possibility or combination) was true, the movement has room to improve when it comes to preventing and mitigating the harms from such risks.

I think that, rather than over-indexing on the minutiae of SBF's personal philosophy or psyche, it's probably more useful for the EA movement to think about how it can strengthen itself against movement-related risks generally going forward. It's probably more useful for those steering the EA movement to consider things like more transparent systems and better governance, to find ways to reduce the risk of any one individual or small group of people taking actions that endanger the entire EA movement, and to try to work out what else might lead to large gaps between the "EA ideal" and what "EA-in-practice" could end up looking like.

[written hastily, not very confident]

I just read the interview on Vox, and he sounded very cynical. I didn't and still don't know SBF very well, so I don't know whether this is a usual tone for him, but the conversation leading into the quoted bit was him saying that people take good/bad perspectives unfairly. Given the situation with CZ (a rival, and possibly a backstabber), it sounded like a bitter moment to me rather than a genuine comment on his stance about ethics. The commentary about him in the crypto world is so negative and demonising that I can see why he might be cynical about it, especially towards those who supposedly shared his vision but turned their backs on him as soon as there was a rumour of an issue. I'm not defending him, just saying he may be fixated on the recent turn of events, and may have been feeling defensive rather than compassionate about the losses during the interview. His portion of the DMs was not very coherent anyway.

All this to say: I don't know how much this reflects his views. The quote itself read like the second option to me (a lie to win reputation (edit: not necessarily money)), but it also just sounds like commentary, which he's hung up on, about his rival's tactics against him that catalysed his bankruptcy.

I don't know what you mean by "naive utilitarian." Do you mean someone with genuinely good altruistic intent? So that option A is that SBF had good intent and used fraud as a tool for altruism, and option B is that SBF had bad intent and used altruism as a tool for fraud.

I think it's option B. First, he has a lot of other characteristics of a sociopath with bad intent. These types are typically extremely charismatic and skilled at leadership, group mind control, and manipulation. Lots of people are saying they were enthralled by him and impressed by his charisma, the same as with cult leaders, who are typically sociopaths.

Second, he was blowing a ton of money on extreme personal luxury like high-end real estate. That money could have been donated instead of spent on himself and wealthy family and friends.
