# Shortform Content

Three social forces at the root of FTX's collapse

Hi folks, I shared some thoughts I wrote up about Sam Bankman-Fried. I worry that there's a bit of a social cascade that's leading us to draw the wrong lessons from what happened. I'm not 100% confident about the facts of what happened -- though, as a former securities litigator in the post-2008 period, I think I have more experience than most -- but I don't see a particularly compelling case for fraud. I also think the focus on a single person's supposed indiscretions, whether true or otherw...

Haha, I wrote a similarly titled article sharing the premise that Sam's actions seem more indicative of a mistake than a fraud: https://forum.effectivealtruism.org/posts/w6aLsNppuwnqccHmC/in-defense-of-sbf

I appreciated the personal notes about SBF's interactions with the animal welfare community. I do think the EA tribalism element is very real as well. Also appreciate the point about trying to work on something intrinsically motivating - I'm not sure it's possible for every individual, but I do feel like my own intrinsic love of work helps a lot with putting in a lot of time and effort!

accidental duplicate post

[This comment is no longer endorsed by its author]

Starting a NYE donation push. An EA has already committed $40k. Aiming for $111k, tho the ultimate goal is to get a group together to match and discuss opportunities that can lead to a new year of giving. (I'm recruiting many folks who haven't given substantially before. I hope to have more EAs who can fuel conversation toward effectively combating preventable suffering.) Also, experimenting with Twitter promotion in this age of Elon.. https://twitter.com/bbertucc/status/1597980309256957952 .. Feel free to email or DM me if I'm too slow on the form reply .. blake[at]philosophers[dot]group ..

Apologies for posting four shortforms in a row. I accumulated quite a few ideas in recent days, and I poured them all out.

Summary: When exploring/prioritizing causes and interventions, EA might be neglecting alternative future scenarios, especially along dimensions orthogonal to popular EA topics. We may need to consider causes/interventions that specifically target alternative futures, as well as add a "robustness across future worlds" dimension to the ITN framework.

Epistemic status: low confidence

In cause/intervention exploration, evaluation and prioriti...

Idea: Funders may want to pre-commit to awarding whoever accomplishes a certain goal. (E.g., a funder like Open Phil could commit to awarding a pool of money to people/orgs who reduce meat consumption to a certain level, with the pool split in proportion to contribution.)

Detailed considerations:

This can be seen as a version of retroactive funding, but it's special in that the funder makes a pre-commitment.

(I don't know a lot about retroactive funding/impact m...

Four podcasts on animal advocacy that I recommend:

• Freedom of Species (part of 3CR radio station)
Covers a wide range of topics relevant to animal advocacy, from protest campaigns to wild animal suffering to VR. More of its episodes are on the "protest campaigns" end which is less popular in EA, but I think it's good to have an alternative perspective, if only for some diversification.
• Knowing Animals (hosted by Josh Milburn)
An academic-leaning podcast that focuses on Critical Animal Studies, which IMO is like the academic equivalent of animal advocacy. Most
...

Summary: This is a slightly steelmanned version of an argument for creating a mass social movement as an effective intervention for animal advocacy (which I think is neglected by EA animal advocacy), based on a talk by people at Animal Think Tank. (Vote on my comment below to indicate if you think it's worth expanding into a top-level post)

link to the talk; alternative version with clearer audio, whose contents - I guess - are similar, but I'm not sure. (This shortform doesn't cover all content of the talk, and has likely misinterpreted something in the ta...

Statement: This shortform is worth expanding into a top-level post.

Please upvote/downvote this comment to indicate agreement/disagreement with the above statement. Please don't hesitate to cast downvotes.

If you think it's valuable, it'd be really great if you were willing to write this post, as I likely won't have time to do so. Please reach out if you're interested - I'd be happy to help by providing feedback etc., though I'm no expert on this topic.

This is a short follow-up to my post on the optimal timing of spending on AGI safety work which, given exact values for the future real interest rate, diminishing returns, and other factors, calculated the optimal spending schedule for AI risk interventions.

This has also been added to the post’s appendix and assumes some familiarity with the post.

Here I consider the most robust spending policies, supposing uncertainty over nearly all parameters in the model.[1] Inputs that are not considered include: historic spending on research and influence, rather...
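One way to find a spending policy that is robust to parameter uncertainty is to sample the uncertain parameters and pick the policy that performs best in expectation across the samples. The toy model below is purely illustrative and is not the model from the post: the utility function, parameter ranges, and the restriction to constant spending fractions are all my own simplifying assumptions.

```python
import random

# Toy model (hypothetical, not the post's model): each period we spend a
# fraction f of remaining capital; spending has diminishing returns with
# exponent eta, and unspent capital grows at real interest rate r.
def total_utility(f, r, eta, periods=20, capital=1.0):
    u = 0.0
    for _ in range(periods):
        spend = f * capital
        u += spend ** eta                       # diminishing returns each period
        capital = (capital - spend) * (1 + r)   # remainder compounds at rate r
    return u

random.seed(0)
# Sample the uncertain parameters (ranges are made-up illustrative values)
samples = [(random.uniform(0.0, 0.1), random.uniform(0.3, 0.9))
           for _ in range(200)]

# Search over constant spending fractions for the one that maximizes
# expected utility across the sampled parameter values
candidates = [i / 100 for i in range(1, 100)]
best = max(candidates,
           key=lambda f: sum(total_utility(f, r, eta) for r, eta in samples))
print(f"most robust constant spending fraction ≈ {best:.2f}")
```

A worst-case (maximin) criterion could be swapped in for the expectation by replacing `sum` with `min`, which is another common way to operationalize "robust".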

1. In the last two weeks, SBF has had about 2M views to his Wikipedia page. This absolutely dwarfs the number of pageviews of any major EA previously.

2. Viewing the same graph on a logarithmic scale, we can see that even before the recent crisis, SBF was the best-known EA. Second was Moskovitz, and roughly tied at third were Singer and MacAskill.

3. Since the scandal, many people will have heard about effective altruism, in a negative light....


Updated pageview figures:

• "effective altruism": peaked at ~20x baseline. Of all views, 10.5% were in Nov 9-27
• "longtermism": peaked ~5x baseline. Of all views, 18.5% in Nov 9-27.
• "existential risk": peaked at ~2x baseline. 0.8% of all views in Nov 9-27.
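For readers curious how figures like "peaked at ~20x baseline" and "10.5% of all views were in Nov 9-27" are derived, here is a small sketch computing both from a daily pageview series. The numbers below are made up for illustration, not the actual Wikipedia data.

```python
# Toy sketch (hypothetical counts): derive "peak vs. baseline" and
# "share of all views in a window" from a daily pageview series.
views = [100] * 300 + [500, 1200, 2000, 1500, 800]  # made-up daily counts
window = views[-5:]          # stand-in for the Nov 9-27 crisis window

baseline = sum(views[:300]) / 300        # mean daily views before the spike
peak_ratio = max(window) / baseline      # "peaked at ~Nx baseline"
window_share = sum(window) / sum(views)  # "X% of all views were in the window"

print(f"peak ≈ {peak_ratio:.0f}x baseline; "
      f"{window_share:.1%} of all views in window")
```

The real daily series can be pulled from the Wikimedia pageviews API; the arithmetic is the same.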

There are apparently five films/series/documentaries coming up on SBF - these four, plus Amazon.

Steven Byrnes, 11d
The implications for "brand value" would depend on whether people learn about "EA" as the perpetrator vs. victim. For example, I think there were charitable foundations that got screwed over by Bernie Madoff, and I imagine that their wiki articles would have also had a spike in views when that went down, but not in a bad way.
RyanCarey, 11d
I agree in principle, but I think EA shares some of the blame here: FTX's leadership group consisted of four EAs. It was founded for ETG reasons, with EA founders and EA investment, by Sam, an act utilitarian who had been part of EA-aligned groups for >10 years, and with a foundation that included a lot of EA leadership and whose activities consisted mostly of funding EAs.

Check out Tom Barnes' post on Air Pollution, a neglected problem.

I’ve been thinking hard about whether to publicly comment more on FTX in the near term. Much for the reasons Holden gives here, and for some of the reasons given here, I’ve decided against saying any more than I’ve already said for now.

I’m still in the process of understanding what happened, and processing the new information that comes in every day. I'm also still working through my views on how I and the EA community could and should respond.

I know this might be dissatisfying, and I’m really sorry about that, but I think it’s the rig...

Do you plan to comment in a few weeks, a few months, or not planning to comment publicly? Or is that still to be determined?

We should be donating more frequently so we're happier and feel more encouraged to donate

We know we get some happiness and fulfillment from donating money to a cause we care about (for instance see https://www.science.org/content/article/secret-happiness-giving). If we could get even more joy from donating the same amount of money, it would make us happier (benefiting ourselves) and encourage us to keep giving more (benefiting others).

To me, there's a huge difference between donating $10,000 at once to a single charity and donating $100 one...

I used to donate monthly instead of at the end of the year. I eventually decided there were advantages to donating at the end of the year*, though there may be ways to seek both benefits, like donating a small portion monthly to get the good feelings more often.

* orgs have a more complete picture of their funding need, donation matching opportunities, maybe you'd benefit from something like donating stock which may have some overhead you don't want to repeat, you have the most information available, evaluators have put out their new recommendations, ...

Putting things in perspective: what is and isn't the FTX crisis, for EA?

In thinking about the effect of the FTX crisis on EA, it's easy to fixate on one aspect that is really severely damaged, and then to doomscroll about that, or conversely to focus on an aspect that is more lightly affected, and therefore to think all will be fine across the board. Instead, we should realise that both of these things can be true for different facets of EA. So in this comment, I'll now list some important things that are, in my opinion, badly damaged, and some that aren't...


Fwiw I'm not sure it badly damages the publishability. It might lead to more critical papers, though.

Jonas Vollmer, 5d
Some discussion on one of the main philosophy discussion fora: https://dailynous.com/2022/11/18/ftx-moral-philosophy-public-philosophy/
RyanCarey, 6d
It's what global priorities researchers tell me is happening.

Charitable giving epistemology or doing your due diligence

I was just reading the 80,000 Hours email on how they could have seen the warning signs with FTX and whether EA has made any mistakes taking his money. I think it is a worthwhile question, but the answer will seem like a Gordian knot unless you separate two distinct issues.

The only way to avoid any future SBFs is to look at the books. It's also the only way EA could really have known about SBF's problems beforehand. This is obviously a good way to kill off a lot of charitable giving. ...

Hi, I have a blog where I talk about lots of issues related to EA and utilitarianism -- here it is if anyone's interested.  https://benthams.substack.com/

#### Muddling Solutions To New Problems

The list of minor examples of muddling solutions includes:

• driving through traffic on the freeway and reaching a standstill. Bored, you turn on a podcast.
• visiting a doctor, she informs you that you have a benign but growing tumor. Upset, you schedule an inexpensive surgery to remove it.
• coming home, you find a tree branch broke through your attic window. Annoyed, you call a repairman to replace the window.
• walking from your home to a nearby convenience store, you step in some smelly dog poop. Upset, you scrape some of i
...

So the EA Forum has, like, an ancestor? Is this common knowledge? Lol

Felicifia: not functional anymore but still available to view. Learned about it thanks to a tweet from Jacy

Update: threw together

• some data with authors, post title names, date, and number of replies (and messed one section up so some rows are missing links)
• A rather long PDF with the posts and replies together (for quick keyword searching), with decent but not great formatting

EAs should be aware that it is easy to traumatize (their own) kids with apocalyptic narratives.

The EA population is getting older: we are no longer in our early twenties but at a more respectable age. That alone will lead to more and more children being born to EAs. MacAskill recommends having children in WWOTF. Not sure how many babies his endorsement will ensure, but likely nonzero.

Raising a child is hard, but EAs have probably one additional opportunity to mess it up a bit more than the average parent. Since we believe the world might end if we don't do en...

Increasing/decreasing one's AGI timelines increases/decreases the importance[1] of non-AGI existential risks, because there is more/less time for them to occur[2].

Further, as time passes and we get closer to AGI, the importance of non-AI x-risk decreases relative to AI x-risk. This is a particular case of the above claim.

1. ^

but not necessarily tractability & neglectedness

2. ^

If we think that nuclear/bio/climate/other work becomes irrelevant post-AGI, which seems very plausible to me
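The timelines claim above can be formalized with a simple hazard-rate model: if a non-AGI existential risk strikes at a constant annual rate h, the probability it occurs before AGI arrives in T years is 1 - exp(-hT), which grows with T. This is my own illustrative framing, and the 0.1%/year hazard rate is a made-up figure.

```python
import math

# Toy sketch: with a constant annual hazard rate h for a non-AGI x-risk,
# the chance it strikes before AGI arrives in T years is 1 - exp(-h*T).
# Longer timelines (larger T) mean more exposure to the non-AGI risk.
def p_non_agi_risk_before_agi(h, T):
    return 1 - math.exp(-h * T)

for T in (10, 30, 100):  # h = 0.1%/year is an illustrative assumption
    print(f"T={T:3d}y: P(non-AGI risk first) = "
          f"{p_non_agi_risk_before_agi(0.001, T):.3f}")
```

The monotone increase in T is exactly the footnoted claim: if nuclear/bio/climate work is irrelevant post-AGI, shorter timelines shrink the window in which those risks matter.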

SBF's views on utilitarianism

After hearing about his defrauding of FTX, like everyone else, I wondered why he did it. I haven't met Sam in over five years, but one thing I can do is look at his old Felicifia comments. Back in 2012, Sam identified as an act utilitarian, and said that he would follow rules (such as abstaining from theft) only if and when there was a real risk of getting caught. You can see this in the following pair of quotes.

Quote #1. Regarding the Parfit's Hiker thought experiment, he said:

I'm not sure I underst

...

I could go around stealing money from people because I can spend the money in a more utilitarian way than they can, but that wouldn't be the utilitarian thing to do because I was leaving out of my calculation the fact that I may end up in jail if I do so.

Wow, I guess he didn't pay heed to his own advice here then!

Maybe he became deluded about his chances of success, and simply miscalculated, although this seems unlikely.

I don't think this is that unlikely. He came across as a deluded megalomaniac in the chat with Kelsey (like even now he thinks there's a decent chance he can make things right!)