Three social forces at the root of FTX's collapse
Hi folks, I shared some thoughts I wrote up about Sam Bankman-Fried. I worry that there's a bit of a social cascade that's leading us to draw the wrong lessons from what happened. I'm not 100% confident in the facts of what happened -- though, as a former securities litigator in the post-2008 period, I think I have more experience than most -- but I don't see a particularly compelling case for fraud. I also think the focus on a single person's supposed indiscretions, whether true or otherw... (read more)
Haha, I wrote a similarly titled article sharing the premise that Sam's actions seem more indicative of a mistake than a fraud: https://forum.effectivealtruism.org/posts/w6aLsNppuwnqccHmC/in-defense-of-sbf
I appreciated the personal notes about SBF's interactions with the animal welfare community. I do think the EA tribalism element is very real as well. I also appreciate the point about trying to work on something intrinsically motivating - I'm not sure it's possible for every individual, but I do feel like my own intrinsic love of work helps a lot with putting in a lot of time and effort!
accidental duplicate post
It was shared here - https://forum.effectivealtruism.org/posts/YS3gn2KRR9rEBgjvJ/sense-making-around-the-ftx-catastrophe-a-deep-dive-podcast
Starting a NYE donation push. An EA has already committed $40k. Aiming for $111k, though the ultimate goal is to get a group together to match and discuss opportunities that can lead to a new year of giving. (I'm recruiting many folks who haven't given substantially before. I hope to have more EAs who can fuel conversation toward effectively combating preventable suffering.) Also, experimenting with Twitter promotion in this age of Elon.. https://twitter.com/bbertucc/status/1597980309256957952 .. Feel free to email or DM me if I'm too slow on the form reply .. blake[at]philosophers[dot]group ..
Apologies for posting four shortforms in a row. I accumulated quite a few ideas in recent days, and I poured them all out.
Summary: When exploring/prioritizing causes and interventions, EA might be neglecting alternative future scenarios, especially along dimensions orthogonal to popular EA topics. We may need to consider causes/interventions that specifically target alternative futures, as well as add a "robustness across future worlds" dimension to the ITN framework.
Epistemic status: low confidence
In cause/intervention exploration, evaluation and prioriti... (read more)
Epistemic status: I only spent 10 minutes thinking about this before I started writing.
Idea: Funders may want to pre-commit to rewarding whoever accomplishes a certain goal. (E.g. a funder like Open Phil could commit to awarding a pool of money to the people/orgs who reduce meat consumption to a certain level, with the pool split in proportion to each contribution.)
This can be seen as a version of retroactive funding, but it's special in that the funder makes a pre-commitment.
(I don't know a lot about retroactive funding/impact m... (read more)
Four podcasts on animal advocacy that I recommend:
Summary: This is a slightly steelmanned version of an argument for creating a mass social movement as an effective intervention for animal advocacy (which I think is neglected by EA animal advocacy), based on a talk by people at Animal Think Tank. (Vote on my comment below to indicate if you think it's worth expanding into a top-level post)
link to the talk; alternative version with clearer audio, whose contents - I guess - are similar, but I'm not sure. (This shortform doesn't cover all content of the talk, and has likely misinterpreted something in the ta... (read more)
Statement: This shortform is worth expanding into a top-level post.
Please upvote/downvote this comment to indicate agreement/disagreement with the above statement. Please don't hesitate to cast downvotes.
If you think it's valuable, it'll be really great if you are willing to write this post, as I likely won't have time to do that. Please reach out if you're interested - I'd be happy to help by providing feedback etc., though I'm no expert on this topic.
This is a short follow-up to my post on the optimal timing of spending on AGI safety work, which, given exact values for the future real interest rate, diminishing returns, and other factors, calculated the optimal spending schedule for AI risk interventions.
This has also been added to the post’s appendix and assumes some familiarity with the post.
Here I consider the most robust spending policies and suppose uncertainty over nearly all parameters in the model. Inputs that are not considered include: historic spending on research and influence, rather... (read more)
The FTX crisis through the lens of wikipedia pageviews.
(Relevant: comparing the amounts donated and defrauded by EAs)
1. In the last two weeks, SBF has had about 2M views to his Wikipedia page. This absolutely dwarfs the number of pageviews for any major EA previously.
2. Viewing the same graph on a logarithmic scale, we can see that even before the recent crisis, SBF was the best-known EA. Second was Moskovitz, and roughly tied at third are Singer and MacAskill.
3. Since the scandal, many people will have heard about effective altruism, in a negative light.... (read more)
Updated pageview figures:
There are apparently five films/series/documentaries coming up on SBF - these four, plus Amazon.
Check out Tom Barnes' post on Air Pollution, a neglected problem.
I’ve been thinking hard about whether to publicly comment more on FTX in the near term. Much for the reasons Holden gives here, and for some of the reasons given here, I’ve decided against saying any more than I’ve already said for now.
I’m still in the process of understanding what happened, and processing the new information that comes in every day. I'm also still working through my views on how I and the EA community could and should respond. I know this might be dissatisfying, and I’m really sorry about that, but I think it’s the rig... (read more)
Do you plan to comment in a few weeks, a few months, or not planning to comment publicly? Or is that still to be determined?
We should be donating more frequently so we're happier and feel more encouraged to donate
We know we get some happiness and fulfillment from donating money to a cause we care about (for instance, see https://www.science.org/content/article/secret-happiness-giving). If we could get even more joy from donating the same amount of money, it would make us happier (benefitting ourselves) and encourage us to keep giving more (benefitting others). To me, there's a huge difference between donating $10,000 at once to a single charity and donating $100 one... (read more)
I used to donate monthly instead of at the end of the year. I eventually decided there were advantages to donating at the end of the year*, though there may be ways to get both benefits, like donating a small portion monthly to get the good feelings more often.
* orgs have a more complete picture of their funding needs, donation matching opportunities, maybe you'd benefit from something like donating stock which may have some overhead you don't want to repeat, you have the most information available, evaluators have put out their new recommendations, ...
Putting things in perspective: what is and isn't the FTX crisis, for EA?
In thinking about the effect of the FTX crisis on EA, it's easy to fixate on one aspect that is really severely damaged, and then to doomscroll about that, or conversely to focus on an aspect that is more lightly affected, and therefore to think all will be fine across the board. Instead, we should realise that both of these things can be true for different facets of EA. So in this comment, I'll now list some important things that are, in my opinion, badly damaged, and some that aren't... (read more)
Fwiw I'm not sure it badly damages the publishability. It might lead to more critical papers, though.
Charitable giving epistemology or doing your due diligence
I was just reading the 80,000 Hours email on how they could have seen the warning signs with FTX and whether EA has made any mistakes taking his money. I think it is a worthwhile question, but the answer will seem like a Gordian knot unless you separate two distinct issues.
The only way to avoid any future SBFs is to look at the books. It's also the only way EA could really have known about SBF's problems beforehand. This is obviously a good way to kill off a lot of charitable giving. ... (read more)
Hi, I have a blog where I talk about lots of issues related to EA and utilitarianism -- here it is if anyone's interested. https://benthams.substack.com/
The list of minor examples of muddling solutions includes:
So the EA Forum has, like, an ancestor? Is this common knowledge? Lol
Felicifia: no longer functional, but still available to view. Learned about it thanks to a tweet from Jacy.
From Felicifia Is No Longer Accepting New Users:
Update: threw together
EAs should be aware that it is easy to traumatize (their own) kids with apocalyptic narratives.
The EA population is getting older: we are no longer in our early twenties but at a more respectable age. That alone will lead to more and more children being born to EAs. MacAskill recommends having children in WWOTF. I'm not sure how many babies his endorsement will ensure, but likely a nonzero number.
Raising a child is hard, but EAs have probably one additional opportunity to mess it up a bit more than the average parent. Since we believe the world might end if we don't do en... (read more)
Lengthening/shortening one's AGI timelines increases/decreases the importance of non-AGI existential risks, because there is more/less time for them to occur.
Further, as time passes and we get closer to AGI, the importance of non-AI x-risk decreases relative to AI x-risk. This is a particular case of the above claim.
but not necessarily tractability & neglectedness
If we think that nuclear/bio/climate/other work becomes irrelevant post-AGI, which seems very plausible to me
SBF's views on utilitarianism
After hearing that he had defrauded FTX's customers, like everyone else, I wondered why he did it. I haven't met Sam in over five years, but one thing I can do is take a look at his old Felicifia comments. Back in 2012, Sam identified as an act utilitarian, and said that he would follow rules (such as abstaining from theft) only if and when there was a real risk of getting caught. You can see this in the following pair of quotes.
Quote #1. Regarding the Parfit's Hiker thought experiment, he said:
I'm not sure I underst
I could go around stealing money from people because I can spend the money in a more utilitarian way than they can, but that wouldn't be the utilitarian thing to do because I was leaving out of my calculation the fact that I may end up in jail if I do so.
Wow, I guess he didn't pay heed to his own advice here then!
Maybe he became deluded about his chances of success and simply miscalculated, although this seems unlikely.
I don't think this is that unlikely. He came across as a deluded megalomaniac in the chat with Kelsey (like even now he thinks there's a decent chance he can make things right!)