Research scholar @ FHI and assistant to Toby Ord. Philosophy student before that.
Agreed. I don't think this video got anything badly wrong, but do be aware that there are plenty of EA types on this forum and elsewhere who would be happy to read over and comment on scripts.
Thanks, good catch! Possibly should have read over my reply...
Thanks for making both those updates :)
I should also mention that Toby Ord's 1/6 (17ish%) figure is for the chance of extinction this century, which isn't made totally clear in the video (although I appreciate not much can be done about that)!
Great work! I'm really excited about seeing more quality EA/longtermism-related content on YouTube (in the mold of e.g. Kurzgesagt and Rob Miles) and this was a fun and engaging example. Especially enjoyed the cartoon depictions of Nick Bostrom and Toby Ord :)
Quick note: the link in the video description for 'The Case for Strong Longtermism' currently links to an old version. It has since been significantly revised, so you might consider linking to that instead.
The practice of foot binding stands out for me. It originated in China as early as the 10th century, and remained commonplace up until the early 20th. Foot binding was painful and permanently disabling, and rendered women effectively housebound and wholly dependent on their husbands.
From the tiny amount I've read, it sounds like the practice was sustained for so long through some combination of (Neo-)Confucian attitudes, and its entrenched role as a marker of beauty / femininity / honour / national identity which was only really possible to escape collectively — similar to less terrible but more familiar examples of harmful beauty standards. Nor could it easily be ended top-down: the Shunzhi Emperor of the Qing dynasty even tried to abolish the practice, but failed partly owing to its popular support.
The especially sad thing is how contingent its origins seem to be — upper-class women began to imitate a court dancer who, according to one story, bound her feet "into the shape of a new moon" while dancing for the emperor. The practice took hold as a status symbol among the elite, and spread throughout China.
I would be confident in saying at least half a billion women were subjected to this. One estimate claims some 2 billion women broke and bound their feet in total.
If skepticism about free will renders the EA endeavor void, then wouldn't it also render any action-guiding principles void (including principles about what's best to do out of self-interest)? In which case, it seems odd to single out its consequences for EA.
You sometimes see some (implicit) moving between "we did this good thing, but there's a sense in which we can't take credit, because it was determined before we chose to do it" and "we did this good thing, but there's a sense in which we can't take credit, because it would have happened whether or not we chose to do it", where the latter can be untrue even if the former is always true. The former doesn't imply anything about what you should have done instead, while the latter does but has nothing to do with skepticism about free will. So even if determinism undermines certain kinds of "you ought to x" claims, it doesn't imply "you ought to not bother doing x" — it does not justify resignation. There is a parallel (though maybe more problematic) discussion about what to do about the possibility of nihilism.
Anyway, even skeptics about free will can agree that ex post it was good that the good thing happened (compared to it not happening), and they can agree that certain choices were instrumental in it happening (if the choices hadn't been made, it wouldn't have happened). Looking forward, the skeptic could also understand "you ought to x" claims as saying "the world where you do x will be better than the world where you don't, and I don't have enough information to know which world we're in". They also don't need to deny that people are and will continue to be sensitive to "ought" claims, in the sense that explaining to people why they ought to do something can make them more likely to do it compared to the world where you don't explain why. Basically, counterfactual talk can still make sense for determinists. And all this seems like more than enough for anything worth caring about — I don't think any part of EA requires our choices to be undetermined or freely made in some especially deep way.
Some things you might be interested in reading —
I think maybe this free will stuff does matter in a more practical way when it comes to prison reform and punishment, since (plausibly) support for 'retributive' punishment over rehabilitation comes from attitudes about free will and responsibility that are either incoherent or wrong in an influenceable way.
I think it would be an odd move for 80K to launch a TikTok before a YouTube or Instagram. But 80K aside, I am very keen to see more high-quality EA video content, on TikTok and YouTube. Isabelle Boemeke's Isodope project (promoting nuclear energy) is an amazing if somewhat sui generis example.
Haven't read this fully yet, but I'm really excited to see someone thinking about applications of complexity science / economics for EA, and I was vaguely intending to write something along these lines if nobody else did soon. So thanks for posting!