AI Summary of the "Quick Update on Leaving the Board of EV" Thread (including comments):
Rebecca Kagan's resignation from the board of Effective Ventures (EV) over disagreements about the handling of the FTX crisis has sparked an intense discussion within the Effective Altruism (EA) community. Kagan believes that the EA community needs an external, public investigation into its relationship with FTX and its founder, Sam Bankman-Fried (SBF), to address mistakes and prevent future harm. She also calls for clarity on EA leadership and their responsibilities to avoid confusion and indirect harm.
The post generated extensive debate, with many community members echoing the call for a thorough, public investigation and postmortem. They argue that understanding what went wrong, who was responsible, and what structural and cultural factors enabled these mistakes is crucial for learning, rebuilding trust, and preventing future issues. Some point to the concerning perception gap between those who had early concerns about SBF and those who seemingly ignored or downplayed these warnings.
However, others raise concerns about the cost, complexity, and legal risks involved in conducting a comprehensive investigation. They worry about the potential for re-victimizing those negatively impacted by the FTX fallout and argue that the key facts may have already been uncovered through informal discussions.
Alternative suggestions include having multiple individuals with relevant expertise conduct post-mortems, focusing on improving governance and organizational structures, and mitigating the costs of speaking out by waiving legal obligations or providing financial support for whistleblowers.
The thread also highlights concerns about recent leadership changes within EA organizations. Some argue that the departure of individuals known for their integrity and thoughtfulness regarding these issues raises questions about the movement's priorities and direction. Others suggest that these changes may be less relevant due to factors such as the impending disbanding of EV or reasons unrelated to the FTX situation.
Lastly, the discussion touches on the concept of "naive consequentialism" and its potential role in the FTX situation and other EA decisions. The OpenAI board situation is also mentioned as an example of the challenges facing the EA community beyond the FTX crisis, suggesting that the core issues may lie in the quality of governance rather than a specific blind spot.
Overall, the thread reveals a community grappling with significant trust and accountability issues in the aftermath of the FTX crisis. It underscores the urgent need for the EA community to address questions of transparency, accountability, and leadership to maintain its integrity and continue to positively impact the world.
What are the most surprising things that emerged from the thread?
Based on the summaries, a few surprising or noteworthy things emerged from the "Quick Update on Leaving the Board of EV" thread:
1. The extent of disagreement and concern within the EA community regarding the handling of the FTX crisis, as highlighted by Rebecca Kagan's resignation from the EV board and the subsequent discussion.
2. The revelation of a significant perception gap between those who had early concerns about Sam Bankman-Fried (SBF) and those who seemingly ignored or downplayed these warnings, suggesting a lack of effective communication and information-sharing within the community.
3. The variety of perspectives on the necessity and feasibility of conducting a public investigation into the EA community's relationship with FTX and SBF, with some advocating strongly for transparency and accountability, while others raised concerns about cost, complexity, and potential legal risks.
4. The suggestion that recent leadership changes within EA organizations may have been detrimental to reform efforts, with some individuals known for their integrity and thoughtfulness stepping back from their roles, raising questions about the movement's priorities and direction.
5. The mention of the OpenAI board situation as another example of challenges facing the EA community, indicating that the issues extend beyond the FTX crisis and may be rooted in broader governance and decision-making processes.
6. The discussion of "naive consequentialism" and its potential role in the FTX situation and other EA decisions, suggesting a need for the community to re-examine its philosophical foundations and decision-making frameworks.
7. The emotional weight and urgency conveyed by many community members regarding the need for transparency, accountability, and reform, underscoring the significance of the FTX crisis and its potential long-term impact on the EA movement's credibility and effectiveness.
These surprising elements highlight the complex nature of the challenges facing the EA community and the diversity of opinions within the movement regarding the best path forward.
My opinionated and annotated summary / distillation of SBF's account of the FTX crisis, based on recent articles and interviews (particularly this Bloomberg article).
Over the past year, the macroeconomy changed: central banks raised interest rates, and crypto lost value. Then, after a crypto crash in May, Alameda needed billions, fast, to repay its nervous lenders, or it would go bust.
According to sources, Alameda's CEO Ellison said that she, SBF, Gary Wang, and Nishad Singh held a meeting about the shortfall and decided to lend FTX user funds to Alameda. If true, they knowingly committed fraud.
SBF’s account is different:
Generally, he didn’t know what was going on at Alameda anymore, despite owing 90% of it. He disengaged because he was busy running FTX and for 'conflict of interest reasons'.[1]
He didn’t pay much attention during the meeting and it didn’t seem like a crisis, but just a matter of extending a bit more credit to Alameda (from $4B by $6B[2] to ~$10B[3]). Alameda already traded on margin and still had collateral worth way more than enough to cover the loan, and, despite having been the liquidity provider historically, seemed to be less important over time, as they made up an ever smaller fraction of all trades.
Yet they still had larger limits than other users, who’d get auto-liquidated if their positions got too big and risky. He didn’t realize that Alameda’s position on FTX got much more leveraged, and thought the risk was much smaller. Also, a lot of Alameda’s collatoral was FTT, ~FTX stock, which rapidly lost value.
If FTX had liquidated, Alameda and maybe even their lenders, would’ve gone bust. And even if FTX didn’t take direct losses, users would’ve lost confidence, causing a hard-to-predict cascade of events.
If FTX hadn’t margin-called there was ~70% chance everything would be OK, but even if not, downside and risk would have been much smaller, and the hole more manageable.
SBF thought FTX and Alameda’s combined accounts were:
Debt: $8.9B
Assets:
Cash: $9B
'Less liquid': $15.4B
'Illiquid': $3.2B
Naively, despite some big liabilities, they should have been able to cover it.
But crucially, they actually had $8B less cash: since FTX didn't have a bank account when it first started, users sent >$5B[4] to Alameda, and their bad accounting then double-counted the money by crediting both. Many users' funds never moved from Alameda, and FTX users' accounts were credited with a notional balance that did not represent underlying assets held by FTX—users traded with crypto that did not actually exist.
This is why Alameda invested so much, while FTX didn’t have enough money when users tried to withdraw.[5]
They spent $10.75B in total.[6] Even after FTX/Alameda profits (at least $10B[8]) and the VC money they raised ($2B[9]; as an aside: after raising $400M in Jan, they tried to raise money again in July[10] and again in Sept[11]), all this nets out to minus $6.5B. The FT says FTX is short $8B[12] of ~1M users'[13] money. In sum, this happened because he didn't realize that they spent way more than they made, he paid very little attention to expenses, he was really lazy about mental math, and there was a diffusion of responsibility among leadership.
While FTX.US was more like a bank (highly regulated, with reserves matching what users put in), FTX int'l was an exchange. Legally, exchanges don't lend out users' funds; rather, users themselves lend their funds to other users (of which Alameda was just one), and FTX merely facilitated this. An analogy: file-sharing platforms like Napster never illegally upload music themselves, but just facilitate peer-to-peer sharing.
Much more than $1B of user funds (SBF: '~$8B-$10B at its peak'[14]) was opted into peer-to-peer lending / order-book margin trading (others say it was less than $4B[15]; all user deposits were $16B[16]). Also, while parts of the terms of service say that FTX never lends out users' assets, those parts are overridden by other parts of the terms of service, and he isn't aware that FTX violated the terms of use (see FTX Terms of Service).
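The double-counting described above can be made concrete with a toy consolidation (illustrative numbers of my own; FTX's real books are not public): a user deposit wired to Alameda's bank account gets booked once as Alameda cash and again as an FTX customer balance, so the group appears to hold the money twice.

```python
# Toy consolidation illustrating the double-count (illustrative numbers,
# not FTX's actual accounts).
routed_to_alameda = 5.0   # $B: user deposits wired to Alameda, never moved to FTX
ftx_own_cash = 4.0        # $B: assumed cash actually held at FTX

# Naive books: Alameda's cash AND the matching FTX customer balances are
# both treated as assets, so the routed deposits are counted twice.
apparent_cash = ftx_own_cash + routed_to_alameda + routed_to_alameda
real_cash = ftx_own_cash + routed_to_alameda

phantom = apparent_cash - real_cash
print(phantom)  # 5.0 — the shortfall equals the amount routed through Alameda
```

However large the real numbers were, the shape is the same: the apparent cash overstates the real cash by exactly the amount credited in both places.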
—
For me, the key remaining questions are:
Did many users legally agree to their crypto being lent out without meaning to, simply by accepting the terms of service, even if they didn't opt into the lending program? If so, it might be hard to hold FTX legally accountable, especially since they're based in the Bahamas.
If they did effectively lend out customer funds, did they do it multiple times (perhaps repeatedly since the start of FTX), or just once?
Did FTX make it look like users' money was very secure, as in a highly regulated bank, and not at risk, e.g. by partnering with Visa for crypto debit cards[17] or by blurring the line between FTX.US ('A safe and easy way to get into crypto') and FTX.com?
Did FTX sweep users into opting into peer-to-peer lending?
ht/ to Ryan Carey: ‘notably some of this could be consistent with macro conditions crushing their financial position, especially the VC investments in crypto.’
How can we encourage people to include a 75-word TL;DR in every post? 75 words seems to be what is visible in the preview pane when hovering over the title of a post.
Perhaps after hitting submit, people could be prompted to add a TL;DR to the top of the post.
Give the writer +2 karma if they fill in a 75-word 'TL;DR' section in the post.
Have a 'TL;DR-ed posts' section on the front page.
Start a new convention by writing a comment like 'great post, thanks, it'd be even better if you had a TL;DR, what d'you think?' after each post you see (if you do that, I'll promptly follow).
I thought this was a good idea. I have submitted this as an issue here: https://github.com/ForumMagnum/ForumMagnum/issues/4825
Would be cool if we could deploy a really great ML summarization tool to use on posts to make this sort of thing automatic.
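The 75-word hover preview mentioned above can be sketched as a simple truncation; this is my assumption of how the preview works, not the Forum's actual code, and `needs_tldr_prompt` is a hypothetical helper for the post-submit prompt idea.

```python
# Sketch of a 75-word preview truncation (assumed behaviour, not the
# Forum's actual implementation). A prompt-for-TL;DR hook could reuse the
# same word count: if a post exceeds the preview budget, nudge the author.
def preview(text: str, max_words: int = 75) -> str:
    words = text.split()
    if len(words) <= max_words:
        return text
    return " ".join(words[:max_words]) + "…"

def needs_tldr_prompt(post_text: str, max_words: int = 75) -> bool:
    # Hypothetical check run after hitting submit.
    return len(post_text.split()) > max_words
```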
You can quickly check what others are thinking about the articles you read online through a "bookmarklet": just one click on the bookmark in your browser takes you right to the Twitter response to any article.
In Chrome you can create this by going to chrome://bookmarks/ and choosing "Add new bookmark", with:
Bookmark name: Twitter response
URL: javascript:window.location='https://twitter.com/search?q='+window.location
~140,000 people from Hong Kong might move to the UK this year (~322k in total over the next 5 years [source]).
Are they particularly well placed to work on Sino-Western relations? (They're better at bridging the cultural and linguistic gap, and are likely highly determined.) Should we prioritize helping them somehow?
Hong Kong linkup is an organisation for Brits to help their HK peers settle in. If you'd like a way to get to know the community of new HK immigrants, it's probably a good option. I've signed up already.
https://www.hklinkup.uk/
I would have thought they would be unusually badly placed, because the regime will view them as traitors, for the same reason I would not recommend using apostates for outreach to Muslims.
That was precisely my point actually—just like Hirsi Ali might be well-placed to advocate for women's rights within Islam, people from Hong Kong might be well placed to highlight e.g. human rights issues in China.
Ahh, in that case I agree that HKers, or even better Uighurs, would be well placed. But my impression was that 80k etc.'s concerns about China mainly revolved around things like improving Western-Chinese coordination to reduce the risk of war, AI race or climate change, rather than human rights. I would think that putting pressure on them for human rights abuses would be likely to make this worse, as the CCP views such activism as an attack on their system. It is hard to cooperate with someone if they are denouncing you as evil and funding your dissidents.
A draft of Eric Schwitzgebel's new book 'The Weirdness of the World' from October 26, 2021 with a few EA-relevant themes:
1 In Praise of Weirdness
2 If Materialism Is True, the United States Is Probably Conscious
3 Universal Bizarreness and Universal Dubiety
4 1% Skepticism
5 Kant Meets Cyberpunk
6 An Innocent and Wonderful Definition of Consciousness
7 Experimental Evidence for the Existence of an External World
8 The Loose Friendship of Visual Experience and Reality
9 Is There Something It’s Like to Be a Garden Snail? Or: How Sparse or Abundant Is Consciousness in the Universe?
10 The Moral Status of Future Artificial Intelligence: Doubts and a Dilemma
11 Weirdness and Wonder
Quote:
"1. What I Will Argue in This Book.
Consider three huge questions: What is the fundamental structure of the cosmos? How does human consciousness fit into it? What should we value? What I will argue in this book – with emphasis on the first two questions, but also sometimes drawing implications for the third – is (1.) the answers are currently beyond our capacity to know, and (2.) we do nonetheless know at least this: Whatever the truth is, it’s weird. Careful reflection will reveal all of the viable theories on these grand topics to be both bizarre and dubious. In Chapter 3 (“Universal Bizarreness and Universal Dubiety”), I will call this the Universal Bizarreness thesis and the Universal Dubiety thesis. Something that seems almost too crazy to believe must be true, but we can’t resolve which of the various crazy-seeming options is ultimately correct. If you’ve ever wondered why every wide-ranging, foundations-minded philosopher in the history of Earth has held bizarre metaphysical or cosmological views (each philosopher holding, seemingly, a different set of bizarre views), Chapter 3 offers an explanation. I will argue that given our weak epistemic position, our best big-picture cosmology and our best theories of consciousness are tentative, modish, and strange. Strange: As I will argue, every approach to cosmology and consciousness has bizarre implications that run strikingly contrary to mainstream “common sense”. Tentative: As I will also argue, epistemic caution is warranted, partly because theories on these topics run so strikingly contrary to common sense and also partly because they test the limits of scientific inquiry. Indeed, dubious assumptions about the fundamental structure of mind and world frame or undergird our understanding of the nature and value of scientific inquiry, as I discuss in Chapters 4 (“1% Skepticism”), 5 (“Kant Meets Cyberpunk”), and 7 (“Experimental Evidence for the Existence of an External World”)
Modish: On a philosopher’s time scale – where a few decades ago is “recent” and a few decades hence is “soon” – we live in a time of change, with cosmological theories and theories of consciousness rising and receding based mainly on broad promise and what captures researchers’ imaginations. We ought not trust that the current range of mainstream academic theories will closely resemble the range in a hundred years, much less the actual truth. Even the common garden snail defies us (Chapter 9, “Is There Something It’s Like to Be a Garden Snail?”). Does it have experiences? If so, how much and of what kind? In general, how sparse or abundant is consciousness in the universe? Is consciousness – feelings and experiences of at least the simplest, least reflective kind – cheap and common, maybe even ubiquitous? Or is consciousness rare and expensive, requiring very specific conditions in the most sophisticated organisms? Our best scientific and philosophical theories conflict sharply on these questions, spanning a huge range of possible answers, with no foreseeable resolution. The question of consciousness in near-future computers or robots similarly defies resolution, but with arguably more troubling consequences: If constructions of ours might someday possess humanlike emotions and experiences, that creates moral quandaries and puzzle cases for which our ethical intuitions and theories are unprepared. In a century, the best ethical theories of 2022 might seem as quaint and inadequate as medieval physics applied to relativistic rocketships (Chapter 10, “The Moral Status of Future Artificial Intelligence: Doubts and a Dilemma”)."
I wonder how much demand there'd be for a 'Hackernews'-style, high-frequency, link-only subreddit. I feel there's too much of a barrier to posting links on the EA Forum. Thoughts?
Hi Hauke. Sadly that's an admin-only feature that involves editing raw HTML.[1] We use it for Holden's posts because he crossposts them from his own blog, where he uses them. We have talked about adding Forum-native collapsible sections; I'll take your question as an endorsement.
There are multiple reasons this can't be opened up to all users. The first, albeit surmountable, one is that it is relatively easy to introduce cross-site scripting (XSS) vulnerabilities when editing raw HTML.
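The usual defence against that risk is allowlist-based sanitization: anything not explicitly permitted (script tags, event-handler attributes) gets stripped. A minimal sketch, assuming a tiny allowlist of my own choosing; this is not the Forum's actual sanitizer, and a real deployment should use a maintained sanitizer library:

```python
# Minimal allowlist HTML sanitizer sketch (illustration only, not the
# Forum's real code). Disallowed elements and ALL attributes are dropped,
# since a single onclick=... or <script> tag is enough for XSS.
from html import escape
from html.parser import HTMLParser

ALLOWED_TAGS = {"b", "i", "em", "strong", "p", "details", "summary"}

class AllowlistSanitizer(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip_depth = 0  # >0 while inside a disallowed element

    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED_TAGS:
            if not self.skip_depth:
                self.out.append(f"<{tag}>")  # attributes deliberately dropped
        else:
            self.skip_depth += 1  # drop the element and its contents

    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            if not self.skip_depth:
                self.out.append(f"</{tag}>")
        elif self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(escape(data))  # escape text so it can't inject markup

def sanitize(html_src: str) -> str:
    p = AllowlistSanitizer()
    p.feed(html_src)
    p.close()
    return "".join(p.out)
```

Note how aggressive even this toy has to be: it keeps the collapsible `<details>`/`<summary>` structure but silently discards a `<script>` tag together with its contents, and rewrites every kept tag without attributes.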
In October 2019, Abhijit Banerjee, Esther Duflo, and Michael Kremer jointly won the 51st Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel "for their experimental approach to alleviating global poverty." But what is the exact scope of their experimental method, known as randomized control trials (RCTs)? Which sorts of questions are RCTs able to address and which do they fail to answer? The first of its kind, Randomized Control Trials in the Field of Development: A Critical Perspective provides answers to these questions, explaining how RCTs work, what they can achieve, why they sometimes fail, how they can be improved and why other methods are both useful and necessary. Bringing together leading specialists in the field from a range of backgrounds and disciplines (economics, econometrics, mathematics, statistics, political economy, socioeconomics, anthropology, philosophy, global health, epidemiology, and medicine), it presents a full and coherent picture of the main strengths and weaknesses of RCTs in the field of development. Looking beyond the epistemological, political, and ethical differences underlying many of the disagreements surrounding RCTs, it explores the implementation of RCTs on the ground, outside of their ideal theoretical conditions and reveals some unsuspected uses and effects, their disruptive potential, but also their political uses. The contributions uncover the implicit worldview that many RCTs draw on and disseminate, and probe the gap between the method's narrow scope and its success, while also proposing improvements and alternatives.
Without disputing the contribution of RCTs to scientific knowledge, Randomized Control Trials in the Field of Development warns against the potential dangers of their excessive use, arguing that the best use for RCTs is not necessarily that which immediately springs to mind. Written in plain language, this book offers experts and laypeople alike a unique opportunity to come to an informed and reasoned judgement on RCTs and what they can bring to development.
Table of Contents
General Introduction, Florent Bédécarrats, Isabelle Guérin, and François Roubaud
0: Randomization in the Tropics Revisited: A Theme and Eleven Variations, Sir Angus Deaton
1: Should the Randomistas (Continue to) Rule?, Martin Ravallion
2: Randomizing Development: Method or Madness?, Lant Pritchett
3: The Disruptive Power of RCTs, Jonathan Morduch
4: RCTs in Development Economics, Their Critics, and Their Evolution, Timothy Ogden
5: Reducing the Knowledge Gap in Global Health Delivery: Contributions and Limitations of Randomized Controlled Trials, Andres Garchitorena, Megan Murray, Bethany Hedt-Gauthier, Paul Farmer, and Matthew Bonds
6: Trials and Tribulations: The Rise and Fall of the RCT in the WASH Sector, Dean Spears, Radu Ban, and Oliver Cumming
7: Microfinance RCTs in Development: Miracle or Mirage?, Florent Bédécarrats, Isabelle Guérin, and François Roubaud
8: The Rhetorical Superiority of Poor Economics, Agnès Labrousse
9: Are the 'Randomistas' Evaluators?, Robert Picciotto
10: Ethics of RCTs: Should Economists Care about Equipoise?, Michel Abramowicz and Ariane Szafarz
11: Using Priors in Experimental Design: How Much Are We Leaving on the Table?, Eva Vivalt
12: Epilogue: Randomization and Social Policy Evaluation Revisited, James J. Heckman
Interviews
"star-manning is to not only engage with the most charitable version of your opponent's argument, but also with the most charitable version of your opponent, by acknowledging their good intentions and your shared desires despite your disagreements. In our UBI example, star-manning would be to amend the steel man with something like, "…and you're in favor of this because you think it will help people lead safer, freer, and more fulfilled lives—which we both want." If used properly, star-manning can serve as an inoculant against our venomous discourse and a method for planting disputes on common ground rather than a fault line."
The question I'd have about "human enhancement" with technology is this: given that we have very little such technology at present, what is one's hard limit to moral goodness (and thus one's "fatedness" to a relative privation of goodness compared to another), and how can one reliably determine it?
If we had a tag called "Links" for posts that aren't displayed on the front page, then we could have a "Hackernews"/"Reddit"-style section where people can share, without comment, external links related to EA or that could be discussed in the context of EA. This would be different from current "link posts", which might have a higher (imagined) bar to posting.
Along similar lines, there could be a low-effort way for the current Shortform function to emulate Twitter, where the 'magic' sorting algorithm also takes into account the length of the post.
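One way such a length-aware 'magic' sort could work is to damp a post's karma-and-recency score by its length. Everything below (the decay exponent, the 280-character target) is a made-up illustration, not the Forum's actual algorithm:

```python
# Hypothetical length-aware feed score (illustration only; not the Forum's
# real sorting algorithm). Short, tweet-like shortform posts are penalised
# less, so they surface more often for equal karma and age.
import math

def feed_score(karma: float, age_hours: float, n_chars: int,
               target_chars: int = 280) -> float:
    recency_decay = (1 + age_hours / 24) ** 1.5          # assumed decay curve
    length_penalty = 1 + math.log1p(n_chars / target_chars)  # grows slowly with length
    return karma / (recency_decay * length_penalty)
```

The logarithmic penalty is a deliberate design choice in this sketch: it nudges the feed toward short posts without ever burying a long, high-karma one completely.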
Giving a kidney in exchange for money sounds less "exploitative" (I don't think it is; I'm modeling public perception) than giving a kidney in exchange for cosmetic surgery, because expectations are more explicit for money than for surgery.
I think this is basically because:
People in need of such surgery are either "irrationally" obsessed with their body image or in need of a medical intervention they can't afford.
I imagine giving a kidney in exchange for fat removal would lead to more regret:
Receiving money is a reasonably straightforward, bounded event (e.g., not "wasting the money" afterward is the recipient's own responsibility).
Fat removal, by contrast, implicitly sells persistent results. Regaining the weight is plausible, and people are likely to misprice the risks due to wishful thinking. Also, incentives are pretty asymmetric, so "I was misled about efficacy" might be a common complaint.
I find C19 might cause 6m - 87m YLLs (highly dependent on the number of deaths). For comparison, substance abuse causes 13m YLLs and diarrhea causes 85m.
Countries often spend 1-3x GDP per capita to avert a DALY, and so the world might want to spend $2-8trn to avert C19 YLLs (this could also serve as a rough proxy for the cost of C19).
One of the many simplifying assumptions of this model is that it excludes disability caused by C19, which might be severe.
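The arithmetic behind the $2-8trn figure can be roughly reconstructed, though the post doesn't state its inputs; the GDP-per-capita figure below is my own assumption (death-weighted toward richer countries), chosen to show the shape of the calculation rather than to reproduce the model exactly:

```python
# Back-of-the-envelope reconstruction of the $2-8trn range. The $30k
# GDP-per-capita figure is an assumption of mine (the post does not state
# which figure it used); applied to the high-end YLL estimate, the 1-3x
# willingness-to-pay range lands close to the quoted figures.
ylls_high = 87e6            # high-end C19 years of life lost, from the text
gdp_per_capita = 30_000     # USD, assumed

low = 1 * gdp_per_capita * ylls_high    # spend 1x GDP/capita per DALY averted
high = 3 * gdp_per_capita * ylls_high   # spend 3x GDP/capita per DALY averted
print(f"${low / 1e12:.1f}trn - ${high / 1e12:.1f}trn")  # $2.6trn - $7.8trn
```

With the low-end YLL estimate (6m) the same formula gives only ~$0.2-0.5trn, so the headline range evidently tracks the high-end death scenario.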
Sources for the SBF/FTX summary above (footnote links, in the order cited):
FTX Founder Sam Bankman-Fried Says He Can’t Account for Billions Sent to Alameda
‘We kind of lost track’: how Sam Bankman-Fried blurred lines between FTX and Alameda | Financial Times
Sam Bankman-Fried’s trading shop was given special treatment on FTX for years | Financial Times
FTX Founder Sam Bankman-Fried Says He Can’t Account for Billions Sent to Alameda
SBF Reveals FTX Was Selling Assets That Didn't Exist - The Defiant
ht/ to Ryan Carey: ‘notably some of this could be consistent with macro conditions crushing their financial position, especially the VC investments in crypto.’
I think he might refer to this: archive.ph/ATPHq#selection-1981.172-1981.301
Milky Eggs » Blog Archive » What happened at Alameda Research
Investors Who Put $2 Billion Into FTX Face Scrutiny, Too - The New York Times
Crypto Brokerage Genesis Tries to Raise Funds and Eyes Bankruptcy
FTX in talks to raise up to $1 billion at valuation of about $32 billion, in line with prior round
‘We kind of lost track’: how Sam Bankman-Fried blurred lines between FTX and Alameda | Financial Times
Sam Bankman-Fried’s trading shop was given special treatment on FTX for years | Financial Times
See video interview here: FTX Founder Sam Bankman-Fried Says He Can’t Account for Billions Sent to Alameda
https://twitter.com/adamscochran/status/1593020920695660546
FTX Tapped Into Customer Accounts to Fund Risky Bets, Setting Up Its Downfall - WSJ
Crypto Exchange FTX’s Token Surges 7% After Visa Partnership Report
How can we encourage people to include a 75-word tl;dr in every post? 75 words seems to be what is visible in the preview pane when hovering over the title of a post.
Perhaps after hitting submit, people could be prompted if they wanted to add a Tl;dr to the top of the post.
I thought this was a good idea. I have submitted this as an issue here: https://github.com/ForumMagnum/ForumMagnum/issues/4825
Would be cool if we could deploy a really great ML summarization tool to use on posts to make this sort of thing automatic.
You can quickly check what others are thinking about the articles you read online through a "bookmarklet": just one click on the bookmark in your browser takes you right to the Twitter responses to any article.
In Chrome you can create this by going to:
chrome://bookmarks/
"Add new bookmark"
Bookmark name:
Twitter response
URL:
javascript:window.location='https://twitter.com/search?q='+encodeURIComponent(window.location.href)
(The encodeURIComponent call escapes the current page's URL so it survives intact as a search query parameter.)
~140,000 people from Hong Kong might move to the UK this year (~322k in total over the next 5 years [source]).
Are they particularly well placed to work on Sino-Western relations? (Because they're better at bridging the cultural (and linguistic) gap and are likely highly determined.) Should we prioritize helping them somehow?
Hong Kong Linkup is an organisation for Brits to help their HK peers settle in. If you'd like a way to get to know the community of new HK immigrants, it's probably a good option. I've signed up already. https://www.hklinkup.uk/
I would have thought they would be unusually badly placed, because the regime will view them as traitors, for the same reason I would not recommend using apostates for outreach to Muslims.
That was precisely my point actually—just like Hirsi Ali might be well-placed to advocate for women's rights within Islam, people from Hong Kong might be well placed to highlight e.g. human rights issues in China.
Ahh, in that case I agree that HKers, or even better Uighurs, would be well placed. But my impression was that 80k etc.'s concerns about China mainly revolved around things like improving Western-Chinese coordination to reduce the risk of war, AI race or climate change, rather than human rights. I would think that putting pressure on them for human rights abuses would be likely to make this worse, as the CCP views such activism as an attack on their system. It is hard to cooperate with someone if they are denouncing you as evil and funding your dissidents.
Working on human rights was just an example, because of the comparison you raised; it could also be CSET-type work.
A draft of Eric Schwitzgebel's new book 'The Weirdness of the World' from October 26, 2021 with a few EA-relevant themes:
1 In Praise of Weirdness
2 If Materialism Is True, the United States Is Probably Conscious
3 Universal Bizarreness and Universal Dubiety
4 1% Skepticism
5 Kant Meets Cyberpunk
6 An Innocent and Wonderful Definition of Consciousness
7 Experimental Evidence for the Existence of an External World
8 The Loose Friendship of Visual Experience and Reality
9 Is There Something It’s Like to Be a Garden Snail? Or: How Sparse or Abundant Is Consciousness in the Universe?
10 The Moral Status of Future Artificial Intelligence: Doubts and a Dilemma
11 Weirdness and Wonder
Quote:
"1. What I Will Argue in This Book.
Consider three huge questions: What is the fundamental structure of the cosmos? How does human consciousness fit into it? What should we value? What I will argue in this book – with emphasis on the first two questions, but also sometimes drawing implications for the third – is (1.) the answers are currently beyond our capacity to know, and (2.) we do nonetheless know at least this: Whatever the truth is, it’s weird. Careful reflection will reveal all of the viable theories on these grand topics to be both bizarre and dubious. In Chapter 3 (“Universal Bizarreness and Universal Dubiety”), I will call this the Universal Bizarreness thesis and the Universal Dubiety thesis. Something that seems almost too crazy to believe must be true, but we can’t resolve which of the various crazy-seeming options is ultimately correct. If you’ve ever wondered why every wide-ranging, foundations-minded philosopher in the history of Earth has held bizarre metaphysical or cosmological views (each philosopher holding, seemingly, a different set of bizarre views), Chapter 3 offers an explanation. I will argue that given our weak epistemic position, our best big-picture cosmology and our best theories of consciousness are tentative, modish, and strange. Strange: As I will argue, every approach to cosmology and consciousness has bizarre implications that run strikingly contrary to mainstream “common sense”. Tentative: As I will also argue, epistemic caution is warranted, partly because theories on these topics run so strikingly contrary to common sense and also partly because they test the limits of scientific inquiry. Indeed, dubious assumptions about the fundamental structure of mind and world frame or undergird our understanding of the nature and value of scientific inquiry, as I discuss in Chapters 4 (“1% Skepticism”), 5 (“Kant Meets Cyberpunk”), and 7 (“Experimental Evidence for the Existence of an External World”)
Modish: On a philosopher’s time scale – where a few decades ago is “recent” and a few decades hence is “soon” – we live in a time of change, with cosmological theories and theories of consciousness rising and receding based mainly on broad promise and what captures researchers’ imaginations. We ought not trust that the current range of mainstream academic theories will closely resemble the range in a hundred years, much less the actual truth. Even the common garden snail defies us (Chapter 9, “Is There Something It’s Like to Be a Garden Snail?”). Does it have experiences? If so, how much and of what kind? In general, how sparse or abundant is consciousness in the universe? Is consciousness – feelings and experiences of at least the simplest, least reflective kind – cheap and common, maybe even ubiquitous? Or is consciousness rare and expensive, requiring very specific conditions in the most sophisticated organisms? Our best scientific and philosophical theories conflict sharply on these questions, spanning a huge range of possible answers, with no foreseeable resolution. The question of consciousness in near-future computers or robots similarly defies resolution, but with arguably more troubling consequences: If constructions of ours might someday possess humanlike emotions and experiences, that creates moral quandaries and puzzle cases for which our ethical intuitions and theories are unprepared. In a century, the best ethical theories of 2022 might seem as quaint and inadequate as medieval physics applied to relativistic rocketships (Chapter 10, “The Moral Status of Future Artificial Intelligence: Doubts and a Dilemma”)."
I created a Zapier zap to post Pablo's ea.news feed of EA blogs and websites to this subreddit:
https://reddit.com/r/eackernews
I wonder how much demand there'd be for a 'Hackernews' style high-frequency link only subreddit. I feel there's too much of a barrier to post links on the EA forum. Thoughts?
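For anyone curious what the zap actually automates: the core step is just pulling (title, link) pairs out of the RSS feed and submitting each as a link post. A stdlib-only sketch of the feed-parsing half (the reddit-submission half would go through the reddit API, which isn't shown; the feed structure assumed here is plain RSS 2.0):

```python
import xml.etree.ElementTree as ET


def extract_feed_links(rss_xml: str) -> list[tuple[str, str]]:
    """Parse an RSS 2.0 feed and return (title, link) pairs per item.

    This is the piece a 'RSS -> Reddit' zap automates; each pair would
    then be submitted to the subreddit as a link post via the reddit API.
    """
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="").strip()
        link = item.findtext("link", default="").strip()
        if title and link:
            items.append((title, link))
    return items
```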
How can you get that new toggle feature / use collapsible content as in this post?
Hi Hauke. Sadly that's an admin-only feature that involves editing raw HTML.[1] We use it for Holden's posts because he's crossposted them from his own blog, where he uses them. We have talked about adding Forum-native collapsible sections — I'll take your question as an endorsement.
There are multiple reasons this can't be opened up to all users. The first, albeit surmountable, one is that it is relatively easy to introduce cross-site scripting vulnerabilities when editing raw HTML.
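To illustrate the XSS concern: the standard mitigation is to escape everything by default and only let a small allowlist of tags through. A toy sketch (the allowlist and regex are illustrative only; a real sanitizer would use a proper HTML parser, not a regex):

```python
import html
import re

# Illustrative allowlist; anything else (e.g. <script>) gets escaped.
ALLOWED_TAGS = {"details", "summary", "b", "i"}


def sanitize(fragment: str) -> str:
    """Escape all tags except a small allowlist.

    Shows why raw-HTML editing is risky: a tag not explicitly allowed
    is rendered as escaped text instead of being interpreted.
    """
    def keep_or_escape(match: re.Match) -> str:
        tag = match.group(2).lower()
        if tag in ALLOWED_TAGS:
            return match.group(0)  # keep allowed tag as-is
        return html.escape(match.group(0))  # neutralize everything else

    return re.sub(r"<(/?)([a-zA-Z][a-zA-Z0-9]*)([^>]*)>", keep_or_escape, fragment)
```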
"RCTs in the Field of Development - A Critical Perspective" is a book that was recently published. Description below.
We cite one of the chapters extensively in our "Growth and the case against randomista development" piece.
The individual chapters all seem to be available for free in preprint version (e.g. http://ftp.iza.org/dp12882.pdf ).
---
In October 2019, Abhijit Banerjee, Esther Duflo, and Michael Kremer jointly won the 51st Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel "for their experimental approach to alleviating global poverty." But what is the exact scope of their experimental method, known as randomized control trials (RCTs)? Which sorts of questions are RCTs able to address and which do they fail to answer? The first of its kind, Randomized Control Trials in the Field of Development: A Critical Perspective provides answers to these questions, explaining how RCTs work, what they can achieve, why they sometimes fail, how they can be improved and why other methods are both useful and necessary. Bringing together leading specialists in the field from a range of backgrounds and disciplines (economics, econometrics, mathematics, statistics, political economy, socioeconomics, anthropology, philosophy, global health, epidemiology, and medicine), it presents a full and coherent picture of the main strengths and weaknesses of RCTs in the field of development. Looking beyond the epistemological, political, and ethical differences underlying many of the disagreements surrounding RCTs, it explores the implementation of RCTs on the ground, outside of their ideal theoretical conditions and reveals some unsuspected uses and effects, their disruptive potential, but also their political uses. The contributions uncover the implicit worldview that many RCTs draw on and disseminate, and probe the gap between the method's narrow scope and its success, while also proposing improvements and alternatives.
Without disputing the contribution of RCTs to scientific knowledge, Randomized Control Trials in the Field of Development warns against the potential dangers of their excessive use, arguing that the best use for RCTs is not necessarily that which immediately springs to mind. Written in plain language, this book offers experts and laypeople alike a unique opportunity to come to an informed and reasoned judgement on RCTs and what they can bring to development.
Table of Contents
General Introduction, Florent Bédécarrats, Isabelle Guérin, and François Roubaud
0: Randomization in the Tropics Revisited: A Theme and Eleven Variations, Sir Angus Deaton
1: Should the Randomistas (Continue to) Rule?, Martin Ravallion
2: Randomizing Development: Method or Madness?, Lant Pritchett
3: The Disruptive Power of RCTs, Jonathan Morduch
4: RCTs in Development Economics, Their Critics, and Their Evolution, Timothy Ogden
5: Reducing the Knowledge Gap in Global Health Delivery: Contributions and Limitations of Randomized Controlled Trials, Andres Garchitorena, Megan Murray, Bethany Hedt-Gauthier, Paul Farmer, and Matthew Bonds
6: Trials and Tribulations: The Rise and Fall of the RCT in the WASH Sector, Dean Spears, Radu Ban, and Oliver Cumming
7: Microfinance RCTs in Development: Miracle or Mirage?, Florent Bédécarrats, Isabelle Guérin, and François Roubaud
8: The Rhetorical Superiority of Poor Economics, Agnès Labrousse
9: Are the 'Randomistas' Evaluators?, Robert Picciotto
10: Ethics of RCTs: Should Economists Care about Equipoise?, Michel Abramowicz and Ariane Szafarz
11: Using Priors in Experimental Design: How Much Are We Leaving on the Table?, Eva Vivalt
12: Epilogue: Randomization and Social Policy Evaluation Revisited, James J. Heckman
Interviews
"star-manning is to not only engage with the most charitable version of your opponent’s argument, but also with the most charitable version of your opponent, by acknowledging their good intentions and your shared desires despite your disagreements. In our UBI example, star-manning would be to amend the steel man with something like, “…and you’re in favor of this because you think it will help people lead safer, freer, and more fulfilled lives—which we both want.” If used properly, star-manning can serve as an inoculant against our venomous discourse and a method for planting disputes on common ground rather than a fault line."
https://centerforinquiry.org/blog/how-to-star-man-arguing-from-compassion/
The IGM Booth survey of economists seems to suggest that there might be a recession next year in the US.
In October 2018, Bostrom published German translations of a compilation of his papers under the title "The Future of Humanity":
Maybe someone could turn this into a sequence?
Some other good papers by him:
The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement
Where are they? Why I hope the search for extraterrestrial life finds nothing
The Evolutionary Optimality Challenge
The question I'd have about "human enhancement" with technology, given that we have very little such technology at present, is: what is one's hard limit to moral goodness (and thus one's "fatedness to the evilness of relative privation of goodness as compared to another"), and how can one reliably determine it?
Gwern.net articles with an importance score of 9 or 10
Ideas for forum:
If we had a tag called "Links" for posts that aren't displayed on the front page, then we could have a "Hackernews"/"Reddit"-style section where people can share, without comment, external links related to EA or that could be discussed in the context of EA. This would be different from current "link posts", which might have a higher (imagined) bar to posting.
Along similar lines, there could be a low-effort way for the current Shortform function to emulate Twitter, where the 'magic' sorting algorithm also takes into account the length of the post.
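A sketch of what a length-aware 'magic' score could look like: a Hacker News-style time-decayed score, further discounted for posts much longer than a tweet, so that short shortforms can surface. All constants and the function itself are made up for illustration; this is not the Forum's actual algorithm.

```python
def shortform_score(karma: int, age_hours: float, length_words: int,
                    gravity: float = 1.8, target_words: int = 50) -> float:
    """HN-style time-decayed score with a length discount (illustrative).

    Posts at or below `target_words` keep their full score; longer
    posts are discounted proportionally, so tweet-length shortforms
    can compete with essays on the frontpage.
    """
    time_decay = (age_hours + 2) ** gravity       # classic HN gravity term
    length_penalty = max(1.0, length_words / target_words)
    return karma / (time_decay * length_penalty)
```

With this shape, a 500-word post needs roughly 10x the karma of a 50-word post to rank equally at the same age.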
I watched Bill Gates' Netflix documentary and wrote down some rough critical thoughts
Likelihood of nuclear winter
Two recent 80k podcasts [1, 2] deal with nuclear winter (EA wiki link). One episode discusses bias in nuclear winter research (link to section in transcript). The modern case for nuclear winter is based on modelling by Robock, Toon, et al. (e.g. see them being acknowledged here). Some researchers have criticized them, suggesting the nuclear winter hypothesis is implausible and that the research is biased and has been instrumentalized for political reasons (e.g. paper, paper, citation trail of recent modelling work out of Los Alamos National Labs, which couldn’t replicate the nuclear winter effect). One recent paper summarizes the disagreements between the different modelling camps. Another paper suggests that nuclear war might also damage the ozone layer.
Related: a new audiobook of 'Hacking the Bomb', on cyber nuclear security.
The Global Catastrophic Risk Management Act of 2022 is a recently proposed bipartisan bill that is going to be voted on in the US. There's another bill on WMDs.
Inspired by Alex Berger talking about donating a kidney on the 80k podcast.
Could one increase kidney donations by subsidizing surgical excess fat removal for donors?
One might be able to remove fat and donate a kidney in one procedure.
Maybe this would raise fewer bioethical objections and make this more tractable.
Giving a kidney in exchange for money sounds less "exploitative" (I don't think it is; I am modeling public perception) than providing a kidney in exchange for beauty surgery, because expectations are more explicit for money than for a beauty surgery.
I think this is basically because
[Years of life lost due to C19]
A recent meta-analysis looks at C-19-related mortality by age groups in Europe and finds the following age distribution:
< 40: 0.1%
40-69: 12.8%
≥ 70: 84.8%
In this spreadsheet model I combine this data with Metaculus predictions to get at the years of life lost (YLLs) due to C19.
I find C19 might cause 6m-87m YLLs (depending heavily on the number of deaths). For comparison, substance abuse causes 13m YLLs and diarrhea causes 85m YLLs.
Countries often spend 1-3x GDP per capita to avert a DALY, and so the world might want to spend $2-8trn to avert C19 YLLs (this could also be a rough proxy for the cost of C19).
One of the many simplifying assumptions of this model is that it excludes disability caused by C19, which might be severe.
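The core arithmetic of a model like this can be sketched in a few lines. The death shares are the meta-analysis figures quoted above; the remaining-life-expectancy numbers per band are illustrative assumptions of mine (the real spreadsheet would use actuarial tables and Metaculus death predictions):

```python
# Share of C19 deaths by age band (from the meta-analysis quoted above)
DEATH_SHARE = {"<40": 0.001, "40-69": 0.128, ">=70": 0.848}

# Average remaining life expectancy per band, in years.
# These are ASSUMED illustrative values, not the spreadsheet's inputs.
REMAINING_YEARS = {"<40": 50.0, "40-69": 25.0, ">=70": 8.0}


def years_of_life_lost(total_deaths: float) -> float:
    """YLLs = sum over age bands of (deaths in band) x (remaining years)."""
    return sum(total_deaths * DEATH_SHARE[band] * REMAINING_YEARS[band]
               for band in DEATH_SHARE)


def willingness_to_pay(ylls: float, gdp_per_capita: float,
                       multiplier: float = 2.0) -> float:
    """Rough spend the world might justify, at 1-3x GDP per capita per DALY."""
    return ylls * gdp_per_capita * multiplier
```

Under these assumed inputs, one million deaths comes out to roughly 10m YLLs, and at a $10k GDP per capita and a 2x multiplier that implies about $200bn of justified spending; the 6m-87m YLL range in the post comes from varying the death count.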
Are the diarrhea and substance abuse numbers annualized? (does diarrhea cost 85 m YLL/yr)
My brain dump "Potential priority areas within cognitive sciences (psychology, neuroscience, and philosophy of mind)"
https://docs.google.com/document/d/12m_KDzKWfwQebrHGN4G4XCVIYPUHrm05Z6SB_loZr5w/edit
Feel free to contribute by making suggested edits!