
Thomas Kwa

Researcher @ MATS/Independent
3071 karma · Joined Feb 2020 · Working (0-5 years) · Berkeley, CA, USA

Bio

Mechinterp researcher under Adrià Garriga-Alonso.

Participation: 4
Comments: 249

[Warning: long comment] Thanks for the pushback. I think converting to lives is good in other cases, especially if it's (a) useful for judging effectiveness, and (b) not used as a misleading rhetorical device [1].

The basic point I want to make is that all interventions have to pencil out. When donating, we are trying to maximize the good we create, not decide which of the strategies "empower beneficiaries to invest in their communities' infrastructure" and "use RCTs to choose lifesaving interventions" sounds better on the surface [2]. Lives are at stake, and I don't think those lives matter less simply because it's harder to put names and faces to the ~60 lives saved when each malaria net reduces the chance of a malaria death by something like 0.04%. Of course this applies equally to the Wytham Abbey purchase or anything else. But to point (a), we actually can compare the welfare gain from 61 lives saved to the economic security produced by this project. GiveWell has weights for a doubling of consumption, partly based on interviews with Africans [3]. With other projects, this might be intractable due to entirely different cause areas or different moral preferences, e.g. longtermism.
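
As a toy illustration of the kind of comparison I mean (the moral weights below are placeholders, not GiveWell's actual published numbers):

```python
# Toy break-even comparison using placeholder GiveWell-style moral weights;
# the real weights should come from GiveWell's published figures.
value_per_death_averted = 100        # assumed units for averting one death
value_per_consumption_doubling = 1   # assumed units per person-year of doubled consumption

lives_saved_by_nets = 61
nets_value = lives_saved_by_nets * value_per_death_averted

# Person-years of doubled consumption the plant would need to generate
# to match the nets, under these placeholder weights.
break_even_person_years = nets_value / value_per_consumption_doubling
print(break_even_person_years)  # 6100.0 under these placeholder weights
```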

Imagine that we have a cost-effectiveness analysis made by a person with knowledge of local conditions and local moral preferences, domain expertise in East African agricultural markets, and the quantitative skill of GiveWell analysts. If it comes out that one intervention is 5 or 10 times better than the other, as is very common, we need a very compelling reason why some consideration was missed to justify funding the other one. Compare this to our current, almost complete ignorance of the value of building this plant, and you see the value of the numbers. We might not get a CEA this good, but we should get close, as we have all the pieces.

As to point (b), I am largely in favor of making these comparisons in most cases, just to remind people of the value of our resources. But I feel like the Wytham and HPMOR cases, depending on phrasing, could exploit people's tendency to think of projects that save lives in emotionally salient ways as better than projects that save lives by less direct methods. It will always sound bad to say that intervention A is being funded rather than saving X lives, and we should generally not shut down discussion of A by stoking indignation. This kind of misleading rhetoric is not at all my intention; we all understand that giving a large enough number of farmers access to sorghum markets can produce more welfare than preventing 61 deaths from malaria. We have the choice between saving 61 of someone's sons and daughters, and allowing X extremely poor people to perhaps buy metal roofs, send their children to school, and generally have some chance of escaping a millennia-long poverty trap. We should think: "I really want to know how large X is".

[1] and maybe (c) not bad for your mental health?

[2] Unless you believe empowering people is inherently better regardless of the relative cost, which I strongly disagree with.

[3] This is important: Westerners may be biased here because we place different relative values on saving a life versus doubling consumption. But these interviews were conducted in Kenya and Ghana, so Uganda's weights may differ slightly.

Just to remind everyone, 339,000 GBP in malaria nets is estimated by GiveWell to save around 61 lives, mostly of young children. A 25% difference in effectiveness in either direction is therefore about 15 lives. A cost-effectiveness analysis is definitely required given what is at stake, even if the complexities of this project mean it is not taken as final.
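
For concreteness, the arithmetic behind those figures, using only the numbers cited above:

```python
# Back-of-envelope arithmetic from the GiveWell figures cited above.
budget_gbp = 339_000
lives_saved = 61

cost_per_life_gbp = budget_gbp / lives_saved   # roughly GBP 5,600 per life
lives_per_25pct_swing = 0.25 * lives_saved     # roughly 15 lives either way

print(round(cost_per_life_gbp), round(lives_per_25pct_swing))  # 5557 15
```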

Thanks. In addition to lots of general information about FTX, this helps answer some of my questions: it seems likely that FTX/Alameda were never massively profitable except for large bets on unsellable assets (does anyone have better information on this?), and even though they had large revenues, much of that may have been spent dubiously by SBF. The various actions needed to maintain a web of lies indicate that Caroline Ellison and Nishad Singh, and very likely Gary Wang and Sam Trabucco (who dropped off the face of the earth around the time of the bankruptcy [1]), were complicit in fraud severe and obvious enough that any moral person (possibly even a hardcore utilitarian, if it was true that FTX was consistently losing money) should have quit or leaked evidence of it.

Four or five people is very different from a single bad actor, and this almost confirms for me that FTX belongs on the list of ways EA and rationalist organizations can basically go insane in harmful ways, alongside Leverage, the Zizians, and possibly others. It is not clear, though, that FTX experienced a specifically EA failure mode rather than the very common one in which power corrupts.

I think someone should do an investigation much wider in scope than what happened at FTX, covering the entire causal chain from SBF first talking to EAs at MIT to the damage done to EA. Here are some questions I'm particularly curious about:

  • Did SBF show signs of dishonesty early on at MIT? If so, why did he not have a negative reputation among the EAs there?
  • To what extent did EA "create SBF"-- influence the values of SBF and others at FTX? Could a version of EA that placed more emphasis on integrity, diminishing returns to altruistic donations, or something else have prevented FTX?
  • Alameda was started by various traders from Jane Street, many of them EAs. Did they do this despite concerns about how the company would be run, and were they right to leave at the time?
  • [edited to add] I have heard that Tara Mac Aulay and others left Alameda in 2018. Mac Aulay claims this was "in part due to concerns over risk management and business ethics". Do they get a bunch of points for this? Why did this warning not spread, and can we even spread such warnings without overloading the community with gossip even more than it is?
  • Were Alameda/FTX ever highly profitable, controlling for the price of crypto? (edit: this is not obvious; it could be that FTX's market share was due to artificially tight spreads created by money-losing trades from Alameda). How should we update on the overall competence of companies with lots of EAs?
  • SBF believed in linear returns to altruistic donations (I think he said this on the 80k podcast), unlike most EAs. Did this cause him to take on undue risk, or would fraud have happened even if FTX had held a view on altruistic returns similar to that of OP or SFF, while keeping linear moral views?
  • What is the cause of the exceptionally poor media perception of EA after FTX? When I search for "effective altruism news", around 90% of the articles I can find are negative and none are positive, including many with extremely negative opinions unrelated to FTX. One would expect at least some article saying "Here's why donating to effective causes is still good". (In no way do I want to diminish the harms done to customers whose money was gambled away, but it seems prudent to investigate the harms to EA per se.)

My guess is that this hasn't been done simply because it's a lot of work (perhaps 100 interviews and one person-year of work), no one thinks it's their job, and conducting such an investigation would require someone to both speak for the entire EA movement and criticize powerful people and organizations.

See also: Ryan Carey's comment

2-year update on infant outreach

To our knowledge, there have been no significant infant outreach efforts in the past two years. We are deeply saddened by this development, because by now there could have been two full generations of babies, including community builders who would go on to attract even more talent. However, one silver lining is that no large-scale financial fraud has been committed by EA infants.

We think the importance of infant outreach is higher than ever, and still largely endorse this post. However, given FTX events, there are a few changes we would make, including a decreased focus on ambition and especially some way to select against sociopathic and risk-seeking infants. We tentatively propose that future programs favor infants who share their toys, are wary of infants who take others' toys without giving them back, and never support infants who, when playing with blocks, try to construct tall towers that have high risk of collapse.

This post is important and I agree with almost everything it says, but I do want to nitpick one crucial sentence:

There may well come a day when humanity would tear apart a thousand suns in order to prevent a single untimely death.

I think it is unlikely that we should ever pay the price of a thousand suns to prevent one death, because tradeoffs will always exist. The same resources used to prevent that death could support trillions upon trillions of sentient beings at utopic living standards for billions of years, either biologically or in simulation. The only circumstances where I think such a decision would be acceptable are things like

  • The "person" we're trying to save is actually a single astronomically vast hivemind/AI/etc that runs on a star-sized computer and is worth that many resources.
  • Our moral views at the time dictate that preventing one death now is at least fifteen orders of magnitude more important than extending another being's life by a billion years.
  • The action is symbolic, like how in The Martian billions of dollars were spent to save Mark Watney, rather than driven by cause prioritization.

Otherwise, we are always in triage and always will be, and while prices may fluctuate, we will never be rich enough to get everything we want.

My study of the monkeys and infants, i.e. my analysis of past wars, suggested an annual extinction risk from wars of 6.36*10^-14, which is still 1.07 % (= 5.93*10^-12/(5.53*10^-10)) of my best guess.

The fact that one model of one process gives a low number doesn't mean the true number is within a couple of orders of magnitude of it. Modeling mortgage-backed security risk in 2007 with a Gaussian copula gave astronomically low default estimates, something like 10^-200, even though the securities did in fact default and helped cause the financial crisis. If the bankers had adjusted their estimate upward to 10^-198, it would still have been wrong.

IMO it is not really surprising for very nearly 100% of the risk of something to come from unmodeled risks when the modeled risk is extremely low. Say I write some code to generate random digits, and the first 200 outputs are zeros. One might estimate this at 10^-200 probability, or adjust upward to 10^-198, but the probability of this happening is far more than 10^-200 due to bugs.
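
A minimal sketch of that point, with made-up numbers for the chance of a bug:

```python
# Why unmodeled failure modes dominate once the modeled probability is tiny.
# The bug numbers below are made up purely for illustration.
p_within_model = 1e-200    # chance of 200 zeros if the RNG code is correct
p_bug = 1e-6               # assumed chance the code has a bug
p_zeros_given_bug = 1e-2   # assumed chance a bug yields exactly this output

p_total = (1 - p_bug) * p_within_model + p_bug * p_zeros_given_bug
print(p_total)  # ~1e-8: essentially all of the probability comes from the bug term
```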

Don't have time to reply in depth, but here are some thoughts:

  • If a risk estimate is used for EA cause prio, it should be our betting odds / subjective probabilities, that is, an average over our epistemic uncertainty. If from our point of view a risk is 10% likely to be >0.001%, and 90% likely to be ~0%, this lower bounds our betting odds at 0.0001% (see the sketch after this list). It doesn't matter that the risk is more likely to be ~0%.
  • Statistics of human height are much better understood than nuclear war because we have billions of humans but no nuclear wars. The situation is more analogous to finding the probability of a 10 meter tall adult human when you have only ever observed a few thousand monkeys (conventional wars) plus one human infant (WWII), while also knowing that every few individuals the species mutates into an entirely new one (technological progress).
  • It would be difficult to create a model suggesting a much higher risk because most of the risk comes from black swan events. Maybe one could upper bound the probability by considering huge numbers of possible mechanisms for extinction and ruling them out, but I don't see how you could get anywhere near 10^-12.
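
As a minimal sketch of the first bullet (the numbers are the hypothetical 10%/90% split from that bullet):

```python
# Betting odds average over epistemic uncertainty about the true risk
# (the 10% / 90% split is the hypothetical one from the first bullet).
p_high, risk_if_high = 0.10, 1e-5   # 10% credence the risk is >0.001%
p_low, risk_if_low = 0.90, 0.0      # 90% credence the risk is ~0%

betting_odds = p_high * risk_if_high + p_low * risk_if_low
print(betting_odds)  # 1e-06, i.e. 0.0001%: a floor set entirely by the 10% branch
```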

Any probability as low as 5.93*10^-12 for something as difficult to model as the effects of nuclear war on human society seems extremely overconfident to me. Can you really make 1/(5.93*10^-12), i.e. around 170 billion, predictions about independent topics and expect to be wrong only once? Are you 99.99% [edit: fixed this number] sure that there is no unmodeled set of conditions under which civilizational collapse occurs quickly, which a nuclear war is at least 0.001% likely to cause? I think the minimum probability one should assign, given these considerations, is not much lower than the superforecasters' numbers.
