Some notes from trying out Rethink Priorities' new cross-cause cost-effectiveness model (CCM) from their post, for personal reference:

| Intervention | Cost-effectiveness in DALYs per $1k (90% CI) | % of simulation results with positive / negative / no effect | Cost-eff. under alternative risk-aversion profiles and weighting schemes (weighted DALYs per $1k, min to max) |
|---|---|---|---|
| Portfolio of biorisk projects ($15-30M budget, 60% chance of no effect, 70% of effects positive) | 132 (middle 99.9% of expected utility is 0) | >99.9% no effect | 0 to 132 |
| Nanotech safety megaproject ($10-30M budget, 90% chance of no effect, 70% of effects positive) | 73 (middle 99.9% of EU is 0) | >99.9% no effect | -10 to 73 |
| AI misalignment megaproject ($8-28B budget, 97.3% chance of no effect, 70% of effects positive) | 154 (middle 99.9% of EU is 27, middle 99% is 0) | >99.6% no effect | -56 to 154 |
Some things that jumped out at me (caveating that I don't work in any of these areas):

- I'm a little surprised that only chicken campaigns are modeled as clearly higher EV (OOM-wise) than the GHD interventions considered good by GW & OP's lights, while interventions for other nonhuman animals fall short.
- I'm also surprised that chickens > all other nonhuman animals on both EV and p(+ve simulation outcome). There's some discussion that seems to indicate cage-free work is much lower EV now than previously, although I'm not sure that changes the takeaway (and in any case funding prioritization shouldn't be purely EV-based).
- I'm surprised yet again that a >$10B AI misalignment megaproject is modeled as having no effect in >99.6% of simulations. I probably hadn't internalized the 'hits' in 'hits-based giving' as well as I should have, since my earlier gut intuition (based on no data whatsoever) was that a near-Manhattan-scale megaproject would surely have some effect in >10% of possible worlds (see the sketch after this list).
- I didn't expect the model to say chickens > misaligned AI, unsafe nanotech, and biorisk from a risk-neutral EV perspective. That said, the x-risk inputs are in some sense just placeholders, so I don't put much weight on this.
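To check that intuition, here's a minimal Monte Carlo sketch of a hits-based payoff distribution (the payoff sizes and probabilities below are made up for illustration and are not the CCM's actual parameters): almost every draw is zero, yet the rare positive hits are large enough that the mean stays high.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # Monte Carlo draws

# Illustrative, made-up parameters (not the CCM's inputs):
p_no_effect = 0.996             # chance a draw has no effect at all
p_positive_given_effect = 0.7   # of draws with an effect, share that are positive
hit_size = 60_000               # DALYs averted per $1k if the project 'hits'
backfire_size = -20_000         # DALYs per $1k if it backfires

has_effect = rng.uniform(size=n) > p_no_effect
is_positive = has_effect & (rng.uniform(size=n) < p_positive_given_effect)
is_negative = has_effect & ~is_positive

payoff = np.zeros(n)
payoff[is_positive] = hit_size
payoff[is_negative] = backfire_size

print(f"share of draws with no effect: {np.mean(payoff == 0):.4f}")  # ~0.996
print(f"mean DALYs per $1k: {payoff.mean():.0f}")                    # ~144 despite mostly-zero draws
```

Risk-neutral EV counts those rare hits at full weight, which is presumably why the risk-averse weightings in the table above can fall all the way to zero or below for the same intervention.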
In any case, I'd be curious to see how the CCM is taken into consideration by funders and other stakeholders going forward.
I'm curious what people who're more familiar with infinite ethics think of Manheim & Sandberg's What is the upper limit of value?, in particular where they discuss infinite ethics (emphasis mine):
> Bostrom’s discussion of infinite ethics is premised on the moral relevance of physically inaccessible value. That is, it assumes that aggregative utilitarianism is over the full universe, rather than the accessible universe. This requires certain assumptions about the universe, as well as being premised on a variant of the incomparability argument that we dismissed above, but has an additional response which is possible, presaged earlier. Namely, we can argue that this does not pose a problem for ethical decision-making even using aggregative ethics, because the consequences of any ethical decision can have only a finite (difference in) value. This is because the value of a moral decision relates only to the impact of that decision. Anything outside of the influenced universe is not affected, and the arguments above show that the difference any decision makes is finite.
I first read their paper a few years ago and found their arguments for the finiteness of value persuasive, as well as their collectively-exhaustive responses in section 4 to possible objections. So ever since then I've been admittedly confused by claims that the problems of infinite ethics still warrant concern w.r.t. ethical decision-making (e.g. I don't really buy Joe Carlsmith's arguments for acknowledging that infinities matter in this context, same for Toby Ord's discussion in a recent 80K podcast). What am I missing?

Sandberg's recent 80K podcast interview transcript has this quote:

> Rob Wiblin: OK, so the argument is something like valuing is a process that requires information to be encoded, and information to be processed — and there are just maximum limits on how much information can be encoded and processed given a particular amount of mass and given a finite amount of mass and energy. So that ultimately is going to set the limit on how much valuing can be done physically in our universe. No matter what things we create, no matter what minds we generate, there’s going to be some finite limit there. That’s basically it?
>
> Anders Sandberg: That’s it. In some sense, this is kind of trivial. I think some readers would no doubt feel almost cheated, because they wanted to know that metaphysical limit for value, and we can’t say anything about that. But it seems very likely that if value has to have to do with some entity that is doing the valuing, then there is always going to be this limit — especially since the universe is inconveniently organised in such a way that we can’t get hold of infinite computational power, as far as we know.
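As a toy illustration of the sort of physical limit Sandberg is gesturing at, here's a back-of-the-envelope sketch using the Bekenstein bound (my own illustration, not taken from the paper; the radius and mass figures are rough order-of-magnitude assumptions for the observable universe):

```python
import math

# Bekenstein bound on the information content of a region of radius R containing energy E:
#   I <= 2 * pi * R * E / (hbar * c * ln 2)   [bits]
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 3.0e8          # speed of light, m/s
R = 4.4e26         # rough radius of the observable universe, m (assumed)
M = 1.5e53         # rough ordinary-matter mass of the observable universe, kg (assumed)
E = M * c**2       # mass-energy, J

bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
print(f"Bekenstein bound: ~10^{math.log10(bits):.0f} bits")  # on the order of 10^123
```

Whatever the exact figure, the point relevant to Manheim & Sandberg's argument is just that any such bound is finite, so the difference any decision can make to total value is finite too.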
I just learned about Tom Frieden via Vadim Albinsky's writeup Resolve to Save Lives Trans Fat Program for Founders Pledge. His impact in sheer lives saved is astounding, and I'm embarrassed I didn't know about him before:
> The CEO of RTSL, Tom Frieden, likely prevented tens of millions of deaths by creating an international tobacco control initiative in a prior role that may have been much more cost effective than most of our top recommended charities. ...
>
> We believe that by leveraging his influence with governments, and the relatively low cost of advocating for regulations to improve health, Tom Frieden has the potential to again save a vast number of lives at a low cost.
How many more? Albinsky estimates:
> RTSL is aiming to save 94 million lives over 25 years by advocating for countries to implement policies to reduce non-communicable diseases. We believe the industrially-produced trans fat elimination program is the most cost-effective of their initiatives. ... Even after very conservative discounts to RTSL’s impact projections we estimate this program to be more cost effective than most of our top global health and development recommendations.
Tangentially, if a "Borlaug" is a billion lives saved, then Frieden's impact is probably on the scale of ~100 milliBorlaugs (to the nearest OOM). Bill and Melinda Gates have likely had a similar impact. This makes me wonder who else I don't know about who's done ~100 milliBorlaugs of good.

(It's arguably unfair to attribute all those lives saved wholly to Frieden, and I'm honestly unsure what credit attribution makes most sense, but if you apply the same logic to Borlaug, you can no longer really say he saved a billion lives.)
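The milliBorlaug figure is just a unit conversion; here's the trivial arithmetic, where the ~50M midpoint for "tens of millions" is my own guess and the 94M figure is RTSL's stated 25-year goal rather than a realized outcome:

```python
BORLAUG = 1_000_000_000  # 1 Borlaug = one billion lives saved

def milli_borlaugs(lives_saved: float) -> float:
    """Convert a number of lives saved into milliBorlaugs."""
    return lives_saved / BORLAUG * 1_000

print(f"{milli_borlaugs(50e6):.0f} milliBorlaugs")  # ~50: 'tens of millions' from tobacco control
print(f"{milli_borlaugs(94e6):.0f} milliBorlaugs")  # ~94: RTSL's 25-year goal, if fully achieved and attributed
```

Both land at ~100 milliBorlaugs only when rounded to the nearest order of magnitude.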