Thanks for the follow up, Matthew! Strongly upvoted.
My best guess is also that additional GHG emissions are bad for wild animals, but it has very low resilience, so I do not want to advocate for conservationism. My views on the badness of the factory-farming of birds are much more resilient, so I am happy with people switching from poultry to beef, although I would rather have them switch to plant-based alternatives. Personally, I have been eating plant-based for 5 years.
Moreover, as Clare Palmer argues
Just flagging that this link seems broken.
...I think you have
Nice points, Matthew!
(a) It wasn't clear to me that the estimate of global heating damages was counting global heating damages to non-humans.
I have now clarified my estimate of the harms of GHG emissions only accounts for humans. I have also added:
...I estimated the scale of the welfare of wild animals is 4.21 M times that of farmed animals. Nonetheless, I have neglected the impact of GHG emissions on wild animals due to its high uncertainty. According to Brian Tomasik:
“On balance, I’m extremely uncertain about the net impact of climate change on
Can you give an example of what might count as "spending to save lives in wars 1k times as deadly" in this context?
For example, if one was comparing wars involving 10 k or 10 M deaths, the latter would be more likely to involve multiple great powers, in which case it would make more sense to improve relationships among NATO, China and Russia.
...Thinking about the amounts we might be willing to spend on interventions that save lives in 100-death wars vs 100k-death wars, it intuitively feels like 251x is a way better multiplier than 63,000. So where am I going
Thanks for tagging me, Johannes! I have not read the post, but in my mind one should overwhelmingly focus on minimising animal suffering in the context of food consumption. I estimate the harm caused to farmed animals by the annual food consumption of a random person is 159 times that caused to humans by their annual GHG emissions.
Fig. 4 of Kuruc 2023 is relevant to the question. A welfare weight of 0.05 means that one values 0.05 units of welfare in humans as much as 1 unit of welfare in animals, and it would still require a social cost of carbon of over ...
Vasco, I've quickly read the post to which the first link leads, so please correct me if I'm wrong. However, it left me wondering about two things:
(a) It wasn't clear to me that the estimate of global heating damages was counting global heating damages to non-humans. The references to DALYs and 'climate change affecting more people with lower income' lead me to suspect you're not. But non-humans will surely be the vast majority of the victims of global heating--as well as, in some cases, its beneficiaries. While Timothy Chan is quite right to point ...
Thanks for the comment, Stan!
Using PDF rather than CDF to compare the cost-effectiveness of preventing events of different magnitudes here seems off.
Technically speaking, the way I modelled the cost-effectiveness:
Using the CDF makes sense for the former, but the PDF is adequate for the latter.
...You show that preventing (say) all potential wars next year with a death tol
By "pre- and post-catastrophe population", I meant the population at the start and end of a period of 1 year, which I now also refer to as the initial and final population.
I guess you are thinking that the period of 1 year I mention above is one over which there is a catastrophe, i.e. a large reduction in population. However, I meant a random unconditioned year. I have now updated "period of 1 year" to "any period of 1 year (e.g. a calendar year)". Population has been growing, so my ratio between the initial and final population will have a high chance of being lower than 1.
Oh, I didn't mean for you to define the period explicitly as a fixed interval. I assume this can vary by catastrophe. Like maybe population declines over 5 years with massive crop failures. Or, an engineered pathogen causes massive population decline in a few months.
Hi @MichaelStJules, I am tagging you because I have updated the following sentence. If there is a period longer than 1 year over which population decreases, the power laws describing the ratio between the initial and final population of each of the years following the 1st could have diff...
I think that the risk of human extinction over 1 year is almost all driven by some powerful new technology (with residues for the wilder astrophysical disasters, and the rise of some powerful ideology which somehow leads there). But this is an important class! In general dragon kings operate via something which is mechanically different than the more tame parts of the distribution, and "new technology" could totally facilitate that.
To clarify, my estimates are supposed to account for unknown unknowns. Otherwise, they would be many orders of magnitude lower....
Thanks for the comment, David! I agree all those effects could be relevant. Accordingly, I assume that saving a life in catastrophes (periods over which there is a large reduction in population) is more valuable than saving a life in normal times (periods over which there is a minor increase in population). However, it looks like the probability of large population losses is sufficiently low to offset this, such that saving lives in normal times is more valuable in expectation.
Thanks for clarifying! I agree B) makes sense, and I am supposed to be doing B) in my post. I calculated the expected value density of the cost-effectiveness of saving a life from the product between:
if you're primarily trying to model effects on extinction risk
I am not necessarily trying to do this. I intended to model the overall effect of saving lives, and I have the intuition that saving a life in a catastrophe (period over which there is a large reduction in population) conditional on it happening is more valuable than saving a life in normal times, so I assumed the value of saving a life increases with the severity of the catastrophe. One can assume preventing extinction is especially important by selecting a higher value for ("the el...
Thanks for the critique, Owen! I strongly upvoted it.
I'm worried that modelling the tail risk here as a power law is doing a lot of work, since it's an assumption which makes the risk of very large events quite small (especially since you're taking a power law in the ratio
Assuming the PDF of the ratio between the initial and final population follows a loguniform distribution (instead of a power law), the expected value density of the cost-effectiveness of saving a life would be constant, i.e. it would not depend on the severity of the catastrophe. However,...
I'm confused by some of the set-up here. When considering catastrophes, your "cost to save a life" represents the cost to save that life conditional on the catastrophe being due to occur? (I'm not saying "conditional on occurring" because presumably you're allowed interventions which try to avert the catastrophe.)
My language was confusing. By "pre- and post-catastrophe population", I meant the population at the start and end of a period of 1 year, which I now also refer to as the initial and final population. I have now clarified this in the post.
I assume ...
Thanks for all your comments, Owen!
That paper was explicitly considering strategies for reducing the risk of human extinction.
My expected value density of the cost-effectiveness of saving a life, which decreases as catastrophe severity increases, is supposed to account for longterm effects like decreasing the risk of human extinction.
Thanks for the comment, Michael!
Also, to be clear, this is supposed to be ~immediately pre-catastrophe and ~immediately post-catastrophe, right? (Catastrophes can probably take time, but presumably we can still define pre- and post-catastrophe periods.)
I have updated the post changing "pre- and post-catastrophe population" to "population at the start and end of a period of 1 year", which I now also refer to as the initial and final population.
You're modelling the cost-effectiveness of saving a life conditional on catastrophe here, right?
No. It is supposed ...
Hello again Lizka,
When you’re voting, don't do the following:
- “Mass voting” on many instances of a user’s content simply because it belongs to that user
- Using multiple accounts to vote on the same post or comment
We will almost certainly ban users if we discover that they've done one of these things.
Relatedly, I was warned a few days ago that the moderation system notified the EA Forum team that I had voted on another user's comments with concerningly high frequency. I wonder whether this may be a false positive for 2 reasons:
Hi Lizka,
Have you considered running a survey to get a better sense of the voting norms users are following?
To be clear: what I'm interested in here is human extinction (not any broader conception of "existential catastrophe"), and the bet is about that.
Agreed.
On the question of priors, I liked AGI Catastrophe and Takeover: Some Reference Class-Based Priors. It is unclear to me whether extinction risk has increased in the last 100 years. I estimated an annual nuclear extinction risk of 5.93*10^-12, which is way lower than the prior for wild mammals of 10^-6.
Would be interested to see your reasoning for this, if you have it laid out somewhere.
I have not engaged so much with AI risk, but my views about it are informed by considerations in the 2 comments in this thread. Mammal species usually last 1 M years, and I am not convinced by arguments for extinction risk being much higher (I would like to see a detailed quantitative model), so I start from a prior of 10^-6 extinction risk per year. Then I guess the risk is around 10 % as high as that because humans currently have tight control of AI development.
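The prior described above can be sketched in a few lines (the 1 M year species lifetime and the 10 % adjustment are the figures from the comment; the constant-hazard reading of "species usually last 1 M years" is my assumption):

```python
# Constant-hazard prior for annual human extinction risk, assuming a
# typical mammal species lifetime of ~1 M years.
species_lifetime_years = 1e6
prior_annual_risk = 1 / species_lifetime_years  # 10^-6 per year

# Guessed adjustment: risk ~10 % as high as the prior, given that humans
# currently have tight control of AI development.
adjusted_annual_risk = 0.1 * prior_annual_risk  # ~10^-7 per year
print(prior_annual_risk)  # 1e-06
```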
...Is it ma
Thanks! Could you also clarify where your house is, whether you live there or elsewhere, and how much cash you expect to have by the end of 2027 (feel free to share the 5th percentile, median and 95th percentile)?
Thanks for following up, Greg! Strongly upvoted. I will try to understand how I can set up a contract describing the bet with your house as collateral.
Could you link to the post on X you mentioned?
I will send you a private message with Bryan's email.
Definitely seek legal advice in the country and subdivision (e.g., US state) where Greg lives!
You may think of this as a bet, but I'll propose an alternative possible paradigm: it may be a plain old promissory note backed by a mortgage. That is, a home-equity loan with an unconditional balloon payment in five years. Don't all contracts in which one party must perform in the future include a necessarily implied clause that performance is not necessary in the event that the human race goes extinct by that time? At least, I don't plan on performing any of m...
Grantees are obviously welcome to do this.
Right, but they have not been doing it. So I assume EA Funds would have to at least encourage applicants to do it, or even make it a requirement for most applications. There can be confidential information in some applications, but, as you said below, applicants do not have to share everything in their public version.
That said, my guess is that this will make the forum less enjoyable/useful for the average reader, rather than more.
I guess the opposite, but I do not know. I am mostly in favour of experimenting with a few applications, and then deciding whether to stop or scale up.
We've started working on this [making some applications public], but no promises. My guess is that making public the rejected applications is more valuable than accepted ones, eg on Manifund. Note that grantees also have the option to upload their applications as well (and there are fewer privacy concerns if grantees choose to reveal this information).
Manifund already has quite a good infrastructure for sharing grants. However, have you considered asking applicants to post a public version of their applications on EA Forum? People who prefer to remain anonym...
Nice discussion, Owen and titotal!
But it doesn't make sense to me to analogise it to a risk in putting up a sail.
I think this depends on the timeframe. Over a longer one, looking into the estimated destroyable area by nuclear weapons, nuclear risk looks like a transition risk (see graph below). In addition, I think the nuclear extinction risk has decreased even more than the destroyable area, since I believe greater wealth has made society more resilient to the effects of nuclear war and nuclear winter. For reference, I estimated the current annual nuclear...
Hi JP,
Minor. On the messages page, the screen is currently split into two panes, with my past conversations on the left, and the one I am focussing on on the right. I would rather have an option to expand the pane on the right such that I do not see the conversations pane on the left, or an option to hide the conversations pane on the left.
If the point is donor oversight/evaluation/accountability, then I am hesitant to give the grantmakers too much information ex ante on which grants are very likely/unlikely to get the public writeup treatment.
Great point! I had not thought about that. On the other hand, I assume grantmakers are already spending more time on assessing larger grants. So I wonder whether the distribution of the granted amount is sufficiently heavy-tailed for grantmakers to be influenced to spend too much time on them due to their higher chance of being selected for having long...
Caleb and Linch randomly selected grants from each group.
I think your procedure to select the grants was great. However, would it become even better by making the probability of each grant being selected proportional to its size? In theory, donors should care about the impact per dollar (not impact per grant), which justifies weighting by grant size. This may matter because there is significant variation in grant size. The 5th and 95th percentile amount granted by LTFF are 2.00 k$ and 169 k$, so, especially if one is picking just a few grants as you did (as...
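Size-weighted selection of this kind is straightforward to implement; a minimal sketch with made-up grant amounts (the names and dollar figures are illustrative, not real LTFF grants):

```python
import random

# Hypothetical grant amounts in $ (illustrative only).
grants = {"grant_A": 2_000, "grant_B": 25_000, "grant_C": 169_000}

# Probability of selection proportional to grant size, so expected
# scrutiny per dollar is the same across small and large grants.
names = list(grants)
weights = [grants[n] for n in names]
picked = random.choices(names, weights=weights, k=1)[0]
print(picked)  # most often "grant_C", since it accounts for most of the dollars
```

Note that `random.choices` samples with replacement; for picking several distinct grants, a weighted without-replacement scheme (e.g. `numpy.random.choice` with `replace=False`) would be needed.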
I'm late to the discussion, but I'm curious how much of the potential value would be unlocked -- at least for modest size / many grants orgs like EA Funds -- if we got a better writeup for a random ~10 percent of grants (with the selection of the ten percent happening after the grant decisions were made).
Great suggestion, Jason! I think that would be over 50 % as valuable as detailed write-ups for all grants.
Actually, the grants which were described in this post on the Long-Term Future Fund (LTFF) and this one on the Effective Altruism Infrastructure Fund (EAI...
On the nitpick: After reflection, I'd go with a mixed approach (somewhere between even odds and weighted odds of selection). If the point is donor oversight/evaluation/accountability, then I am hesitant to give the grantmakers too much information ex ante on which grants are very likely/unlikely to get the public writeup treatment. You could do some sort of weighted stratified sampling, though.
I think grant size also comes into play on the detail level of the writeup. I don't think most people want more than a paragraph, maximum, on a $2K grant. I'd hope f...
Hi Elizabeth,
I think mentioning CE may have distracted from the main point I wanted to convey. 1 paragraph or sentence is not enough for the public to assess the cost-effectiveness of a grant.
I think downvoting comments like the above is harmful:
For what it's worth, I upvoted and disagree-voted, because I think you're wrong and because you clearly put thought and effort into your writing, and produced the sort of content I think we should generally have more of, even though I'm annoyed locally that "don't do either" is a much easier comment to write than "here's the analysis you asked for", leading to the only serious comments on the post being people stating your view.
Thanks for the analysis, Hauke! I strongly upvoted it.
The mean "CCEI's effect of shifting deploy$ to RD&D$" of 5 % you used in UseCarlo is 12.5 (= 0.05/0.004) times the mean of 0.4 % in your Guesstimate model. Which one do you stand by? Since you say "CCEI is part of a much smaller coalition of only hundreds of key movers and shakers", the smaller effect of 0.4 % (= 1/250) would be more appropriate assuming the same contribution for each member of such coalition.
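The two figures above can be checked quickly (the 5 % and 0.4 % means and the coalition size of 250 are the values from the comment):

```python
# Ratio between the two estimates of CCEI's effect of shifting
# deploy$ to RD&D$.
usecarlo_mean = 0.05      # 5 %, used in UseCarlo
guesstimate_mean = 0.004  # 0.4 %, from the Guesstimate model

print(round(usecarlo_mean / guesstimate_mean, 6))  # 12.5

# Equal contribution across a coalition of ~250 key movers and shakers.
print(1 / 250)  # 0.004, i.e. 0.4 %
```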
I think you had better estimate the expected cost-effectiveness in t/$ instead...
Hi Saul,
I assume Open Philanthropy (OP) has built quantitative models which estimate GCR, but probably just simple ones, as I would expect a model like Tom's to be published. There may be concerns about information hazards in the context of bio risk, but OP had an approach to quantify it while mitigating them:
...A second, less risky approach is to abstract away most biological details and instead consider general ‘base rates’. The aim is to estimate the likelihood of a biological attack or accident using historical data and base rates of analogous scenarios, an
Thanks for the comment, Ryan. I agree that report by Joseph Carlsmith is quite detailed. However, I do not think it is sufficiently quantitative. In particular, the probabilities which are multiplied to obtain the chance of an existential catastrophe are directly guessed, as opposed to resulting from detailed modelling (in contrast to the AI takeoff speeds calculated in Tom's report). Joseph was mostly aiming to qualitatively describe the arguments, as opposed to quantifying the risk:
...My main hope, though, is not to push for a specific number, but rather to
Thanks for following up!
Cool!
I looked at the study and it's not about Belgian hospitals, so it doesn't really apply to me.
Even if there is no direct nearterm financial cost, you could plausibly use the time saved by not donating a kidney to generate at least 1.05 $? For example, I guess the cost to your parents would be higher than this, so they might be happy to donate a few dollars to THL for you not to donate a kidney. Even if not now, the time you save may also increase your income by more than 1.05 ...
Thanks for your willingness to contribute to a better world, Bob!
Have you considered not donating either of those, and instead supporting the best animal welfare interventions?
Hi Vasco,
I already do work for an animal welfare organization. I looked at the study and it's not about Belgian hospitals, so it doesn't really apply to me. Some of the listed costs aren't present (I don't have a wage so no wage loss), those that are present are mostly paid for by the state (travel, accommodation, medical...) and those that aren't are paid for by my parents (housework). The only one that applies is "Small cash payments for grocery items (eg, tissue paper)" which is negligible, so the expected DALY per dollar is extremely high.
In Belgium yo...
Thanks for sharing, Conrad!
This project was completed as part of contract work with Open Philanthropy
I wonder whether Open Philanthropy (OP) should have commissioned an analysis like yours much sooner. More importantly, I am a little confused about why OP would want to know how much is being spent on biosecurity & pandemic preparedness at this stage. Neglectedness may be a good heuristic to identify promising areas at an early stage, but OP has now granted 191 M$ to interventions in that area, according to their grants database on 17 February 2024. So ...
(I do wonder if there's an effect where because we communicate our overall views so much, we become a more obvious/noticeable target to criticize.)
To be clear, the criticisms I make in the post and comments apply to all grantmakers I mentioned in the post except for CE.
Well, I haven't read CE's reports. Have you?
I have skimmed some, but the vast majority of my donations have been going to AI safety interventions (via LTFF). I may read CE's reports in more detail in the future, as I have been moving away from AI safety to animal welfare as the most promisin...
Thanks for the post, Emre!
“We would never ask child abusers to commit less child abuse, so we can’t ask other people to reduce their animal product consumption. We must ask them to end it.”
I would ask whatever more cost-effectively decreases child abuse. If child abuse was as prevalent as the consumption of factory-farmed animals, I guess asking for a reduction of it, while simultaneously highlighting that the optimal amount of child abuse is 0, would tend to be more cost-effective than just demanding the end of child abuse.
I assume there should be a portf...
Great post, Matthew! Misaligned AI not being clearly bad is one of the reasons why I have been moving away from AI safety to animal welfare as the most promising cause area. In my mind, advanced AI would ideally be aligned with expected total hedonistic utilitarianism.
Hello,
In the XPT, you ask about the probability of catastrophes where the fraction of the initial population which survives is 90 % (= 1 - 0.10) and 6*10^-7 (= 5*10^3/(8*10^9)). I think it would be good if you asked about intermediate fractions (e.g. 10 %, 1 %, ..., and 10^-7). I guess many forecasters are implicitly estimating their probabilities of extinction from population losses of 99 % to 99.99 %, whereas reaching a population of 5 k (as in your questions about extinction risk) would require a population loss of 99.99994 % (= 1 - 5*10^3/(8*10^9)), wh...
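The fractions above can be checked with a short script (the ~8 billion initial population and the 5 k survival threshold are the figures from the XPT questions):

```python
# Surviving-population fractions implied by the XPT extinction questions.
initial_population = 8e9     # ~8 billion people
extinction_threshold = 5e3   # 5 k survivors

surviving_fraction = extinction_threshold / initial_population
print(surviving_fraction)    # 6.25e-07, i.e. roughly 6*10^-7

population_loss = 1 - surviving_fraction
print(f"{population_loss:.7%}")  # 99.9999375 %, i.e. roughly 99.99994 %
```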
Hi Daniel,
In 2024, 4% of AI R&D tasks are automated; then 32% in 2026, and then singularity happens around when I expected, in mid 2028. This is close enough to what I had expected when I wrote the story that I'm tentatively making it canon.
Relatedly, what is your median time from now until human extinction? If it is only a few years, I would be happy to set up a bet like this one.
Thanks for the comment, Ezrah!
I'd be very interested in seeing a continuation in regards to outcomes (maybe career changes could be a proxy for impact?)
Yes, I think career changes and additional effective donations would be better proxies for impact than outputs like quality-adjusted attendances and calls. Relatedly:
...Animal Advocacy Careers (AAC) ran two longitudinal studies aiming to compare and test the cost-effectiveness of our one-to-one advising calls and our online course. Various forms of these two types of careers advice service have been used by pe
Thanks for the detailed comment. I strongly upvoted it.
I don't think wordcount is a good way to measure information shared.
I don't think wordcount is a fair way to estimate (useful) information shared. I mean it's easy to write many thousands of words that are uninformative, especially in the age of LLMs. I think to estimate useful information shared, it's better to see how much people actually know about your work, and how accurate their beliefs are.
I agree the number of words per grant is far from an ideal proxy. At the same time, the median length...
Great post, titotal!
It looks like you meant to write something after this.
Relatedly, there is this post fr...