All of Vasco Grilo's Comments + Replies

Great post, titotal!

Why should a person focus on your issue in particular, rather than

It looks like you meant to write something after this.

One way to react to this would be to try and compensate for motivation gaps when evaluating the strengths of different arguments. Like, if you are evaluating a claim, and side A has a hundred full time advocates but side B doesn’t, don’t just weigh up current arguments, weigh up how strong side B’s position would be if they also had a hundred full time advocates. A tough mental exercise!

Relatedly, there is this post fr…

Thanks for the follow up, Matthew! Strongly upvoted.

My best guess is also that additional GHG emissions are bad for wild animals, but this guess has very low resilience, so I do not want to advocate for conservationism. My views on the badness of the factory-farming of birds are much more resilient, so I am happy with people switching from poultry to beef, although I would rather have them switch to plant-based alternatives. Personally, I have been eating plant-based for 5 years.

Moreover, as Clare Palmer argues

Just flagging that this link seems broken.

I think you have

…
Matthew Rendall · 19h
Thanks, Vasco! That's odd--the Clare Palmer link is working for me. It's her paper 'Does Nature Matter? The Place of the Nonhuman in the Ethics of Climate Change'--what looks like a page proof is posted on www.academia.edu. One of the arguments in my paper is that we're not morally obliged to do the expectably best thing of our own free will, even if we reliably can, when it would benefit others who will be much better off than we are whatever we do. So I think we disagree on that point. That said, I entirely endorse your argument about heuristics, and have argued elsewhere that even act utilitarians will do better if they reject extreme savings rates.

Nice points, Matthew!

(a) It wasn't clear to me that the estimate of global heating damages was counting global heating damages to non-humans.

I have now clarified that my estimate of the harms of GHG emissions only accounts for humans. I have also added:

estimated the scale of the welfare of wild animals is 4.21 M times that of farmed animals. Nonetheless, I have neglected the impact of GHG emissions on wild animals due to its high uncertainty. According to Brian Tomasik:

“On balance, I’m extremely uncertain about the net impact of climate change on

…
Matthew Rendall · 1d
Thanks, Vasco! You are welcome to list me in the acknowledgements. I’m glad to have the reference to Tomasik’s post, which Timothy Chan also cited below, and appreciate the detailed response. That said, I doubt we should be agnostic on whether the overall effects of global heating on wild animals will be good or bad. The main upside of global heating for animal welfare, on Tomasik’s analysis, is that it could decrease wild animal populations, and thus wild animal suffering. On balance, in his view, the destruction of forests and coral reefs is a good thing. But that relies on the assumption that most wild animal lives are worse than nothing. Tomasik and others have given some powerful reasons to think this, but there are also strong arguments on the other side. Moreover, as Clare Palmer argues, global heating might increase wild animal numbers—and even Tomasik doesn’t seem sure it would decrease them. In contrast, the main downside, in Tomasik’s analysis, is less controversial: that global heating is going to cause a lot of suffering by destroying or changing the habitats to which wild animals are adapted. ‘An “unfavorable climate”’, notes Katie McShane, ‘is one where there isn’t enough to eat, where what kept you safe from predators and diseases in the past no longer works, where you are increasingly watching your offspring and fellow group members suffer and die, and where the scarcity of resources leads to increased conflict, destabilizing group structures and increasing violent confrontations.' Palmer isn’t so sure: ‘Even if some animals suffer and die, climate change might result in an overall net gain in pleasure, or preference satisfaction (for instance) in the context of sentient animals. This may be unlikely, but it’s not impossible.’ True. But even if it’s only unlikely that global heating’s effects will be good, it means that its effects on existing animals are bad in expectation. There’s another factor which Tomasik mentions in passing: there is some

Can you give an example of what might count as "spending to save lives in wars 1k times as deadly" in this context?

For example, if one were comparing wars involving 10 k or 10 M deaths, the latter would be more likely to involve multiple great powers, in which case it would make more sense to improve relationships between NATO, China and Russia.

Thinking about the amounts we might be willing to spend on interventions that save lives in 100-death wars vs 100k-death wars, it intuitively feels like 251x is a way better multiplier than 63,000. So where am I going

…

Thanks for tagging me, Johannes! I have not read the post, but in my mind one should overwhelmingly focus on minimising animal suffering in the context of food consumption. I estimate the harm caused to farmed animals by the annual food consumption of a random person is 159 times that caused to humans by their annual GHG emissions.

Fig. 4 of Kuruc 2023 is relevant to the question. A welfare weight of 0.05 means that one values 0.05 units of welfare in humans as much as 1 unit of welfare in animals, and it would still require a social cost of carbon of over …

Vasco, I've quickly read your post to which the first link leads, so please correct me if I'm wrong. However, it left me wondering about two things:

(a) It wasn't clear to me that the estimate of global heating damages was counting global heating damages to non-humans. The references to DALYs and 'climate change affecting more people with lower income' lead me to suspect you're not. But non-humans will surely be the vast majority of the victims of global heating--as well as, in some cases, its beneficiaries. While Timothy Chan is quite right to point …

Thanks for the comment, Stan!

Using PDF rather than CDF to compare the cost-effectiveness of preventing events of different magnitudes here seems off.

Technically speaking, the way I modelled the cost-effectiveness:

  • I am not comparing the cost-effectiveness of preventing events of different magnitudes.
  • Instead, I am comparing the cost-effectiveness of saving lives in periods of different population losses.

Using the CDF makes sense for the former, but the PDF is adequate for the latter.

You show that preventing (say) all potential wars next year with a death tol

…
Stan Pinsent · 2d
Thanks for the detailed response, Vasco! Apologies in advance that this reply is slightly rushed and scattershot.

I agree that you are right with the maths - it is 251x, not 63,000x. OK, I did not really get this! In your example on wars you say... Can you give an example of what might count as "spending to save lives in wars 1k times as deadly" in this context? I am guessing it is spending money now on things that would save lives in very deadly wars. Something like building a nuclear bunker vs making a bullet proof vest?

Thinking about the amounts we might be willing to spend on interventions that save lives in 100-death wars vs 100k-death wars, it intuitively feels like 251x is a way better multiplier than 63,000. So where am I going wrong?

When you are thinking about the PDF of P_i/P_f, are you forgetting that a change in P_i/P_f is not proportional to a change in P_f? To give a toy example: suppose P_i = 100. Then if 90 < P_f < 100, we have 1 < P_i/P_f < 1.11, whereas if 10 < P_f < 20, we have 5 < P_i/P_f < 10. The "height of the PDF graph" will not capture these differences in width. This won't matter much for questions of 100 vs 100k deaths, but it might be relevant for near-existential mortality levels.

By "pre- and post-catastrophe population", I meant the population at the start and end of a period of 1 year, which I now also refer to as the initial and final population.

I guess you are thinking that the period of 1 year I mention above is one over which there is a catastrophe, i.e. a large reduction in population. However, I meant a random unconditioned year. I have now updated "period of 1 year" to "any period of 1 year (e.g. a calendar year)". Population has been growing, so my ratio between the initial and final population will have a high chance of being lower than 1.

Oh, I didn't mean for you to define the period explicitly as a fixed interval period. I assume this can vary by catastrophe. Like maybe population declines over 5 years with massive crop failures. Or, an engineered pathogen causes massive population decline in a few months.

Hi @MichaelStJules, I am tagging you because I have updated the following sentence. If there is a period longer than 1 year over which population decreases, the power laws describing the ratio between the initial and final population of each of the years following the 1st could have diff…

I think that the risk of human extinction over 1 year is almost all driven by some powerful new technology (with residues for the wilder astrophysical disasters, and the rise of some powerful ideology which somehow leads there). But this is an important class! In general dragon kings operate via something which is mechanically different than the more tame parts of the distribution, and "new technology" could totally facilitate that.

To clarify, my estimates are supposed to account for unknown unknowns. Otherwise, they would be many orders of magnitude lower…

Thanks for the comment, David! I agree all those effects could be relevant. Accordingly, I assume that saving a life in catastrophes (periods over which there is a large reduction in population) is more valuable than saving a life in normal times (periods over which there is a minor increase in population). However, it looks like the probability of large population losses is sufficiently low to offset this, such that saving lives in normal times is more valuable in expectation.

Thanks for clarifying! I agree B) makes sense, and I am supposed to be doing B) in my post. I calculated the expected value density of the cost-effectiveness of saving a life from the product between:

  • A factor describing the value of saving a life.
  • The PDF of the ratio between the initial and final population, which is meant to reflect the probability of a catastrophe (a minimal sketch of this product follows below).
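
To make that product concrete, here is a minimal Python sketch. The tail exponent and value elasticity are hypothetical placeholders, not the post's fitted values:

```python
# Expected value density = value-of-saving-a-life factor * PDF of the
# population ratio r = P_i / P_f (r > 1 means the population shrank).
# All parameter values below are illustrative assumptions.

def pdf_power_law(r, alpha=2.5, r_min=1.0):
    """PDF of a Pareto-distributed ratio r >= r_min with tail index alpha."""
    return alpha * r_min**alpha / r**(alpha + 1)

def value_factor(r, elasticity=1.0):
    """Value of saving a life, assumed to grow with severity as r**elasticity."""
    return r**elasticity

def expected_value_density(r):
    return value_factor(r) * pdf_power_law(r)

# With elasticity < alpha, the density falls as severity rises, so saving
# lives in milder periods is more valuable in expectation:
for r in (1.1, 2.0, 10.0):
    print(f"r = {r:4.1f}: expected value density = {expected_value_density(r):.4g}")
```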
Owen Cotton-Barratt · 4d
I'm worried I'm misunderstanding what you mean by "value density". Could you perhaps spell this out with a stylized example, e.g. comparing two different interventions protecting against different sizes of catastrophe?

if you're primarily trying to model effects on extinction risk

I am not necessarily trying to do this. I intended to model the overall effect of saving lives, and I have the intuition that saving a life in a catastrophe (period over which there is a large reduction in population) conditional on it happening is more valuable than saving a life in normal times, so I assumed the value of saving a life increases with the severity of the catastrophe. One can assume preventing extinction is especially important by selecting a higher value for ("the el…

Owen Cotton-Barratt · 5d
Sorry, I understood that you primarily weren't trying to model effects on extinction risk. But I understood you to be suggesting that this methodology might be appropriate for what we were doing in that paper -- which was primarily modelling effects on extinction risk.

Thanks for the critique, Owen! I strongly upvoted it.

I'm worried that modelling the tail risk here as a power law is doing a lot of work, since it's an assumption which makes the risk of very large events quite small (especially since you're taking a power law in the ratio

Assuming the PDF of the ratio between the initial and final population follows a loguniform distribution (instead of a power law), the expected value density of the cost-effectiveness of saving a life would be constant, i.e. it would not depend on the severity of the catastrophe. However, …
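
For intuition on the loguniform case (a sketch, assuming for illustration that the value factor scales linearly with the ratio $r = P_i/P_f$, with $r$ loguniform on $[a, b]$):

$$f(r) = \frac{1}{r \ln(b/a)}, \qquad v(r) \propto r \quad\Rightarrow\quad v(r)\,f(r) \propto \frac{r}{r \ln(b/a)} = \text{constant}.$$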

Owen Cotton-Barratt · 4d
Without having dug into them closely, these numbers don't seem crazy to me for the current state of the world. I think that the risk of human extinction over 1 year is almost all driven by some powerful new technology (with residues for the wilder astrophysical disasters, and the rise of some powerful ideology which somehow leads there). But this is an important class! In general dragon kings operate via something which is mechanically different than the more tame parts of the distribution, and "new technology" could totally facilitate that.

Unfortunately, for the relevant part of the curve (catastrophes large enough to wipe out large fractions of the population) we have no data, so we'll be relying on theory. My understanding (based significantly just on the "mechanisms" section of that Wikipedia page) is that dragon kings tend to arise in cases where there's a qualitatively different mechanism which causes the very large events but doesn't show up in the distribution of smaller events. In some cases we might not have such a mechanism, and in others we might. It certainly seems plausible to me when considering catastrophes (and this is enough to drive significant concern, because if we can't rule it out it's prudent to be concerned, and risk having wasted some resources if we turn out to be in a world where the total risk is extremely small), via the kind of mechanisms I allude to in the first half of this comment.

I'm confused by some of the set-up here. When considering catastrophes, your "cost to save a life" represents the cost to save that life conditional on the catastrophe being due to occur? (I'm not saying "conditional on occurring" because presumably you're allowed interventions which try to avert the catastrophe.)

My language was confusing. By "pre- and post-catastrophe population", I meant the population at the start and end of a period of 1 year, which I now also refer to as the initial and final population. I have now clarified this in the post.

I assume …

Owen Cotton-Barratt · 5d
Sorry, this isn't speaking to my central question. I'll try asking via an example:
  • Suppose we think that there's a 1% risk of a particular catastrophe C in a given time period T which kills 90% of people
  • We can today make an intervention X, which costs $Y, and means that if C occurs then it will only kill 89% of people
  • We pay the cost $Y in all worlds, including the 99% in which C never occurs
  • When calculating the cost to save a life for X, do you:
    • A) condition on C, so you save 1% of people at the cost of $Y; or
    • B) don't condition on C, so you save an expected 0.01% of people at a cost of $Y?
I'd naively have expected you to do B) (from the natural language descriptions), but when I look at your calculations it seems like you've done A). Is that right?

Thanks for all your comments, Owen!

That paper was explicitly considering strategies for reducing the risk of human extinction.

My expected value density of the cost-effectiveness of saving a life, which decreases as catastrophe severity increases, is supposed to account for longterm effects like decreasing the risk of human extinction.

Owen Cotton-Barratt · 5d
I think if you're primarily trying to model effects on extinction risk, then doing everything via "proportional increase in population" and nowhere directly analysing extinction risk, seems like a weirdly indirect way to do it -- and leaves me with a bunch of questions about whether that's really the best way to do it.

Thanks for the comment, Michael!

Also, to be clear, this is supposed to be ~immediately pre-catastrophe and ~immediately post-catastrophe, right? (Catastrophes can probably take time, but presumably we can still define pre- and post-catastrophe periods.)

I have updated the post changing "pre- and post-catastrophe population" to "population at the start and end of a period of 1 year", which I now also refer to as the initial and final population.

You're modelling the cost-effectiveness of saving a life conditional on catastrophe here, right?

No. It is supposed …

MichaelStJules · 4d
Oh, I didn't mean for you to define the period explicitly as a fixed interval period. I assume this can vary by catastrophe. Like maybe population declines over 5 years with massive crop failures. Or, an engineered pathogen causes massive population decline in a few months. I just wasn't sure what exactly you meant. Another interpretation would be that P_f is the total post-catastrophe population, summing over all future generations, and I just wanted to check that you meant the population at a given time, not aggregating over time.

Hi Sarah,

I have just published a post somewhat related to yours where I wonder whether saving lives in normal times is better to improve the longterm future than doing so in catastrophes.

Thanks, Bob! Yes, I can see the images now.

Hello again Lizka,

When you’re voting, don't do the following:

  • “Mass voting” on many instances of a user’s content simply because it belongs to that user
  • Using multiple accounts to vote on the same post or comment

We will almost certainly ban users if we discover that they've done one of these things. 

Relatedly, I was warned a few days ago that the moderation system notified the EA Forum team that I had voted on another user's comments with concerningly high frequency. I wonder whether this may be a false positive for 2 reasons:

  • I have gone through lots of
…

Hi Bob,

The 1st 2 images are not loading for me.

The last image is fine.

Bob Jacobs · 6d
Hi Vasco, Thanks for notifying me, it's probably because the EA forum switched editors (and maybe also compression algorithm) a while back. I remember struggling with adding images to the forum in the beginning, and now it's easy. I looked at some old posts and it seems like those that used .png and .jpg still displayed them, so people don't need to check up on their old posts. I looked at older comments and both .jpg and .png still work from three years back. I also found a .png in a comment from five years back. Hopefully this helps the devs with debugging, and maybe people should check on their .jpg comments from four years ago or older (mine were jpegs). I reuploaded them and they were visible in another browser, so I think it should be good now.

Hi Lizka,

Have you considered running a survey to get a better sense of the voting norms users are following?

To be clear: what I'm interested in here is human extinction (not any broader conception of "existential catastrophe"), and the bet is about that.

Agreed.

On the question of priors, I liked AGI Catastrophe and Takeover: Some Reference Class-Based Priors. It is unclear to me whether extinction risk has increased in the last 100 years. I estimated an annual nuclear extinction risk of 5.93*10^-12, which is way lower than the prior for wild mammals of 10^-6.

Greg_Colbourn · 8d
I see in your comment on that post, you say "human extinction would not necessarily be an existential catastrophe" and "So, if advanced AI, as the most powerful entity on Earth, were to cause human extinction, I guess existential risk would be negligible on priors?". To be clear: what I'm interested in here is human extinction (not any broader conception of "existential catastrophe"), and the bet is about that.
Greg_Colbourn · 8d
See my comment on that post for why I don't agree. I agree nuclear extinction risk is low (but probably not that low)[1]. ASI is really the only thing that is likely to kill every last human (and I think it is quite likely to do that given it will be way more powerful than anything else[2]).
[1] But to be clear, global catastrophic / civilisational collapse risk from nuclear is relatively high (these often get conflated with "extinction").
[2] Not only do I think it will kill every last human, I think it's quite likely it will wipe out all known carbon-based life.

Would be interested to see your reasoning for this, if you have it laid out somewhere.

I have not engaged so much with AI risk, but my views about it are informed by considerations in the 2 comments in this thread. Mammal species usually last 1 M years, and I am not convinced by arguments for extinction risk being much higher (I would like to see a detailed quantitative model), so I start from a prior of 10^-6 extinction risk per year. Then I guess the risk is around 10 % as high as that because humans currently have tight control of AI development.
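
As a quick sketch of that back-of-the-envelope (the 10 % adjustment is the guess stated above):

```python
# Annual extinction prior from a typical mammal species lifetime of ~1 M years,
# scaled down by a guessed factor reflecting tight human control of AI development.
species_lifetime_years = 1e6
annual_extinction_prior = 1 / species_lifetime_years  # 10^-6 per year
control_adjustment = 0.10                             # guess from the comment above

annual_ai_extinction_risk = annual_extinction_prior * control_adjustment
print(f"{annual_ai_extinction_risk:.0e} per year")    # 1e-07
```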

Is it ma

…
Greg_Colbourn · 8d
Interesting. Obviously I don't want to discourage you from the bet, but I'm surprised you are so confident based on this! I don't think the prior of mammal species duration is really relevant at all, when for 99.99% of the last 1M years there hasn't been any significant technology. Perhaps more relevant is Homo sapiens wiping out all the less intelligent hominids (and many other species).

Thanks! Could you also clarify where is your house, whether you live there or elsewhere, and how much cash you expect to have by the end of 2027 (feel free to share the 5th percentile, median and 95th percentile)?

Greg_Colbourn · 8d
It's in Manchester, UK. I live elsewhere - renting currently, but shortly moving into another owned house that is currently being renovated (I've got a company managing the would-be-collateral house as an Airbnb, so no long term tenants either). Will send you more details via DM. Cash is a tricky one, because I rarely hold much of it. I'm nearly always fully invested. But that includes plenty of liquid assets like crypto. Net worth wise, in 2027, assuming no AI-related craziness, I would expect it to be in the 7-8 figure range (5-95% maybe $500k-$100M).

Thanks for following up, Greg! Strongly upvoted. I will try to understand how I can set up a contract describing the bet with your house as collateral.

Could you link to the post on X you mentioned?

I will send you a private message with Bryan's email.

Definitely seek legal advice in the country and subdivision (e.g., US state) where Greg lives!

You may think of this as a bet, but I'll propose an alternative possible paradigm: it may be a plain old promissory note backed by a mortgage. That is, a home-equity loan with an unconditional balloon payment in five years. Don't all contracts in which one party must perform in the future include a necessarily implied clause that performance is not necessary in the event that the human race goes extinct by that time? At least, I don't plan on performing any of m…

Greg_Colbourn · 10d
Cool, thanks. I link to one post in the comment above. But see also.

Grantees are obviously welcome to do this.

Right, but they have not been doing it. So I assume EA Funds would have to at least encourage applicants to do it, or even make it a requirement for most applications. There can be confidential information in some applications, but, as you said below, applicants do not have to share everything in their public version.

That said, my guess is that this will make the forum less enjoyable/useful for the average reader, rather than more.

I guess the opposite, but I do not know. I am mostly in favour of experimenting with a few applications, and then deciding whether to stop or scale up.

We've started working on this [making some application public], but no promises. My guess is that making public the rejected applications is more valuable than accepted ones, eg on Manifund. Note that grantees also have the option to upload their applications as well (and there are less privacy concerns if grantees choose to reveal this information).

Manifund already has quite a good infrastructure for sharing grants. However, have you considered asking applicants to post a public version of their applications on EA Forum? People who prefer to remain anonym…

Linch · 12d
Grantees are obviously welcome to do this. That said, my guess is that this will make the forum less enjoyable/useful for the average reader, rather than more. 

Nice discussion, Owen and titotal!

But it doesn't make sense to me to analogise it to a risk in putting up a sail.

I think this depends on the timeframe. Over a longer one, looking into the estimated destroyable area by nuclear weapons, nuclear risk looks like a transition risk (see graph below). In addition, I think the nuclear extinction risk has decreased even more than the destroyable area, since I believe greater wealth has made society more resilient to the effects of nuclear war and nuclear winter. For reference, I estimated the current annual nuclear…

Hi JP,

Minor. On the messages page, the screen is currently broken down into 2, with my past conversations on the left, and the one I am focussing on on the right. I would rather have an option to expand the screen on the right such that I do not see the conversations pane on the left, or have an option to hide the conversations pane on the left.

If the point is donor oversight/evaluation/accountability, then I am hesitant to give the grantmakers too much information ex ante on which grants are very likely/unlikely to get the public writeup treatment.

Great point! I had not thought about that. On the other hand, I assume grantmakers are already spending more time on assessing larger grants. So I wonder whether the distribution of the granted amount is sufficiently heavy-tailed for grantmakers to be influenced to spend too much time on them due to their higher chance of being selected for having long…

Jason · 13d
I think I have an older discussion about managing conflicts of interest in grantmaking in the back of my mind. I think that's part of why I would want to see a representative sample of small-to-midsize grant writeups.

Caleb and Linch randomly selected grants from each group.

I think your procedure to select the grants was great. However, would it become even better by making the probability of each grant being selected proportional to its size? In theory, donors should care about the impact per dollar (not impact per grant), which justifies weighting by grant size. This may matter because there is significant variation in grant size. The 5th and 95th percentile amount granted by LTFF are 2.00 k$ and 169 k$, so, especially if one is picking just a few grants as you did (as…
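
For illustration, a minimal sketch of size-weighted sampling (grant names and amounts are hypothetical):

```python
import random

# Hypothetical grants and amounts in $. Weighting selection probability by
# grant size means the audit effectively samples dollars rather than grants.
grants = {"grant A": 2_000, "grant B": 25_000, "grant C": 169_000}

# random.choices draws with probability proportional to the weights
# (with replacement; for distinct picks, remove each grant after drawing).
sampled = random.choices(list(grants), weights=list(grants.values()), k=3)
print(sampled)  # larger grants appear more often
```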

Linch · 13d
Thank you! This is a good point; your analysis makes a lot of sense to me.

I'm late to the discussion, but I'm curious how much of the potential value would be unlocked -- at least for modest size / many grants orgs like EA Funds -- if we got a better writeup for a random ~10 percent of grants (with the selection of the ten percent happening after the grant decisions were made).

Great suggestion, Jason! I think that would be over 50 % as valuable as detailed write-ups for all grants.

Actually, the grants which were described in this post on the Long-Term Future Fund (LTFF) and this on the Effective Altruism Infrastructure Fund (EAI…

On the nitpick: After reflection, I'd go with a mixed approach (somewhere between even odds and weighted odds of selection). If the point is donor oversight/evaluation/accountability, then I am hesitant to give the grantmakers too much information ex ante on which grants are very likely/unlikely to get the public writeup treatment. You could do some sort of weighted stratified sampling, though.

I think grant size also comes into play on the detail level of the writeup. I don't think most people want more than a paragraph, maximum, on a $2K grant. I'd hope f…

Hi Elizabeth,

I think mentioning CE may have distracted from the main point I wanted to convey. 1 paragraph or sentence is not enough for the public to assess the cost-effectiveness of a grant.

I think downvoting comments like the above is harmful:

  • It disincentivises people to make honest efforts to express dissenting views, thus contributing towards creating echo chambers.
  • It increases polarisation.
    • I assume people who believe they are unfairly downvoted will tend to unfairly downvote others more.
    • I had initially not upvoted/downvoted the original post, but then felt like I should downvote the post given my perception that the comment above was unfairly downvoted. I do not endorse my initial retaliatory reaction, and have now upvoted the post as a way of trying to counter my bad intuitions.

For what it's worth, I upvoted and disagree-voted, because I think you're wrong and because you clearly put thought and effort into your writing, and produced the sort of content I think we should generally have more of, even though I'm annoyed locally that "don't do either" is a much easier comment to write than "here's the analysis you asked for", leading to the only serious comments on the post being people stating your view.

Thanks for the analysis, Hauke! I strongly upvoted it.

The mean "CCEI's effect of shifting deploy$ to RD&D$" of 5 % you used in UseCarlo is 12.5 (= 0.05/0.004) times the mean of 0.4 % respecting your Guesstimate model. Which one do you stand by? Since you say "CCEI is part of a much smaller coalition of only hundreds of key movers and shakers", the smaller effect of 0.4 % (= 1/250) would be more appropriate assuming the same contribution for each member of such coalition.

I think you had better estimate the expected cost-effectiveness in t/$ instead…

Hauke Hillebrandt · 14d
Great comment - thanks so much! Regarding CCEI's effect of shifting deploy$ to RD&D$:
  • Yes, in the Guesstimate model the confidence intervals went from 0.1% to 1%, lognormally distributed, with a mean of ~0.4%.
  • With UseCarlo I used a metalog distribution with parameters 0%, 0.1%, 2%, 10%, resulting in a mean of ~5%.
So you're right, there is indeed about an order of magnitude difference between the two estimates:
  • This is mostly driven by my assigning some credence to the possibility that CCEI might have had as much as a 10% influence, which I wouldn't rule out entirely.
  • However, the confidence intervals of the two estimates are overlapping.
  • I agree this is the weakest part of the analysis. As I highlighted, it's a guesstimate motivated by the qualitative analysis that CCEI is part of the coalition of key movers and shakers that shifted budget increases to energy RD&D.
  • I think both estimates are roughly valid given the information available. Without further analysis, I don't have enough precision to zero in on the most likely value.
  • I lost access to UseCarlo during the writeup, and the analysis was delayed for quite some time (I had initially pitched it to FTX as an Impact NFT).
  • I just wanted to get the post out rather than delay further. With more resources, one could certainly dig deeper and make the analysis more rigorous and detailed. But I hope it provides a useful starting point for discussion and further research.
  • One could further nuance this analysis, e.g. by calculating the marginal effect of our $1M on US climate policy philanthropy at the current ~$55M level vs. what it's now.
Thanks also for the astute observation about estimating expected cost-effectiveness in t/$ vs $/t. You raise excellent points and I agree it would be more elegant to estimate it as t/$ for the reasons you outlined. I really appreciate you taking the time to engage substantively with the post.

Hi Saul,

I assume Open Philanthropy (OP) has built quantitative models which estimate GCR, but probably just simple ones, as I would expect a model like Tom's to be published. There may be concerns about information hazards in the context of bio risk, but OP had an approach to quantify it while mitigating them:

A second, less risky approach is to abstract away most biological details and instead consider general ‘base rates’. The aim is to estimate the likelihood of a biological attack or accident using historical data and base rates of analogous scenarios, an

…

Thanks for the comment, Ryan. I agree that report by Joseph Carlsmith is quite detailed. However, I do not think it is sufficiently quantitative. In particular, the probabilities which are multiplied to obtain the chance of an existential catastrophe are directly guessed, as opposed to resulting from detailed modelling (in contrast to the AI takeoff speeds calculated in Tom's report). Joseph was mostly aiming to qualitatively describe the arguments, as opposed to quantifying the risk:

My main hope, though, is not to push for a specific number, but rather to

…

Thanks for following up!

I already do work for an animal welfare organization.

Cool!

I looked at the study and it's not about Belgian hospitals, so it doesn't really apply to me.

Even if there is no direct nearterm financial cost, you could plausibly use the time saved by not donating a kidney to generate at least 1.05 $? For example, I guess the cost to your parents would be higher than this, so they might be happy to donate a few dollars to THL for you not to donate a kidney. Even if not now, the time you save may also increase your income by more than 1.05 …

  • I don't think we can just equate 15 QALY's to 15 DALY's, these are different metrics. I tried to find a converter online but it looks like there is no consensus on how to do that.
  • Additional benefits of making someone an EA include: doing part-time/volunteer work (e.g. currently everyone at effectief geven is a volunteer), and them making other people EAs (spreading the generated expected QALY's further).
  • Same things could be said for veganism, which is less likely with a one time donation since people don't make that part of their identity. But the cost-eff
…
Answer by Vasco Grilo · Apr 10, 2024

Thanks for your willingness to contribute to a better world, Bob!

Have you considered not donating either of those, and instead support the best animal welfare interventions?

  • If donating a kidney averts 15 DALY (= (10 + 20)/2), and costs you 1 k$[1], the cost-effectiveness would be 0.015 DALY/$, which is similar to the cost-effectiveness of GiveWell's top charities of around 0.01 DALY/$ (50 DALY per 5 k$).
  • However, I think corporate campaigns for chicken welfare, like the ones supported by The Humane League (THL), have a cost-effectiveness of 14.3 DALY/$ (= 8.20*
…
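
A quick sketch of the arithmetic behind this comparison, using the figures quoted above (the 1 k$ kidney-donation cost is the footnoted assumption):

```python
# Cost-effectiveness in DALY/$ for the three options compared above.
kidney_ce = ((10 + 20) / 2) / 1_000  # 15 DALY averted per assumed $1k cost
givewell_ce = 50 / 5_000             # 50 DALY per $5k for top charities
thl_ce = 14.3                        # corporate campaigns for chicken welfare

print(f"kidney: {kidney_ce} DALY/$")      # 0.015
print(f"GiveWell: {givewell_ce} DALY/$")  # 0.01
print(f"THL: {thl_ce} DALY/$, i.e. {thl_ce / kidney_ce:.0f}x the kidney donation")
```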
Vasco Grilo · 14d
I think downvoting comments like the above is harmful:
  • It disincentivises people to make honest efforts to express dissenting views, thus contributing towards creating echo chambers.
  • It increases polarisation.
    • I assume people who believe they are unfairly downvoted will tend to unfairly downvote others more.
    • I had initially not upvoted/downvoted the original post, but then felt like I should downvote the post given my perception that the comment above was unfairly downvoted. I do not endorse my initial retaliatory reaction, and have now upvoted the post as a way of trying to counter my bad intuitions.

Hi Vasco,

I already do work for an animal welfare organization. I looked at the study and it's not about Belgian hospitals, so it doesn't really apply to me. Some of the listed costs aren't present (I don't have a wage so no wage loss), those that are present are mostly paid for by the state (travel, accommodation, medical...) and those that aren't are paid for by my parents (housework). The only one that applies is "Small cash payments for grocery items (eg, tissue paper)" which is negligible, so the expected DALY per dollar is extremely high.

In Belgium yo…

Thanks for sharing, Conrad!

This project was completed as part of contract work with Open Philanthropy

I wonder whether Open Philanthropy (OP) should have commissioned an analysis like yours much sooner. More importantly, I am a little confused about why OP would want to know how much is being spent on biosecurity & pandemic preparedness at this stage. Neglectedness may be a good heuristic to identify promising areas at an early stage, but OP has now granted 191 M$ to interventions in that area, according to their grants' database on 17 February 2024. So …

(I do wonder if there's an effect where because we communicate our overall views so much, we become a more obvious/noticeable target to criticize.)

To be clear, the criticisms I make in the post and comments apply to all grantmakers I mentioned in the post except for CE.

Well, I haven't read CE's reports. Have you?

I have skimmed some, but the vast majority of my donations have been going to AI safety interventions (via LTFF). I may read CE's reports in more detail in the future, as I have been moving away from AI safety to animal welfare as the most promisin…

Thanks for the post, Emre!

“We would never ask child abusers to commit less child abuse, so we can’t ask other people to reduce their animal product consumption. We must ask them to end it.”

I would ask for whatever more cost-effectively decreases child abuse. If child abuse was as prevalent as the consumption of factory-farmed animals, I guess asking for a reduction of it, while simultaneously highlighting that the optimal amount of child abuse is 0, would tend to be more cost-effective than just demanding the end of child abuse.

I assume there should be a portf…

Great post, Matthew! Misaligned AI not being clearly bad is one of the reasons why I have been moving away from AI safety to animal welfare as the most promising cause area. In my mind, advanced AI would ideally be aligned with expected total hedonistic utilitarianism.

Hello,

In the XPT, you ask about the probability of catastrophes where the fraction of the initial population which survives is 90 % (= 1 - 0.10) and 6*10^-7 (= 5*10^3/(8*10^9)). I think it would be good if you asked about intermediate fractions (e.g. 10 %, 1 %, ..., and 10^-7). I guess many forecasters are implicitly estimating their probabilities of extinction from population losses of 99 % to 99.99 %, whereas reaching a population of 5 k (as in your questions about extinction risk) would require a population loss of 99.99994 % (= 1 - 5*10^3/(8*10^9)), wh…

Hi Daniel,

In 2024, 4% of AI R&D tasks are automated; then 32% in 2026, and then singularity happens around when I expected, in mid 2028. This is close enough to what I had expected when I wrote the story that I'm tentatively making it canon.

Relatedly, what is your median time from now until human extinction? If it is only a few years, I would be happy to set up a bet like this one.

Thanks for the comment, Ezrah!

I'd be very interested in seeing a continuation in regards to outcomes (maybe career changes could be a proxy for impact?)

Yes, I think career changes and additional effective donations would be better proxies for impact than outputs like quality-adjusted attendances and calls. Relatedly:

Animal Advocacy Careers (AAC) ran two longitudinal studies aiming to compare and test the cost-effectiveness of our one-to-one advising calls and our online course. Various forms of these two types of careers advice service have been used by pe

…

Thanks for the detailed comment. I strongly upvoted it.

I don't think wordcount is a good way to measure information shared.

I don't think wordcount is a fair way to estimate (useful) information shared. I mean it's easy to write many thousands of words that are uninformative, especially in the age of LLMs. I think to estimate useful information shared, it's better to see how much people actually know about your work, and how accurate their beliefs are. 

I agree the number of words per grant is far from an ideal proxy. At the same time, the median length…

Linch · 15d
(Appreciate the upvote!) At a high level, I'm of the opinion that we practice better reasoning transparency than ~all EA funding sources outside of global health, e.g. a) I'm responding to your thread here and other people have not, b) (I think) people can have a decent model of what we actually do rather than just an amorphous positive impression, and c) I make an effort of politely delivering messages that most grantmakers are aware of but don't say because they're worried about flack.

It's really not obvious that this is the best use of limited resources compared to e.g. engaging with large donors directly or having very polished outwards-facing content, but I do think criticizing our lack of public output is odd given that we invest more in it than almost anybody else. (I do wonder if there's an effect where because we communicate our overall views so much, we become a more obvious/noticeable target to criticize.)

Well, I haven't read CE's reports. Have you? I think you have a procedure-focused view where the important thing is that articles are written, regardless of whether they're read. I mostly don't personally think it's valuable to write things people don't read (though again, for all I know CE's reports are widely read, in which case I'd update!). And it's actually harder to write things people want to read than to just write things. (To be clear, I think there are exceptions. E.g., all else equal, writing up your thoughts/cruxes/BOTECs is good even if nobody else reads them because it helps with improving quality of thinking.)

We've started working on this, but no promises. My guess is that making public the rejected applications is more valuable than accepted ones, e.g. on Manifund. Note that grantees also have the option to upload their applications as well (and there are fewer privacy concerns if grantees choose to reveal this information).