All of Toby_Ord's Comments + Replies

Thanks Danny! 

For those who are unaware, Benefit-Cost Analysis (BCA) is the main form of quantitative evaluation of cost-effectiveness in the US, UK, and beyond. Two of the biggest problems with it as a method are its counting of the value of a dollar equally for all people (which leads to valuing people themselves in proportion to their income) and its use of a high, constant discount rate. So action on both of these fronts is a big improvement to quantitative priority setting in the US!

Yes, I completely agree. When I was exploring questions about wild animal welfare almost 20 years ago, I was very surprised to see how the idea of thinking about individual animals' lives was so foreign to the field.

While I only had time to quickly read this piece, I agree with much of what I read and think it is a great contribution to the literature. 

To clarify my own view, I think animals matter a great deal — now and over the longterm future. The focus on humanity in my work is primarily because we are the only moral agents we know of. In philosophical terms, this means that humanity has immense instrumental value. If we die, then as far as we know, there is nothing at all striving to shape the future of Earth (or our whole galaxy) towards what is good or jus... (read more)

BrownHairedEevee · 2mo
I didn't write the paper, but thank you for the comment, Prof. Ord! I appreciate your perspective. I also personally am not sold on the biosphere having negative overall value. I think the immense number of sentient beings that spend large portions of their lives suffering makes it a real possibility, but I am not 100% sure that utilitarianism is true when it comes to balancing wild animal welfare and broader ecological health. I think that humanity needs to spend more effort figuring out what is ultimately of value, and because the ecological view has been dominant in environmental ethics to date, I believe the WAW view deserves more consideration and to be integrated into humanity's thought process even if it is not ultimately accepted.

Thanks Shakeel! This is an excellent post. There are so many big wins in the EA community that it can be hard to see the big picture and keep them all in mind. We all strive to see the big picture, but sometimes even for us, the latest drama can quickly drive the big successes from our memory. So big picture summaries like this are very useful.

Indeed, summaries of the whole period of EA achievements, or summaries that include setbacks as well as wins would also be good.

Very exciting news! Welcome Zach — this is making me feel optimistic about EA in 2024.

Toby_Ord · 5mo

Thanks so much for writing this, and even more for all you've done to help those less fortunate than yourself.

I'm glad I did that Daily Politics spot! It was very hard to tell in the early days how impactful media work was (and it still is!) so examples like this are very interesting.

Thanks so much for all your hard work on CEA/EV over the many years. You have been such a driving force in developing the ideas, the community, and the institutions we needed to help make it all work well. Much of that work happened through CEA/EV, and before that through Giving What We Can and 80,000 Hours before we'd set up CEA to house them, so this is definitely in some sense the end of an era for you (and for EV). But a lot of your intellectual work and vision has always transcended the particular organisations, and I'm really looking forward to much more of that to come!

Oh, I definitely agree that the guilt narrative has some truth to it too, and that the final position must be some mix of the two, with somewhere between a 10/90 and 90/10 split. But I'd definitely been neglecting the 'we got used' narrative, and had assumed others were too (though aprilsun's comment suggests I might be incorrect about that).

I'd add that for different questions related to the future of EA, the different narratives change their mix. For example, the 'we got used' narrative is at its most relevant if asking about 'all EAs except Sam'. But if... (read more)

Michael_PJ · 7mo
I do think it's an interesting question whether EA is prone to generate Sams at higher than the base rate. I think it's pretty hard to tell from a single case, though.

This is a very interesting take, and very well expressed. You could well be right that the narrative that 'we got used' is the most correct simple summary for EAs/EA. And I definitely agree that it is an under-rated narrative. There could even be psychological reasons for that (EAs being more prone to guilt than to embarrassment?).

I note that even if P(FTX exists | EA exists) were quite a bit higher than P(FTX exists | ~EA exists), that could be compatible with your suggested narrative of EAs being primarily marks/victims. To reuse your example, if you were... (read more)

Michael_PJ · 7mo
Yes, this is a good point. I notice that I don't in fact feel very moved by arguments that P(FTX exists | EA exists) is higher, I think for this reason. So perhaps I shouldn't have brought that argument up, since I don't think it's the crux (although I do think it's true, it's just over-determining the conclusion).

I agree the primary role of EAs here was as victims, and that presumably only a couple of EAs intentionally conspired with Sam. But I wouldn't write it off as just social naivete; I think there was also some negligence in how we boosted him, e.g.:

  • Some EAs knew about his relationship with Caroline, which would undermine the public story about FTX<->Alameda relations, but didn't disclose this.
  • Some EAs knew that Sam and FTX weren't behaving frugally, which would undermine his public image, but also didn't disclose.
  • Despite warnings from early-Alameda peo
... (read more)
Toby_Ord · 7mo

Nick is being so characteristically modest in his descriptions of his role here. He was involved in EA right from the start — one of the members of Giving What We Can at launch in 2009 — and he soon started running our first international chapter at Rutgers, before becoming our director of research. He contributed greatly to the early theory of effective altruism and, along with Will and me, was one of the three founding trustees of the Centre for Effective Altruism. I had the great pleasure of working with him in person for a while at Oxford University, before he moved back to the States to join Open Philanthropy. He was always thoughtful, modest, and kind. I'm excited to see what he does next.

Thanks so much for writing this Arthur.

I'm still interested in the possibilities of changing various aspects of how the EA community works, but this post does a great job of explaining important things that we're getting right already and — perhaps more importantly — in helping us remember what it is all about and why we're here.

The term 'humanity' is definitely intended to be interpreted broadly. I was more explicit about this in The Precipice and forgot to reiterate it in this paper. I certainly want to include any worthy successors to homo sapiens. But it may be important to understand the boundary of what counts. A background assumption is that the entities are both moral agents and moral patients — capable of steering the future towards what matters and for being intrinsically part of what matters. I'm not sure if those assumptions are actually needed, but they were guiding m... (read more)

I think I may have been a bit too unclear about which things I found more promising than others. Ultimately the chapter is more about the framework, with a few considerations added for and against each of the kinds of idealised changes, and no real attempt to be complete about those or make all-things-considered judgments about how to rate them. Of the marginal interventions I discuss, I am most excited about existential-risk reduction, followed by enhancements.

As to your example, I feel that I might count the point where the world became permanently contr... (read more)

I think "open to speed-ups" is about right. As I said in the quoted text, my conclusion was that contingent speed-ups "may be possible". They are not an avenue for long-term change that I'm especially excited about. The main reason for including them here was to distinguish them from advancements (these two things are often run together) and because they fall out very natural as one of the kinds of natural marginal change to the trajectory whose value doesn't depend on the details of the curve. 

That said, it sounds like I think they are a bit more lik... (read more)

I think that is indeed a good analogy. In the chapter, I focus on questions of marginal changes to the curve where there is a small shock to some parameter right now, and then its effects play out. If one is instead asking questions that are less about what we now could do to change the future, but are about how humanity over deep time should act to have a good future, then I think the optimal control techniques could be very useful. And I wouldn't be surprised if in attempting to understand the dynamics, there were lessons for our present actions too.

>This naturally lends itself to the idea that there are two main ways of improving the future: increasing v-bar and increasing τ.

I think this is a useful two factor model, though I don't quite think of avoiding existential risk just as increasing τ. I think of it more as increasing the probability that it doesn't just end now, or at some other intermediate point. In my (unpublished) extensions of this model that I hint at in the chapter, I add a curve representing the probability of surviving to time t (or beyond), and then think of raising this curv... (read more)
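To make the quantities concrete, here is a minimal sketch of the two-factor model and of one natural way the hinted survival-curve extension could be formalised (the survival function S(t) and the weighted integral are my own assumed notation, not taken from the chapter):

```latex
% Two-factor view: total value of the trajectory as average value times duration
V \;=\; \int_0^{\tau} v(t)\,\mathrm{d}t \;=\; \bar{v}\,\tau

% Assumed form of the survival-curve extension: weight instantaneous value v(t)
% by S(t), the probability of surviving to time t (or beyond)
\mathbb{E}[V] \;=\; \int_0^{\infty} S(t)\,v(t)\,\mathrm{d}t
% On this reading, reducing existential risk raises the whole curve S(t)
% rather than simply increasing \tau.
```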

One common issue with “existential risk” is that it’s so easy to conflate it with “extinction risk”. It seems that even you end up falling into this use of language. You say: “if there were 20 percentage points of near-term existential risk (so an 80 percent chance of survival)”. But human extinction is not necessary for something to be an existential risk, so 20 percentage points of near-term existential risk doesn’t entail an 80 percent chance of survival.

In this case I meant 'an 80 percent chance of surviving the threat with our potential intact', or of... (read more)

>Length of advancements / delays

Good point about the fact that I was focusing on some normal kind of economic trajectory when assessing the difficulty of advancements and delays. Your examples are good, as is MichaelStJules' comment about how changing the timing of transformative AI might act as an advancement.

>Aliens

You are right that the presence or absence of alien civilisations (especially those that expand to settle very large regions) can change things. I didn't address this explicitly because (1) I think it is more likely that we are alone in the affectable universe, and (2) there are many different possible dynamics for multiple interacting civilisations and it is not clear which model is best. But it is still quite a plausible possibility and some of the possible dynamics are likely enough and simple enough that they are worth analysing.

I'm not su... (read more)

>Continued exponential growth

I agree that there is a kind of Pascalian possibility of very small probabilities of exponential growth in value going for extremely long times. If so, then advancements scale in value with v-bar and with τ. This isn't enough to make them competitive with existential risk reduction ex ante, as they are still down-weighted by the very small probability. But it is perhaps enough to cause some issues. Worse is that there is a possibility of growth in value that is faster than an exponential, and this can more than offset the ve... (read more)
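A rough reconstruction of why advancements scale with v-bar and τ under sustained exponential growth (my own derivation, with g an assumed constant growth rate, not a figure from the comment):

```latex
% Assume v(t) = v_0 e^{g t} for 0 \le t \le \tau, so that V = \int_0^{\tau} v(t)\,\mathrm{d}t = \bar{v}\,\tau.
% An advancement by a small time a shifts the whole curve to v(t + a), provided growth continues:
V_{\mathrm{adv}} \;=\; \int_0^{\tau} v(t+a)\,\mathrm{d}t \;=\; e^{g a}\,V
% so the gain from the advancement is
\Delta V \;=\; \bigl(e^{g a} - 1\bigr) V \;\approx\; g\,a\,\bar{v}\,\tau \qquad (g a \ll 1),
% which grows with both \bar{v} and \tau, as stated above.
```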

Thanks Will, these are great comments — really taking the discussion forwards. I'll try to reply to them all over the next day or so.

As you say, there is an issue that some of these things might really be enhancements because they aren't of a fixed size. This is especially true for those that have instrumental effects on the wellbeing of individuals: if those effects increase with total population or with the wellbeing level of those individuals, then they can be enhancements. So cases where there is a clearly fixed effect per person and a clearly fixed number of people who benefit would be good candidates.

As are cases where the thing is of intrinsic non-welfarist value. Though there... (read more)

You may be right that this is more than a 'tweak'. What I was trying to imply is that the framework is not wildly different. You still have graphs, integrals over time, decomposition into similar variables etc — but they can behave somewhat differently. In this case, the resources approach is tracking what matters (according to the cited papers) faithfully until expansion has ended, but then is indifferent to what happens after that, which is a bit of an oversimplification and could cause problems.

I like your example of speed-up in this context of large-sc... (read more)

I've thought about this a lot and strongly think it should be the way I did it in this chapter. Otherwise all the names are off by one derivative. e.g. it is true that for one of my speed-ups, one has to temporarily accelerate, but you also have to temporarily change every higher derivative too, and we don't name it after those. The key thing that changes permanently and by a fixed amount is the speed.
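A sketch of the 'off by one derivative' point in my own notation (x(t) here stands for whatever quantity the trajectory tracks; this is an illustration, not the chapter's formalism):

```latex
% A speed-up at time t_0 is a permanent, fixed-size change to the first derivative:
x'_{\mathrm{new}}(t) \;=\; x'(t) + s \qquad \text{for } t > t_0 .
% Producing it only requires a temporary change to the acceleration (and to every
% higher derivative), since the accelerations need only satisfy
\int_{t_0}^{t_0+\epsilon} \bigl[\,x''_{\mathrm{new}}(t) - x''(t)\,\bigr]\,\mathrm{d}t \;=\; s .
% The thing that changes permanently and by a fixed amount is the speed x'(t),
% which is why the intervention is named after it.
```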

It's because I'm not intending the trajectories to be a measure of all value in the universe, only the value we affect through our choices. When humanity goes extinct, it no longer contributes intrinsic value through its own flourishing and it has no further choices which could have instrumental value, so you might expect its ongoing value to be zero. And it would be on many measures. 

Setting up the measures so that it goes to zero at that point also greatly simplifies the analysis, and we need all the simplification we can get if we want to get a gra... (read more)

Good point. I may not be clear enough on this in the piece (or even in my head). I definitely want to value animal wellbeing (positive and negative) in moral choices. The question is whether this approach can cleanly account for that, or if it would need to be additional. Usually, when I focus on the value of humanity (rather than all animals) it is because we are the relevant moral agent making the choices and because we have tremendous instrumental value — in part because we can affect other species for good or for ill. That works for defining existentia... (read more)

Fai · 9mo

Thank you for the reply, Toby. I agree that humanity has instrumental value to all sentient beings. And I am glad that you want to include animals when you talk about shaping the future.

>This might work, though does have some surprising effects, such as that even after our extinction, the trajectory might not stay at zero

I wonder why you think this would be surprising? If humans are not the only beings who have intrinsic value, why is it surprising that there would be value left after humans go extinct?

One approach would be to say the curve represents the instrumental effects of humanity on intrinsic value of all beings at that time. This might work, though does have some surprising effects, such as that even after our extinction, the trajectory might not stay at zero, and different trajectories could have different behaviour after our extinction.

This seems very natural to me and I'd like us to normalise including non-human animal wellbeing, and indeed the wellbeing of any other sentience, together with human wellbeing in analyses such as these. 

We should use a different term than "humanity". I'm not sure what the best choice is, perhaps "Sentientity" or "Sentientkind".

The text of the book is almost completely finalised now, and my guess is that it will be out early 2023 (print publishing is usually slow). It is an academic book, published with Oxford University Press, and I think it will be open access too. My guess is that sales will be only the tiniest fraction of those of WWOTF, and that free downloads will be bigger, but still a small fraction.

I can confirm it's open access. I know this because my team is translating it into Spanish and OUP told us so. Our aim is to publish the Spanish translation simultaneously with the English original (the translation will be published online, not in print form).

Vasco Grilo · 9mo
Nitpick, I think you meant 2024 instead of 2023 (unless you are planning on doing some time-travelling!).

This is a great point and would indeed have been good to include.

There is a plausible case that advancing AI roughly advances everything (as well as having other effects on existential risk …), making advancements easier if they are targeting AI capabilities in particular and making advancements targeting everything else harder. That said, I still think it is easier to reduce risk by 0.0001% than to advance AI by 1 year — especially doing the latter while not increasing risk by a more-than-compensating amount.

Thanks!

The idea is that advancing overall progress by a year means getting a year ahead on social progress, political progress, moral progress, scientific progress, and of course, technological progress. Given that our progress in these is the result of so many people's work, it seems very hard to me for a small group to lead to changes that improve that by a whole year (even over one's lifetime). Whereas a small group leading to changes in the neglected area of existential risk that reduce it by 0.0001% seems a lot more plausible to me — in fact, I'd guess we've already achieved that.

I'm not sure about Tarsney's model in particular, but on the model I use in The Edges of Our Universe, a year's delay in setting out towards the most distant reaches of space results in reaching about 1 part in 5 billion fewer stars before they are pulled beyond our reach by cosmic expansion. If reaching them or not is the main issue, then that is comparable in value to a 1 in 5 billion existential risk reduction, but sounds a lot harder to achieve.

Toby_Ord · 9mo
Even in a model where it really matters when one arrives at each point in space (e.g. if we were merely collecting the flow of starlight, and where the stars burning out set the relevant end point for useful expansion) I believe the relevant number is still very small: 4/R where R is the relevant radius of expansion in light years. The 4 is because this grows as a quartic. For my model, it is 3/R, where R is 16.7 billion light years.  
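A quick numerical check of these figures (a sketch only; the quartic case reuses the same radius purely for illustration, since the comment does not give a separate R for that model):

```python
# Approximate fraction of reachable value lost per year of delayed expansion,
# assuming value scales with the k-th power of the reachable radius R (in light
# years), so that a one-light-year reduction in R costs roughly k/R of the total.

def delay_cost_per_year(k: int, radius_ly: float) -> float:
    """Fraction of value lost per year of delay under the k-th power scaling."""
    return k / radius_ly

R = 16.7e9  # light years, the radius quoted in the comment above

print(f"cubic model (3/R):   1 part in {1 / delay_cost_per_year(3, R):,.0f}")
# -> roughly 1 part in 5.6 billion, consistent with the 'about 1 part in
#    5 billion' figure mentioned earlier in the thread
print(f"quartic model (4/R): 1 part in {1 / delay_cost_per_year(4, R):,.0f}")
```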

I thought section 6.1 on 'Cumulative risk and intergenerational coordination' was very good. Many people (including those promoting action on existential risk) neglect how important it is that we get risk down and then keep it down. This is a necessary part of what I call existential security in my section of The Precipice devoted to our longterm strategy. And it is not easy to achieve. One strategy I talk about is implementing a constitution for humanity, committing future generations to work within their own diminishing share of a finite existential risk... (read more)

Re the third 'mistake', there is a long history of thinking that carrying capacity is a decent proxy for long term population. Is it a good proxy? Probably not in many situations. Is it better than extrapolating out the current growth dynamics for millions of years? Probably. My guess is that it is a simple defensible rough model here. And by laying out separate estimates for different scales being reached, there is also a pretty good sensitivity analysis. I think you are right that this could be improved by adding cases of permanent population collapse to... (read more)

Regarding the 'second mistake', I don't see how it is very different from the first one. If there remains high average per-period risk, then the expected benefits of avoiding near-term risk are indeed greatly lowered — from 'overwhelming' to just 'large'. In effect, it starts to approach the level of risk to currently existing people (which is sometimes argued to be so large already that we don't need to talk about future generations).

But it doesn't seem unreasonable to me for Millet and Snyder-Beattie to model things with an expected lifespan for humanity e... (read more)

David Mathers · 9mo
'But risk staying high would be a more contentious assumption.'  Why? I take it this is really the heart of the disagreement, so it would be good to hear what makes you think this. 

Regarding the 'first mistake', you correctly show that survival of a species for a billion years requires reaching a low per-period level of risk (averaging roughly 1 in a billion per year). I don't disagree with that and I doubt Bostrom would either. No complex species has yet survived so long, but that is partly because there have been less than 1 billion years since complex life began. But there are species (or at least families) that have survived almost the whole time, such as the Nautilus (which has survived 500 million years). So risk levels compara... (read more)
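To see why surviving a billion years pins average per-period risk down to roughly 1 in a billion per year, here is a minimal numerical sketch (the sample risk levels are illustrative, not taken from the post):

```python
import math

def survival_probability(annual_risk: float, years: float) -> float:
    """Probability of surviving the whole period under a constant annual risk."""
    # Work in log space to avoid underflow over very long horizons.
    return math.exp(years * math.log1p(-annual_risk))

YEARS = 1e9
for annual_risk in (1e-6, 1e-8, 1e-9, 1e-10):
    p = survival_probability(annual_risk, YEARS)
    print(f"annual risk {annual_risk:.0e}: P(survive 1 billion years) ~ {p:.3g}")
# annual risk 1e-06: ~0        (about e^-1000)
# annual risk 1e-08: ~4.5e-05  (about e^-10)
# annual risk 1e-09: ~0.37     (about e^-1)
# annual risk 1e-10: ~0.90
```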

Hi David,

Thanks for sharing this.

My main reaction is that I was puzzled by the framing. It is obviously an allusion to Parfit's 'Five Mistakes in Moral Mathematics'. But there are major differences. Parfit was objecting to pieces of maths that are embedded in our common-sense understanding of morality, such as the 'share of the total' view. He argued that the maths of morality is different to that. You are complaining about three modelling assumptions about the empirics of risk over time and population over time. You don't present any disagreement with the m... (read more)

Thanks Toby! Comments much appreciated. 

>extremely ideological group houses like early Toby Ord's

You haven't got your facts straight. I have never lived in a group house, let alone an extremely ideological one.

Huh! Retracted. I'm sorry.

Thank you so much for writing so clearly and compellingly about what happened to you and the subculture which encourages treating women like this.

There is no place for such a subculture in EA (or anywhere else).

Lucretia · 10mo
Thank you for your kind words.

Thanks Vasco,

Interesting analysis. Here are a few points in response:

  • It is best to take my piece as an input into a calculation of whether voting is morally justified on account of changing the outcome — it is an input in the form of helping to work out the probability that the outcome gets changed. More analysis would be needed to make the overall moral case — especially in the many voting systems that have multiple levels, where it may be much more important to vote in marginal seats and much less important in safe seats, so taking the average may be inappropriate.
  • You mak
... (read more)
Vasco Grilo · 1y
Thanks for the reply, great points! I think this relates to this (great!) post from Brian Tomasik. I was assuming the doubling of real GDP corresponded to all the benefits. I can see 0.1 % of the 30 k$ going to something of extremely high value, but it can arguably lead to extremely high disvalue too. In addition, I would say it is unclear whether increasing real GDP is good, because it does not necessarily lead to differential progress (e.g. it can increase carbon emissions, consumption of animal products, and shorten AI timelines). Some longtermist interventions seem more robustly good, not those around AI, but ones like patient philanthropy, or increasing pandemic preparedness, or civilisation resilience. Repeating my analysis for existential risk:

  • Based on the existential risk between 2021 and 2120 of 1/6 you guessed in The Precipice (which I really liked!), the annual existential risk is 0.182 % (= 1 - (1 - 1/6)^(1/100)).
  • If one assumes the benefit of one vote corresponds to eliminating 2 times the annual existential risk per capita (because maybe only half of the population votes), it would be 4.55*10^-13 (= 2*(0.182 %)/(8*10^9)). I may be underestimating the annual existential risk per capita due to high-income countries having greater influence, but overestimating due to existential risk arguably being lower early on.
  • Assuming the LTFF has a cost-effectiveness of 3.16 bp/G$, which is the geometric mean of the lower and upper bound proposed by Linchuan Zhang here, the benefit of one vote would amount to donating about 1.44 $ (= 4.55/3.16) to the LTFF.
  • For a salary of 20 $/h, 1.44 $ is earned in 4 min. This is similar to what I got before, and continues to suggest one should not spend much time voting if the counterfactual is working on 80,000 Hours' most pressing problems, or earning to support interventions aiming to solve them.
  • However, the analysis is so uncertain now that one can arrive at a different conclusion with reasonable inputs. So my
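For readers who want to check the arithmetic, here is a minimal sketch reproducing the numbers above (all inputs are the comment's own assumptions):

```python
# Reproducing the arithmetic in the comment above.
cumulative_risk_100y = 1 / 6                 # guessed risk for 2021-2120 in The Precipice
annual_risk = 1 - (1 - cumulative_risk_100y) ** (1 / 100)
print(f"annual existential risk  ~ {annual_risk:.3%}")              # ~0.182 %

population = 8e9
risk_reduction_per_vote = 2 * annual_risk / population              # 2x: only ~half vote
print(f"risk reduction per vote  ~ {risk_reduction_per_vote:.2e}")  # ~4.55e-13

ltff_bp_per_billion_usd = 3.16                                      # basis points per G$
ltff_risk_per_dollar = ltff_bp_per_billion_usd * 1e-4 / 1e9
dollar_equivalent = risk_reduction_per_vote / ltff_risk_per_dollar
print(f"equivalent LTFF donation ~ ${dollar_equivalent:.2f}")       # ~$1.44

wage_per_hour = 20
print(f"earned in                ~ {dollar_equivalent / wage_per_hour * 60:.0f} min")  # ~4 min
```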

Something like that. Geoffrey Brennan and Lomasky indeed present the binomial formula and suggest using it in their earlier work, but I haven't found a case of them using it in any particular way (which could get results like Jason Brennan's or results like Banzhaf's), so I didn't want to pin this on them. So I cited Jason Brennan, who uses it to produce these crazily low probabilities in his book. It is possible that Jason Brennan didn't do the calculations himself and that someone else did (either Geoffrey Brennan and Lomasky or others), but I don't know and haven't found an earlier source for the crazy numbers.
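To illustrate why the binomial formula produces such crazily low probabilities when the probability p of a random voter favouring one candidate is treated as a fixed number slightly away from 50% (the electorate size and p values below are hypothetical, chosen only for illustration):

```python
import math

def log10_prob_exact_tie(n_voters: int, p: float) -> float:
    """log10 of the binomial probability that n_voters split exactly evenly."""
    k = n_voters // 2
    log_binom = (math.lgamma(n_voters + 1) - math.lgamma(k + 1)
                 - math.lgamma(n_voters - k + 1))
    log_prob = log_binom + k * math.log(p) + (n_voters - k) * math.log(1 - p)
    return log_prob / math.log(10)

N = 1_000_000  # hypothetical electorate size
for p in (0.5, 0.505, 0.51):
    print(f"p = {p}: P(exact tie) ~ 10^{log10_prob_exact_tie(N, p):.0f}")
# p = 0.5:   ~10^-3  (about 1 in 1,250)
# p = 0.505: ~10^-25
# p = 0.51:  ~10^-90
```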

[anonymous] · 1y
Jason Brennan discusses the background on it a bit here: https://bleedingheartlibertarians.com/2012/10/on-the-probability-of-being-decisive/ (Gelman responds in the comments).

A caution re interpreting my argument in two-level elections:

One might read the above piece as an argument that voting is generally worthwhile. But note that the two-level structure of many elections (at least in countries without PR) does dampen the value of voting for many voters. e.g. if you are in the 10%+ of the US population who live in California, then not only are you very unlikely to cast a decisive vote to win the state's electoral college votes (since the probability that the underdog wins is very low), but it is very likely that in the situa... (read more)

I've just seen your comment further down:

>What we’re arguing for is a criterion: governments should fund all those catastrophe-preventing interventions that clear the bar set by cost-benefit analysis and altruistic willingness to pay. One justification for funding these interventions is the justification provided by CBA itself, but it need not be the only one. If longtermist justifications help us get to the place where all the catastrophe-preventing interventions that clear the CBA-plus-AWTP bar are funded, then there’s a case for employing those justifications too.

which answers my final paragraph in the parent comment, and suggests that we are not too far apart.

EJT · 1y
Yes, I think so! And thanks again for making this point (and to weeatquince as well). I've written a new paragraph emphasising a more reasonable, less conservative estimate of benefit-cost ratios. I expect it'll probably go in the final draft, and I'll edit the post here to include it as well (just waiting on Carl's approval). I think this is right (and I must admit that I don't know that much about the mechanics and success-rates of international agreements) but one cause for optimism here is Cass Sunstein's view about why the Montreal Protocol was such a success (see Chapter 2): cost-benefit analysis suggested that it would be in the US's interest to implement unilaterally and that the benefit-cost ratio would be even more favourable if other countries signed on as well. In that respect, the Montreal Protocol seems akin to prospective international agreements to share the cost of GCR-reducing interventions.

>we chose ‘unacceptable’ because we also think there would be something normatively problematic about it.

I'm not so sure about that. I agree with you that it would be normatively problematic in the paradigm case of a policy that imposed extreme costs on current society for a very slight reduction in total existential risk — let's say, reducing incomes by 50% in order to lower risk by 1 part in 1 million.

But I don't know that it is true in general.

First, consider a policy that was inefficient but small — e.g. one that cost $10 million to the US govt, but reduc... (read more)

EJT · 1y
I wouldn't call a small policy like that 'democratically unacceptable' either. I guess the key thing is whether a policy goes significantly beyond citizens' willingness to pay not only by a large factor but also by a large absolute value. It seems likely to be the latter kinds of policies that couldn't be adopted and maintained by a democratic government, in which case it's those policies that qualify as democratically unacceptable on our definition.

Thanks Elliott,

I guess this shows that the case won't get through with the conservative rounding off that you applied here, so future developments of this CBA would want to go straight for the more precise approximations in order to secure a higher evaluation.

Re the possibility of international agreements, I agree that they can make it easier to meet various CBA thresholds, but I also note that they are notoriously hard to achieve, even when in the interests of both parties. That doesn't mean that we shouldn't try, but if the CBA case relies on them then t... (read more)

Ah, that's what I meant by the value your candidate would bring. There isn't any kind of neutral outcome to compare them against, so I thought it clear that it meant in comparison to the other candidate. Evidently not so clear!

I should note that I don’t see a stronger focus on character as the only thing we should be doing to improve effective altruism! Indeed, I don’t even think it is the most important improvement. There have been many other suggestions for improving institutions, governance, funding, and culture in EA that I’m really excited about. I focused on character (and decision procedures) in my talk because it was a topic I hadn’t seen much in online discussions about what to improve, because I have some distinctive expertise to impart, and because it is something that everyone in EA can work on.

This feels like a missed opportunity.

My sense is that this was an opportunity to give a "big picture view" rather than note a particular underrated aspect. 

If you think there were more important improvements, why not say them, at least as context, in one of the largest forums on this topic?

Thanks for your work :)

I value the “it is something that everyone in EA can work on“-sentiment.

Particularly in these times, I think it is excellent to find things that (1) seem robustly good and (2) we can broadly agree on as a community to do more of. It can help alleviate feelings of powerlessness (and help with this is, I believe, one of the things we need).

This seems to be one of those things. Thanks!

[comment deleted] · 1y
NewLeaf · 1y
I'm really grateful that you gave this address, especially with the addition of this comment. Would you be willing to say more about which other suggestions for improvement you would be excited to see adopted in light of the FTX collapse and other recent events? For the reasons I gave here, I think it would be valuable for leaders in the EA community to be talking much more concretely about opportunities to reduce the risk that future efforts inspired by EA ideas might cause unintended harm.

I think you have put your finger on a key aspect with the coldness requirement. 

When ice cream is melted or coke is lukewarm, they both taste far too sweet. I've long had a hypothesis that we evolved some kind of rejection of foods that taste too sweet (at least in large quantity) and that by cooling them down, they taste less sweet (overcoming that rejection mechanism) but we still get increased reward when the sugar content enters our bloodstream. I feel that carbonation is similar (flat coke tastes too sweet), so that the cold and carbonation could... (read more)

Elityre · 4mo
Why might humans evolve a rejection of things that taste too sweet? What fitness-reducing thing does "eating oversweet things" correlate with? Or is it a spandrel of something else?
Elityre · 7mo
If this is true, it's fascinating, because it suggests that our preferences for cold and carbonation are a kind of specification gaming!
MichaelStJules · 1y
It could just be attention. If something would otherwise be too sweet, but some other part of it is salient (coldness, carbonation, bitterness, saltiness), those other parts will take some of your attention away from its sweetness, and it'll seem less sweet.

Overshooting:

>Second, the argument overshoots. Given other plausible claims, building policy on this premise would not only lead governments to increase their efforts to prevent catastrophes. It would also lead them to impose extreme costs on the present generation for the sake of miniscule reductions in the risk of existential catastrophe.

I disagree with this. 

First, I think that many moral views are compelled to find the possibility that their generation permanently eradicates all humans from the world to be especially bad and worthy of much extra ef... (read more)

EJT · 1y
The argument we mean to refer to here is the one that we call the ‘best-known argument’ elsewhere: the one that says that the non-existence of future generations would be an overwhelming moral loss because the expected future population is enormous, the lives of future people are good in expectation, and it is better if the future contains more good lives. We think that this argument is liable to overshoot.

I agree that there are other compelling longtermist arguments that don’t overshoot. But my concern is that governments can’t use these arguments to guide their catastrophe policy. That’s because these arguments don’t give governments much guidance in deciding where to set the bar for funding catastrophe-preventing interventions. They don’t answer the question, ‘By how much does an intervention need to reduce risks per $1 billion of cost in order to be worth funding?’.

This seems like a good target to me, although note that $400b is our estimate for how much it would cost to fund our suite of interventions for a decade, rather than for a year.