
arvomm

278 karma · Joined Oct 2021 · arvomm.com

Bio

I am a researcher on Rethink Priorities' Worldview Investigations Team. I also do work for Oxford's Global Priorities Institute. Previously, I was a research analyst at the Forethought Foundation for Global Priorities Research, a role I took after completing the MPhil in Economics at Oxford University. Before that, I studied Mathematics and Philosophy at the University of St Andrews.

Find out more about me here.

Posts (2)

Comments (15)

Thank you for adding various threads to the conversation, Arepo! I don't disagree with what I take to be your main point: benign AI and interstellar travel are likely to have a big impact. I will say, though, that while their success might significantly reduce risk, and for a long time, any given intervention is unlikely to make major progress towards them. Hence, at the intervention level, I'm tempted to remain sceptical about the abundance of interventions that dramatically reduce risk for a long time.

Thank you for deeply engaging with our work and for laying out your thoughts on what you think are the most promising paths forward, like searching for contingent and persistent interventions, applying a medium-term lens to global health and animal welfare, or investigating fanaticism. I thought your post was well-written, to the point and enjoyable.

Hi Siebe, yes, all the scenarios in this report assume positive value at all times. I don’t think it’s certain that this will be the case, which is why the concluding remarks mention “investigating value trajectories that feature negative value” as a possible extension. So, yes, I completely agree this is something to look into in more depth.

It's good to hear that you agree 'extinction' is the better term in this framework, though I think it still makes sense to use the more general 'existential' term in parts of the exposition. In particular, for entirely pedagogical reasons, I decided to keep the original terminology in the summary, since readers who are already familiar with the original models might skim this post or miss that endnote, and the definition of risk hasn't changed. I see this report, and the footnote, as asking researchers that, from here on, we use 'extinction' when the maths is set up as it is here. All that said, I've indeed noticed instances after the summary where conceptual accuracy would be improved by making that swap. Thank you again; I'll keep a closer eye on this, especially in future revised versions of the full report.

Good to hear from you, Michael! Some thoughts:

  • You're right that the Tarsney paper was an important driver in bringing cubic growth to this framework. That's why it's a key source in the value cases summary. Modelling uncertainty is an excellent next step for various scenarios.
  • Thanks very much for the link to David's response. I hadn't seen that! 
  • Good to have the link to Carl's thread; it'll be valuable to run these models and get some visualisations with that 1-in-a-million estimate too! (A rough back-of-the-envelope version is sketched below.)
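
To give a flavour of what that estimate implies, here is a back-of-the-envelope sketch of my own (not the report's code) that treats the 1-in-a-million figure as a constant per-century extinction risk purely for illustration:

```python
# Back-of-the-envelope sketch (not the report's model): treat the 1-in-a-million
# figure as a constant per-century extinction risk, purely for illustration.
r = 1e-6                                  # hypothetical per-century risk

expected_centuries = 1 / r                # mean of a geometric distribution
p_survive_1000 = (1 - r) ** 1000          # chance of lasting 1,000 centuries

print(f"Expected centuries survived: {expected_centuries:,.0f}")
print(f"P(survive 1,000 centuries): {p_survive_1000:.6f}")
```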

It's true that there are other scenarios that would recover infinite value. And the proof fails, as mentioned in the convergence section, under certain changes to the assumptions, for instance when the logistic cap tends to infinity and we end up in the exponential case.

All that said, it is plausible that the universe has a finite lifespan after all, which would provide that finite upper bound. Heat death, proton decay or even just the amount of accessible matter could provide physical limits. It'd be great to see more discussion of this, informed by updated astrophysical theories.
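
For intuition, here is a stylised version of the convergence point under simplifying assumptions of my own (a constant per-period risk r and per-period value v_t), which is not the report's exact setup:

$$\mathbb{E}[V] \;=\; \sum_{t=0}^{\infty} (1-r)^t \, v_t \;\le\; \frac{\bar v}{r} \;<\; \infty \quad \text{whenever } v_t \le \bar v \text{ (a logistic-style cap),}$$

$$\sum_{t=0}^{\infty} (1-r)^t \, g^t \;=\; \infty \quad \text{when } g\,(1-r) \ge 1 \text{ (the exponential case).}$$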

Thank you for all the comments, JWS, I found your excitement contagious.

Some thoughts on your thoughts:

  • I couldn't agree more that there'd be a lot of value from laying out parameter configurations. We have some more work coming out as part of this sequence that aims to help fill this gap!
  • I think it'd be great to see some survey data on what the commonly assumed risk patterns and value trajectories are in the EA community. I've made a push from my little corner to hopefully get some data on common views. Whatever they turn out to be, you're right to point out the immense differences in what they could imply.
  • I'm really happy that you found the notebook useful. I'll make sure to update the GitHub with any new features and code discussions.

Thank you very much, Dan, for your comments, for looking into the ins and outs of the work, and for highlighting various threads that could improve it.

You brought up two quite separate issues here: first, infinite value, which can be recovered with new scenarios; and second, the specific parameter defaults used. The parameters the report uses could be reasonable, but they might also seem over-optimistic or over-pessimistic, depending on your background views.

I totally agree that we should not anchor on any particular set of parameters, including the default ones. I think this is a good opportunity to emphasise one of the limitations in the concluding remarks: that "we should be especially cautious about over-updating from specific quantitative conclusions". As you hinted, one important reason for this is that the chosen parameters do not have enough data behind them and are not free of puzzles.

Some thoughts sparked by the comments in this thread:

  • You're totally right to point out that the longer we survive in expectation, the longer the simulation needs to be run for us to observe convergence.
  • I agree that risk is unlikely to be time-invariant for long eras, and I'm really excited about bringing in more realistic structures, like the one you suggest: an enriched Time of Perils with decaying risk. I'm hoping WIT or other interested researchers do more to spell out what these structures imply about the value of risk mitigation.
  • On the flip side of the default r_low seeming too high: viewed from the start of a century, it implies a correspondingly high probability of surviving that century.
  • A tiny r_low might be more realistic, though I confess I lack strong intuitions either way about how risk will behave in the coming centuries, let alone millennia. In my mind, risk could decay or increase, and I do hope the patterns so far, for example over these last 500 years, are nothing to go by.
  • Your point about conditional probabilities is a good way to introduce and think about thought experiments on risk profiles. It made me think that a civilisation like the one you describe, surviving successive hurdles, could be modelled under Great Filters: using an r_low orders of magnitude smaller than the current default, you'd get something that fits the picture you suggest much better, even without introducing modifications like decaying risk. Let me know if you play around with the code to visualise this; a rough starting point is sketched below.
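
Here is a minimal sketch of that comparison, assuming made-up hazard levels and a made-up decay rate (these are not the report's defaults, and this is not the sequence's notebook code):

```python
import numpy as np

# Illustrative only: hazard levels and the decay rate are made up, not the
# report's defaults. Compares a constant low risk with a decaying "perils" risk.
centuries = 10_000

def survival_curve(risk_per_century):
    """Cumulative probability of surviving up to each century."""
    return np.cumprod(1 - np.asarray(risk_per_century))

constant = np.full(centuries, 1e-4)            # hypothetical constant r_low
decaying = 1e-2 * 0.5 ** np.arange(centuries)  # perils that halve each century

print("P(survive 1,000 centuries), constant r_low:", survival_curve(constant)[999])
print("P(survive 1,000 centuries), decaying risk: ", survival_curve(decaying)[999])
```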

Thank you very much, Roman!

  • I used Blender to model and render the 3D spheres, and Photoshop for the text.
  • Discrete time was inherited from the previous framework (OAT). It can be simpler, but continuous time is sometimes more tractable and better suited to models emphasising other features. For example, when modelling economic growth directly, when thinking about utility, or when we want a hazard rate that is micro-founded on some risk mechanism, those models would generally be better expressed in continuous time. This recent paper is a good example of the typical setups economics papers use in continuous time. (See the small sketch below for how the two framings line up.)
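
As a small illustration of that correspondence (my own sketch, not from the report): a constant discrete per-period risk r matches a constant continuous-time hazard rate h = -ln(1 - r), since the two survivor functions then agree at integer times.

```python
import numpy as np

# Discrete/continuous correspondence (illustrative sketch, not the report's code):
# a constant per-period risk r matches a continuous hazard rate h = -ln(1 - r).
r = 0.001                     # hypothetical discrete per-period risk
h = -np.log(1 - r)            # equivalent constant continuous-time hazard rate
t = np.arange(0, 11)

discrete = (1 - r) ** t       # survivor function in discrete time
continuous = np.exp(-h * t)   # survivor function in continuous time
print(np.allclose(discrete, continuous))   # True: they coincide at integer times
```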

Thank you very much for your words, Vasco! And thank you for catching those formatting typos; I've corrected them now.

In order:

  1. Two underscores seem to have got lost in translation to markdown! They should be there now.
  2. You're right to point out that, in this context, the approximation holds only roughly and isn't exact. I was using it for the exposition but should have made that clearer, especially in the code. I've made minor corrections to reflect this.
  3. I'll also improve the phrasing to make the sentence you mentioned clearer.

Thanks again!
