Other perspectives that are arguably missing or extensions that can be done are:
In July, we published the following research posts:
Thanks for writing the post! I think we need a lot more strategy research, cause prioritization being one of the most important types, and that is why we founded Convergence Analysis (theory of change and strategy, our site, and our publications). Within our focus of x-risk reduction we do cause prioritization, describe how to do strategy research, and have been working to fill the EA information hazard policy gap. We are mostly focused on strategy research as a whole which lays the groundwork for cause prioritization. Here are some of our articles:
...Nice post!
Here are a couple of additional posts by Gwern that I think are worth checking out:
https://www.lesswrong.com/posts/ktr39MFWpTqmzuKxQ/notes-on-psychopathy
https://www.lesswrong.com/posts/Ft2Cm9tWtcLNFLrMw/notes-on-the-psychology-of-power
Following Sean here, I'll also describe my motivation for taking the bet.
After Sean suggested the bet, I felt as if I had to take him up on it for group epistemic benefit; my hand was forced. Firstly, I wanted people to take nCOV seriously and to think it through thoroughly (both for the present case and for modelling possible future pandemics) - from an inside-view model perspective, the numbers I was getting were quite worrisome. I felt that if I didn't take him up on the bet, people wouldn't take the issue as seriously, nor take explicitl...
Nice find! Hopefully it updates soon as we learn more. What is your interpretation of it in terms of mortality rate in each age bracket?
Strong kudos for betting. Your estimates seem quite off to me but I really admire you putting them to the test. I hope, for the sake of the world, that you are wrong.
Hmm... I will take you up on a bet at those odds and with those resolution criteria. Let's make it 50 GBP of mine vs 250 GBP of yours. Agreed?
I hope you win the bet!
(note: I generally think it is good for the group epistemic process for people to take bets on their beliefs but am not entirely certain about that.)
Agreed, thank you Justin. (I also hope I win the bet, and not for the money - while it is good to consider the possibility of the most severe plausible outcomes rigorously and soberly, it would be terrible if it came about in reality.) Bet resolves 28 January 2021. (Though if it's within an order of magnitude of the win criterion, and there is uncertainty re: fatalities, I'm happy to reserve final decision for 2 further years until rigorous analysis is done - e.g. see the swine flu epidemiology studies, which updated fatalities upwards significantly seve...
Good points! I agree, but I'm not sure how significant those effects will be. Do you have an idea of how we'd update on those effects in a principled, precise way?
Updating the Fermi calculation somewhat:
Hmm. interesting. This goes strongly against my intuitions. In case of interest I'd be happy to give you 5:1 odds that this Fermi estimate is at least an order of magnitude too severe (for a small stake of up to £500 on my end, £100 on yours). Resolved in your favour if 1 year from now the fatalities are >1/670 (or 11.6M based on current world population); in my favour if <1/670.
(Happy to discuss/modify/clarify terms of above.)
Edit: We have since amended the terms to 10:1 (50GBP of Justin's to 500GBP of mine).
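For concreteness, here is a quick sketch of the arithmetic behind these terms (the ~7.8B world population figure is my assumption, not something stated in the thread):

```python
# Sketch of the bet arithmetic; the ~7.8B world population is an assumption.
world_pop = 7.8e9

# Win criterion: fatalities > 1/670 of the world population.
threshold_deaths = world_pop / 670
print(round(threshold_deaths / 1e6, 1))  # ~11.6 million, matching the figure above

# At the amended 10:1 stakes (50 GBP vs 500 GBP), the 50-GBP side breaks even
# if it assigns probability > 50/550 to the win criterion.
breakeven = 50 / (50 + 500)
print(round(breakeven, 3))  # ~0.091
```

So the amended odds correspond to roughly a 9% break-even probability on the more-severe-outcome side.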
Nice list!
Adding to it a little:
The exponential growth curve and incubation period also have implications about "bugging out" strategies where you get food and water, isolate, and wait for it to be over. Let's estimate again:
Assuming, as in the above comment, that we are 1/3 of the way up the exponential climb (in reported numbers) towards the total world population, and that this took a month, then in two more months (by the end of March) we would expect it to reach saturation. If the infectious incubation period is 2 weeks (and people are essentially uniformly infectious during that time) then you'...
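The saturation timeline above can be sketched as follows (the inputs are the comment's assumptions, not data: cases after one month sit 1/3 of the way up the log-scale climb to world population):

```python
import math

# Sketch of the saturation estimate. Assumes a constant doubling time and that
# reported cases after one month are ~1/3 of the way up the log-scale climb to
# world population (both are the comment's assumptions, not data).
world_pop = 7.8e9
cases_after_one_month = world_pop ** (1 / 3)  # ~2,000 cases

fraction_of_log_climb = math.log(cases_after_one_month) / math.log(world_pop)
months_to_saturation = 1 / fraction_of_log_climb
print(round(months_to_saturation, 2))  # 3.0 months total, i.e. ~2 more months
```

With constant exponential growth, covering 1/3 of the log distance in one month implies saturation in three months total.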
I base it on what Greg mentions in his reply about swine flu, and also on the reasoning that the reproduction number has to go below 1 for the spread to stop. If the normal reproduction number, before people have become immune (after being sick), is X (say 2), then to get the effective reproduction number below 1 we need (susceptible population proportion) * (normal reproduction number) < 1. So with a reproduction number of 2, the proportion who get infected will be 1 - 1/2 = 1/2.
This assumes that people have time to become immune so for a fast spreading virus more than that ...
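The threshold reasoning above can be sketched as (the R0 values are illustrative, not estimates for nCOV):

```python
# Herd-immunity threshold sketch: spread stops once the effective reproduction
# number (susceptible fraction * R0) drops below 1, i.e. once susceptibles
# fall below 1/R0. The R0 values below are illustrative, not nCOV estimates.
def infected_fraction_at_threshold(r0: float) -> float:
    """Fraction infected by the time the susceptible fraction reaches 1/R0."""
    return 1.0 - 1.0 / r0

for r0 in (1.5, 2.0, 3.0):
    print(f"R0={r0}: ~{infected_fraction_at_threshold(r0):.0%} infected")
```

(As the comment notes, a fast-spreading virus can overshoot this threshold, since infections in progress keep transmitting while immunity builds.)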
It's based on a few facts and swirling them around in my intuition to choose a single simple number.
A long, invisible, contagious incubation period (which seems somewhat indicated, but may be wrong) and a high degree of contagiousness (the R0 factor) imply it is hard to contain and should spread through the network (looking something like probability spreading in a Markov chain, with transition probabilities roughly following transportation probabilities).
The exponential growth implies that we are only a few doublings away from a world-scale pandemic (also note we'...
I wonder what sort of Fermi calculation we should apply to this? My quick (quite possibly wrong) numbers are:
=> P(death of a randomly selected person from it) = ~1/300
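One hypothetical way such a number could decompose (these component values are my illustration; the original inputs are not shown in the thread):

```python
# Hypothetical decomposition of P(death of a randomly selected person).
# All three inputs are illustrative assumptions, not the original numbers.
p_escapes_containment = 0.3    # assumed chance it becomes a global pandemic
p_infected_given_global = 0.5  # assumed attack rate (herd-immunity level, R0 ~ 2)
p_death_given_infected = 0.02  # assumed case fatality rate
p_death = p_escapes_containment * p_infected_given_global * p_death_given_infected
print(f"~1/{round(1 / p_death)}")  # ~1/333, in the ballpark of 1/300
```

The point of the sketch is that a handful of plausible-looking factors multiply out to roughly this order of magnitude.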
What are your thoughts?
Updating the Fermi calculation somewhat:
How confident are you that it mainly affects older people or those with preexisting health conditions? Are the stats solid yet? I vaguely recall that SARS and MERS (possibly the relevant reference class) were age-agnostic.
By total mortality rate, do you mean the total number of people who eventually die, or do you mean a percentage?
If the former I agree.
If you mean the latter... I see it as a toss-up between the selection effect of the more severely affected being the ones we know have it (which would make the true mortality rate lower than the published numbers) and the time needed for the disease to fully progress (which would make the true mortality rate higher than the published numbers).
Thanks for the article. One thing I'm wondering about, with implications for the large-scale pandemic case, is how much equipment for "mechanical ventilation and sometimes ECMO (pumping blood through an artificial lung for oxygenation)" society has, and what the consequences of not having access to such equipment would be. Would such people die? In that case the fatality rate would grow massively, to something like 25 to 32%.
Whether there is enough equipment would depend upon how many get sick at once, can more than one person use the same...
It's true that this is pretty abstract (as abstract as fundamental epistemology posts), but because of that I'd expect it to be a relevant perspective for most strategies one might build, whether for AI safety, global governance, poverty reduction, or climate change. It's lacking the examples and explicit connections, though, that would make this salient. A future post I've got queued on AI safety strategy already links to this one, and in general abstract articles like this provide a nice base to build from toward specifics. I'll definitely think about, and possibly experiment with, putting the more abstract and conceptual posts on LessWrong.
Yes, the model in itself doesn't say that we'll tend towards competitiveness. That comes from the definition of competitiveness I'm using here, which is similar to Robin Hanson's suggestion. "Competitiveness" as used here just refers to the statistical tendency of systems to evolve in certain ways - it's similar to the statement that entropy tends to increase. Some of those ways are aligned with our values and others are not. In making the axes orthogonal I was using the (probably true) assumption that most ways a system can evolve are not in alignment with our values.
(With the reply I was trying to point in the direction of this increasing-entropy-like definition.)
The reason we'd expect it to maximize competitiveness is in this sense: what spreads spreads, what lives lives, what is able to grow grows, what is stable stays stable... and not all of this is aligned with humanity's ultimate values; the methods that sometimes maximize competitiveness (like not internalizing external costs, wiping out competitors, or all work and no play) much of the time don't maximize achieving our values. What is competitive in this sense is, however, dependent on the circumstances, and hopefully we can align it better. I hope this clarifies.
I agree with your thoughts. Competitiveness isn't necessarily fully orthogonal to common-good pressures, but there is generally a large component that is, especially in tough cases.
If they are not orthogonal then they may reach some sort of equilibrium that maximizes competitiveness without driving the common good to zero. However, in a higher-dimensional version of this it becomes more likely that they are mostly orthogonal (a priori, more things are orthogonal in higher-dimensional spaces), and if what is competitive can sorta change with time walk...
Nice! I would argue, though, that because we generally do not consider all dimensions at once, and because not all game-theoretic situations ("games") lend themselves to this dimensional expansion, we may, for all practical purposes, sometimes find ourselves in this situation.
Overall though, the idea of expanding the dimensionality does point towards one way to remove this dynamic.
My argument is about the latter; the variances decrease in size from I to T to C. The unit analysis still works because the other parts are still implicitly there, just treated as constants when dropped from the framework.
Nice article Michael. Improvements to EA cause prioritization frameworks can be quite beneficial and I'd like to see more articles like this.
One thing I focus on when trying to make ITC more practical is ways to reduce its complexity even further. I do this by looking for which factors intuitively seem to have wider ranges in practice. Impact can vary by factors of millions or trillions, from harmful to helpful, from negative billions to positive billions. Tractability can vary by factors of millions, from negative millionths to positive digits. The C...
Nice succinct post.
A related reference you might like: https://www.lesswrong.com/posts/NjYdGP59Krhie4WBp/updating-utility-functions, which goes into getting the agent to care about what we want before it knows what we want.