All of JustinShovelain's Comments + Replies

Nice succinct post. 

A related reference you might like: https://www.lesswrong.com/posts/NjYdGP59Krhie4WBp/updating-utility-functions, which goes into getting the AI to care about what we want before it knows what we want.

2
Ward A
1y
Thanks! Agree that values have to be learned indirectly (as we do), but I'm either skeptical or confused about what could uncharitably be framed as "inserting a math equation into an AI". Making an idea technically precise, or specifying it with mathy symbols, does not guarantee that it can be used for training a neural network (which realistically is the only paradigm of AI we can work with, I think). We will probably have to find a balance between a precisely specified loss function we believe will indirectly lead to good values, and a loss function that we have a large amount of corresponding training data for. Feel free to ignore the following section; it's a hurried rambling before breakfast. ||One way of potentially getting extra power out of a limited set of data is to first 1) constrain the amount of information the network receives about its true loss function per unit of computation (i.e. an information bottleneck), maybe via something like training on a larger "batch size", and then 2) use something like temporal difference learning, so the network gets trained on a proxy (an estimator) for the true loss function that it refines over time. This would amount to having a fluid proximal (high-information) reward function and a fixed distal (low-information) reward function. The brain does something similar, and it predictably leads to mesa-optimisation if the influence from proximal rewards dominates the distal ones. The trick would then be to balance their respective learning rates so that the distal rewards always constrain the evolution of the network more than the estimator does. While the evolution of DNA resulted in a mesa-optimisation "catastrophe", we have the advantage that we get to monitor the network and intervene intelligently on the learning process in real time.||
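The two-timescale idea in the spoilered section can be sketched minimally. Everything here (the deterministic chain environment, the learning rates, the 10% observation probability) is an invented toy for illustration, not the commenter's actual proposal: a fast "proximal" value estimator is trained against a slow-moving proxy for a fixed "distal" reward, with the proxy's learning rate kept small so the distal signal constrains the system.

```python
import numpy as np

# Toy sketch: TD(0) values trained quickly on a proxy reward that itself
# tracks a fixed, sparsely observed "distal" reward at a slower rate.
# All specifics are assumptions made up for this sketch.
rng = np.random.default_rng(0)
n_states = 5
true_reward = np.zeros(n_states)
true_reward[-1] = 1.0              # fixed distal reward: only the goal state pays off

proxy_reward = np.zeros(n_states)  # fluid proximal estimate of the distal reward
values = np.zeros(n_states)

alpha_distal = 0.05    # slow: proxy tracks the distal signal
alpha_proximal = 0.5   # fast: values track the proxy
gamma = 0.9

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1  # deterministic chain for simplicity
        # distal signal observed only occasionally (information bottleneck)
        if rng.random() < 0.1:
            proxy_reward[s_next] += alpha_distal * (true_reward[s_next] - proxy_reward[s_next])
        # TD(0) update against the proxy reward
        td_target = proxy_reward[s_next] + gamma * values[s_next]
        values[s] += alpha_proximal * (td_target - values[s])
        s = s_next

print(values)  # values rise toward the goal state
```

The balance the comment describes corresponds to keeping `alpha_distal` small relative to `alpha_proximal`: the proxy can only drift as fast as the distal observations allow, so the fast learner stays anchored to the fixed reward.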

Other perspectives that are arguably missing or extensions that can be done are:

... (read more)

Update from Convergence Analysis

In July, we published the following research posts:

... (read more)

Thanks for writing the post! I think we need a lot more strategy research, with cause prioritization being one of the most important types; that is why we founded Convergence Analysis (theory of change and strategy, our site, and our publications). Within our focus on x-risk reduction we do cause prioritization, describe how to do strategy research, and have been working to fill the EA information hazard policy gap. We are mostly focused on strategy research as a whole, which lays the groundwork for cause prioritization. Here are some of our articles:

... (read more)
9
Ozzie Gooen
4y
I'll give a +1 for Convergence. I've known the team for a while and worked with Justin a few years back. It's a bit on the theoretical side of prioritization, but that sort of thinking often does lead to more immediate value. My impression is also that more funding could be quite useful to them, if anyone reading this is considering it.
7
David_Althaus
4y
Thank you! Agree, these posts are excellent. For what it's worth, I share Gwern's pessimistic conclusion about the treatment of psychopathy. Other Dark Tetrad traits—especially if they are less pronounced—might be more amenable to treatment though I'm not especially optimistic. However, even if effective treatment options existed, the problem remains that the most dangerous individuals are unlikely to ever be motivated to seek treatment (or be forced to do so).

Following Sean here I'll also describe my motivation for taking the bet.

After Sean suggested the bet, I felt as if I had to take him up on it for the group epistemic benefit; my hand was forced. Firstly, I wanted to get people to take nCoV seriously and to think thoroughly about it (for the present case and for modelling possible future pandemics); from an inside-view model perspective, the numbers I was getting are quite worrisome. I felt that if I didn't take him up on the bet, people wouldn't take the issue as seriously, nor take explicitl... (read more)

Nice find! Hopefully it updates soon as we learn more. What is your interpretation of it in terms of mortality rate in each age bracket?

Sure, I'll take the modification to option (i). Thanks Sean.

4
Sean_o_h
4y
10:1 on the original (1 order of magnitude) it is.

Strong kudos for betting. Your estimates seem quite off to me but I really admire you putting them to the test. I hope, for the sake of the world, that you are wrong.

Hmm... I will take you up on a bet at those odds and with those resolution criteria. Let's make it 50 GBP of mine vs 250 GBP of yours. Agreed?

I hope you win the bet!

(note: I generally think it is good for the group epistemic process for people to take bets on their beliefs but am not entirely certain about that.)

Agreed, thank you Justin. (I also hope I win the bet, and not for the money - while it is good to consider the possibility of the most severe plausible outcomes rigorously and soberly, it would be terrible if it came about in reality). Bet resolves 28 January 2021. (though if it's within an order of magnitude of the win criterion, and there is uncertainty re: fatalities, I'm happy to reserve final decision for 2 further years until rigorous analysis done - e.g. see swine flu epidemiology studies which updated fatalities upwards significantly seve... (read more)

Good points! I agree, but I'm not sure how significant those effects will be. Do you have an idea of how we'd update based on those effects in a principled, precise way?

2
SamuelKnoche
4y
It's difficult. You'd probably need a model of every country, since state capacity, health care, information access, etc. can vary widely.

Updating the Fermi calculation somewhat:

  • P(it goes world scale pandemic) = 1/3, no updates (the Metaculus estimate referenced in another comment counteracted my better first-principles estimation)
  • P(a particular person gets it | it goes world scale pandemic) = 1/2, updating based on the reproduction number of the virus
  • P(a particular person dies from it | a particular person gets it) = 0.09, updating based on a guess of 1/2 probability that rare equipment is needed and a random guess of 1/2 probability of fatality without it. 1/2*1/30 + 1/2*((Probability of pneumonia
... (read more)
2
avturchin
4y
It looks like it is hardly affecting children at all; a person of older age should give himself a higher estimate of being affected.
4
SamuelKnoche
4y
If the death rate is really that high, then we should significantly update P(it goes world scale pandemic) and P(a particular person gets it | it goes world scale pandemic) downwards as it would cause governments and individuals to put a lot of resources towards prevention. One can also imagine that P(a particular person dies from it | a particular person gets it) will go down with time as resources are spent on finding better treatment and a cure.

Hmm, interesting. This goes strongly against my intuitions. In case of interest, I'd be happy to give you 5:1 odds that this Fermi estimate is at least an order of magnitude too severe (for a small stake of up to £500 on my end, £100 on yours). Resolved in your favour if 1 year from now the fatalities are >1/670 (or 11.6M based on current world population); in my favour if <1/670.

(Happy to discuss/modify/clarify terms of above.)


Edit: We have since amended the terms to 10:1 (50GBP of Justin's to 500GBP of mine).
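As a sanity check, the resolution threshold above can be multiplied out. The world-population figure here is an approximate plug-in for early 2020, not a number from the original comments:

```python
# Checking the bet's resolution criterion: fatalities > 1/670 of the world.
world_population = 7.76e9   # assumed approximate world population, early 2020
threshold = world_population / 670
print(f"{threshold / 1e6:.1f} million fatalities")
```

This lands at roughly 11.6 million, matching the figure quoted in the bet terms.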

Nice list!

Adding to it a little:

  • Avoid being sick with two things at once or being sick with something else immediately before.
  • When it comes to supplements, the evidence and effect sizes are not that strong. Referencing examine.com and what I generally remember, I roughly think that the best immune-system-strengthening supplements would be zinc and echinacea, with maybe mild effects from other things like vitamin C, vitamin D, and whey protein. There may be a couple of additional herbs that could do something, but it's unclear they are safe to take for a lon
... (read more)

The exponential growth curve and incubation period also have implications about "bugging out" strategies where you get food and water, isolate, and wait for it to be over. Let's estimate again:

Assuming as in the above comment we are 1/3 of the exponential climb (in reported numbers) towards the total world population and it took a month, in two more months (the end of March) we would expect it to reach saturation. If the infectious incubation period is 2 weeks (and people are essentially uniformly infectious during that time) then you'... (read more)

3
Denkenberger
4y
Have you looked at how long pandemics have lasted in the past? I think it's a lot longer than five weeks.

I base it on what Greg mentions in his reply about the swine flu, and also the reasoning that the reproduction number has to go below 1 for it to stop spreading. If its normal reproduction number before people have become immune (after being sick) is X (say 2), then to get the reproduction number below 1 we need (susceptible population proportion) * (normal reproduction number) < 1. So with a reproduction number of 2, the proportion who get infected will be 1/2.

This assumes that people have time to become immune so for a fast spreading virus more than that ... (read more)
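The threshold argument in the comment above can be checked in a couple of lines. This is a sketch of the standard herd-immunity threshold implied by (susceptible fraction) * R0 < 1, not the commenter's exact model:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction that must become immune so that
    (susceptible fraction) * R0 < 1, i.e. 1 - 1/R0."""
    return 1.0 - 1.0 / r0

# With R0 = 2, spread stops once half the population is immune,
# matching the 1/2 figure in the comment.
print(herd_immunity_threshold(2.0))  # → 0.5
```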

4
mike_mclaren
4y
Just a note that the reproduction number can decrease for other reasons; in particular, if and as the disease spreads you might expect greater public awareness, CDC guidance, travel bans, etc., leading to greater precaution and less opportunity for infected individuals to infect others.

It's based on a few facts and swirling them around in my intuition to choose a single simple number.

A long invisible contagious incubation period (somewhat indicated, but maybe wrong) and a high degree of contagiousness (the R0 factor) imply it is hard to contain and should spread through the network (looking something like probability spreading in a Markov chain, with transition probabilities roughly following transportation probabilities).

The exponential growth implies that we are only a few doublings away from world scale pandemic (also note we'... (read more)

9
JustinShovelain
4y
The exponential growth curve and incubation period also have implications about "bugging out" strategies where you get food and water, isolate, and wait for it to be over. Let's estimate again:

Assuming as in the above comment we are 1/3 of the exponential climb (in reported numbers) towards the total world population and it took a month, in two more months (the end of March) we would expect it to reach saturation. If the infectious incubation period is 2 weeks (and people are essentially uniformly infectious during that time) then you'd move the two month date forward by two weeks (the middle of March). Assuming you don't want to take many risks here you might have a week buffer in front (the end of the first week of March). Finally, after symptoms arise people may be infectious for a couple weeks (I believe this is correct, anyone have better data?). So the sum total amount of time for the isolation strategy is about 5 weeks (and may start as early as the end of the first week of March or earlier depending on transportation and supply disruptions).

Governments by detecting cases early or restricting travel, and citizens by isolating and using better hygiene, could change these numbers and dates.

(note: for future biorisks that may be more severe this reasoning is also useful)
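The timeline arithmetic above can be laid out explicitly. The saturation date is the comment's rough estimate (taken here as the last day of March 2020), and all durations are its illustrative guesses, not predictions:

```python
from datetime import date, timedelta

# Rough sketch of the isolation window described in the comment above.
saturation = date(2020, 3, 31)                 # ~2 months out: "end of March"
incubation = timedelta(weeks=2)                # infectious incubation period
buffer = timedelta(weeks=1)                    # safety margin in front
post_symptom_infectious = timedelta(weeks=2)   # infectious after symptoms arise

start_isolation = saturation - incubation - buffer  # ~end of first week of March
window = buffer + incubation + post_symptom_infectious

print(start_isolation, window.days // 7, "weeks")
```

The pieces sum to the ~5-week isolation window the comment arrives at, starting around the end of the first week of March.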
2
Jess Kinchen Smith
4y
Thanks. I've updated towards your estimate but 1/3 still seems high by my (all too human) intuitions.

I wonder what sort of Fermi calculation we should apply to this? My quick (quite possibly wrong) numbers are:

  • P(it goes world scale pandemic) = 1/3, if I believe the exponential spreading math (hard to get my human intuition behind) and the long, symptom less, contagious incubation period
  • P(a particular person gets it | it goes world scale pandemic) = 1/3, estimating from similar events
  • P(a particular person dies from it | a particular person gets it) = 1/30, and this may be age or preexisting condition agnostic and could, speculatively, increase if vital equipment is too scarce (see other comment)

=> P(death of a randomly selected person from it) = ~1/300
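Multiplying the three estimates out (the ~1/300 above is this product, rounded):

```python
# The Fermi estimate above, multiplied out.
p_pandemic = 1 / 3    # P(it goes world scale pandemic)
p_infected = 1 / 3    # P(a particular person gets it | pandemic)
p_death = 1 / 30      # P(a particular person dies | gets it)

p = p_pandemic * p_infected * p_death
print(f"1/{1 / p:.0f}")  # → 1/270, i.e. roughly the ~1/300 quoted
```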

What are your thoughts?

4
Peter Wildeford
4y
What do you base this one on?
3
Jess Kinchen Smith
4y
How do you arrive at 1/3 here?

How confident are you that it affects mainly older people or those with preexisting health conditions? Are the stats solid now? I vaguely recall that SARS and MERS (possibly the relevant reference class), were age agnostic.

8
Howie_Lempel
4y
Here's a chart of odds of death by age that was tweeted by an epidemiology professor at Hopkins. I can't otherwise vouch for the reliability of the data, and caveat that mortality data sucks this early in an epidemic. https://twitter.com/JustinLessler/status/1222108497556279297
4
Sean_o_h
4y
MERS was pretty age-agnostic. SARS had much higher mortality rates in >60s. All the current reports from China claim that it affects mainly older people or those with preexisting health conditions. Coronavirus is a broad class including everything from the common cold to MERS; not sure there's good ground to anchor too closely to SARS or MERS as a reference class.

By total mortality rate, do you mean the total number of people who eventually die, or do you mean the percentage?

If the former, I agree.

If you mean the latter... I see it as a toss-up between the selection effect of the more severely affected being the ones we know have it (which would decrease the true mortality rate relative to the published numbers) and the time needed for the disease to fully progress (which would increase the true mortality rate relative to the published numbers).

Thanks for the article. One thing I'm wondering about, which has implications for the large-scale pandemic case: how much equipment for "mechanical ventilation and sometimes ECMO (pumping blood through an artificial lung for oxygenation)" does society have, and what are the consequences of not having access to such equipment? Would such people die? In that case the fatality rate would grow massively, to something like 25 to 32%.

Whether there is enough equipment would depend upon how many get sick at once, can more than one person use the same... (read more)

It's true that this is pretty abstract (as abstract as fundamental epistemology posts), but because of that I'd expect it to be a relevant perspective for most strategies one might build, whether for AI safety, global governance, poverty reduction, or climate change. It's lacking the examples and explicit connections though that make this salient. In a future post that I've got queued on AI safety strategy I already have a link to this one, and in general abstract articles like this provide a nice base to build from toward specifics. I'll definitely think about, and possibly experiment with, putting the more abstract and conceptual posts on LessWrong.

5
Aaron Gertler
4y
If you plan on future posts which will apply elements of this writing, that's a handy thing to note in the initial post!  You could also see what I'm advocating here as "write posts that bring the base and specifics together"; I think that will make material like this easier to understand for people who run across it when it first gets posted. If you're working on posts that rely on a collection of concepts/definitions, you could also consider using Shortform posts to lay out the "pieces" before you assemble them in a post. None of this is mandatory, of course; I just want to lay out what possibilities exist given the Forum's current features.

Yes, the model in itself doesn't say that we'll tend towards competitiveness. That comes from the definition of competitiveness I'm using here, which is similar to Robin Hanson's suggestion. "Competitiveness" as used here just refers to the statistical tendency of systems to evolve in certain ways; it's similar to the statement that entropy tends to increase. Some of those ways are aligned with our values and others are not. In making the axes orthogonal I was using the (probably true) assumption that most ways of system evolution are not in alignment with our values.

(With the reply I was trying to point in the direction of this increasing-entropy-like definition.)


The reason we'd expect it to maximize competitiveness is in the sense that: what spreads spreads, what lives lives, what is able to grow grows, what is stable is stable... and not all of this is aligned with humanity's ultimate values; the methods that sometimes maximize competitiveness (like not internalizing external costs, wiping out competitors, all work and no play) much of the time don't maximize achieving our values. What is competitive in this sense is, however, dependent on the circumstances, and hopefully we can align it better. I hope this clarifies things.

4
MichaelA
4y
I think I had the same thought as Ozzie, if I'm interpreting his comment correctly. My thought was that this all seems to make sense, but that, from the model itself, I expected the second last sentence to be something like: And then that'd seem to lead to a suggestion like "Therefore, if the world is at this Pareto frontier or expected to reach it, a key task altruists should work on may be figuring out ways to either expand the frontier or increase the chances that, upon reaching it, we skate towards what we value rather than towards competitiveness." That is, I don't see how the model itself indicates that, upon reaching the frontier, we'll necessarily move towards greater competitiveness, rather than towards humanity's values. Is that idea based on other considerations from outside of the model? E.g., that self-interest seems more common than altruism, or something like Robin Hanson's suggestion that evolutionary pressures will tend to favour maximum competitiveness (think I heard Hanson discuss that on a podcast, but here's a somewhat relevant post). (And I think your reply is mainly highlighting that, at the frontier, there'd be a tradeoff between competitiveness and humanity's values, right? Rather than giving a reason why the competitiveness option would necessarily be favoured when we do face that tradeoff?)

I agree with your thoughts. Competitiveness isn't necessarily fully orthogonal to common-good pressures, but there generally is a large component that is, especially in tough cases.

If they are not orthogonal, then they may reach some sort of equilibrium that maximizes competitiveness without decreasing the common good to zero. However, in a higher-dimensional version of this it becomes more likely that they are mostly orthogonal (a priori, more things are orthogonal in higher-dimensional spaces), and if what is competitive can sorta change with time walk... (read more)

Nice! I would argue, though, that because we generally do not consider all dimensions at once, and because not all game-theoretic situations ("games") lend themselves to this dimensional expansion, we may, for all practical purposes, sometimes find ourselves in this situation.

Overall though, the idea of expanding the dimensionality does point towards one way to remove this dynamic.

My argument is about the latter; the variances decrease in size from I to T to C. The unit analysis still works because the other parts are still implicitly there, but treated as constants when dropped from the framework.

1
Michael_Wiebe
4y
I guess I'm expecting diminishing returns to be an important factor in practice, so I wouldn't place much weight on an analysis that excludes crowdedness.

Nice article Michael. Improvements to EA cause prioritization frameworks can be quite beneficial and I'd like to see more articles like this.


One thing I focus on when trying to make ITC more practical is ways to reduce its complexity even further. I do this by looking for which factors intuitively seem to have wider ranges in practice. Impact can vary by factors of millions or trillions, from harmful to helpful, from negative billions to positive billions. Tractability can vary by factors of millions, from negative millionths to positive digits. The C... (read more)

1
Michael_Wiebe
4y
Hi Justin, thanks for the comment. I'm in favor of reducing the complexity of the framework, but I'm not sure if this is the right way to do it. In particular, estimating "importance only" or "importance and tractability only" isn't helpful, because all three factors are necessary for calculating MU/$. A cause that scores high on I and T could be low MU/$ overall, due to being highly crowded. Or is your argument that the variance (across causes) in crowdedness is negligible, and therefore we don't need to account for diminishing returns in practice?