David Johnston

Longtermist slogans that need to be retired

"What matters most about our actions is their very long term effects."

I think my takeaway from this slogan is: given limited evaluation capacity and some actions under consideration, a substantial proportion of this capacity should be devoted to thinking about long term effects.

It could be false: maybe it's easy to conclude that nothing important can be known about the long term effects. However, I don't think this has been demonstrated yet.

Replicating and extending the grabby aliens model

I haven't fully grokked this work yet, but I really appreciate the level of detail you've explained it in.

Replicating and extending the grabby aliens model

It seems plausible a significant fraction of ICs will choose to become GCs. Since matter and energy are likely to be instrumentally useful to most ICs, expanding to control as much volume as they can (thus becoming a GC) is likely to be desirable to many ICs with diverse aims.

Also, if an IC is a mixture of grabby and non-grabby elements, it will become a GC essentially immediately.

Milan Griffes on EA blindspots

Now I wish there were numbers in the OP to make referencing easier

Edit: thanks

[Cross-post] A nuclear war forecast is not a coin flip

You've gotten me interested in looking at total extinction risk as a follow-up. Are you interested in working together on it?

On expected utility, part 2: Why it can be OK to predictably lose

From the title, I thought this was going to be a defense of being money pumped!

[Cross-post] A nuclear war forecast is not a coin flip

Do you know of work on this off the top of your head? I know Ord has his estimate of 6% extinction in the next 100 years, but I don't know of attempts to extrapolate this or other estimates.

I think for long timescales, we wouldn't want to use an exchangeable model, because the "underlying risk" isn't stationary.
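For what it's worth, here's a toy numerical sketch (illustrative rates, not anyone's actual estimates) of why the extrapolation depends on the model: under an exchangeable model, long-horizon survival is a mixture over possible underlying annual rates, and this can differ by orders of magnitude from a "coin flip" calculation that plugs a single point estimate into a binomial model.

```python
years = 1000

# two equally likely hypotheses about the underlying annual probability of war
low_rate, high_rate = 0.001, 0.02

# exchangeable model: average the survival probability over the hypotheses
mixture_survival = 0.5 * (1 - low_rate) ** years + 0.5 * (1 - high_rate) ** years

# "coin flip" extrapolation: plug the mean annual rate into a binomial model
mean_rate = 0.5 * low_rate + 0.5 * high_rate
coinflip_survival = (1 - mean_rate) ** years

print(f"mixture: {mixture_survival:.3f}, coin flip: {coinflip_survival:.2e}")
```

The mixture is dominated by the low-rate hypothesis, so it gives a survival probability thousands of times higher than the point-estimate calculation.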

[Cross-post] A nuclear war forecast is not a coin flip

- If you think there's an exchangeable model underlying someone else's long-run prediction, I'm not sure of a good way to try to figure it out. Off the top of my head, you could do something like this:

```python
import numpyro
import numpyro.distributions as dist

def model(a, b, conc_expert, expert_forecast):
    # forecasted distribution over the annual probability of nuclear war
    prior_rate = numpyro.sample('rate', dist.Beta(a, b))

    # simulate 1,000 years of war/no-war outcomes at that rate
    with numpyro.plate('w', 1000):
        war = numpyro.sample('war', dist.Bernoulli(prior_rate),
                             infer={'enumerate': 'parallel'})

    # fraction of the ten simulated centuries containing at least one war
    anywars = (war.reshape(10, 100).sum(1) >= 1).mean()

    # treat the expert's 100-year forecast as a noisy observation of that fraction
    expert_prediction = numpyro.sample(
        'expert',
        dist.Beta(conc_expert * anywars, conc_expert * (1 - anywars)),
        obs=expert_forecast)
```

  This is saying that the expert is giving you a noisy estimate of the 100-year rate of war occurrence, and then treating their estimate as an observation. I don't really know how to think about how much noise to attribute to their estimate, and I wonder if there's a better way to incorporate it. The noise level is given by the parameter `conc_expert`; see here for an explanation of the "concentration" parameter in the beta distribution.

- I don't know! I think in general, if it's an estimate for (say) 100-year risk with <= 100 years of data (or evidence that is equivalently good), then you should at least be wary of this pitfall. If there's >>100 years of data and it's a 100-year risk forecast, then the binomial calculation is pretty good.
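On the concentration parameter: a quick sketch of how `conc_expert` maps to a noise level, assuming the mean/concentration parameterisation Beta(c*m, c*(1-m)) used above (the helper function here is mine, for illustration):

```python
import math

def beta_sd(mean, conc):
    # Beta(conc*mean, conc*(1-mean)) has mean `mean` and
    # variance mean*(1-mean)/(conc+1)
    return math.sqrt(mean * (1 - mean) / (conc + 1))

# e.g. treating a 6%-per-century forecast as more or less noisy
for conc in (10, 100, 1000):
    print(f"conc_expert={conc}: sd ~= {beta_sd(0.06, conc):.4f}")
```

So a larger `conc_expert` amounts to trusting the expert's number more tightly.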

[Cross-post] A nuclear war forecast is not a coin flip

Nice explanation, thanks

How about this:

A) Take top N interventions ranked by putting all effort into far future effects

B) Take top N interventions ranked by putting more effort into near than far future effects

(you can use whatever method you like to prioritise the interventions you investigate). Then for most measures of value, group (A) will have much higher expected value than group (B). Hence "most of the expected value is in the far future".