All of Richard Bruns's Comments + Replies

AGI Ruin: A List of Lethalities

Given that technical AI alignment is impossible, we should focus on political solutions, even though they seem impractical. Running any sufficiently powerful computer system should be treated the same as launching a nuclear weapon. Major military powers can, and should, coordinate to refrain from doing this themselves and to destroy any private actor who attempts it.

This may seem like an unworkable fantasy now, but if takeoff is slow, there will be a 'Thalidomide moment' when an unaligned but not superintelligent AI does something very bad and scary but is ultimately stopped. We should be ready to capitalize on that moment and ride the public wave of technophobia to put sensible 'AI arms control' policies in place.

RobBensinger (reply):
Technical AI alignment isn't impossible; we just don't currently know how to do it. (And it looks hard.)
Valuing Leisure Time

Also, in my preferred specification, I do not assume that average and marginal values are the same. An average value of $70 per hour (relative to nonexistence) is perfectly compatible with the marginal value of the last hour of leisure (relative to working) being equal to take-home pay. Assuming equality was just an extreme assumption used to set a lower bound.
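As a purely illustrative example (these numbers are hypothetical, chosen only to show the arithmetic): suppose the first 12 hours of a day's non-work time are worth $85 each (sleep, meals, basic recovery) and the last 4 hours of discretionary leisure are worth $25 each. The average value is then (12 × $85 + 4 × $25) / 16 = $70 per hour, while the marginal hour of leisure is worth only $25, roughly take-home pay for many workers.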

Valuing Leisure Time

I sense that there is some kind of deep confusion or miscommunication here that may take a while to resolve. Have you read the Life-Valuation Variability post? In it, I explain why “The Value of a Statistical Life in a country” should be understood very narrowly and specifically as “The exchange rate between lives and money when taking money away from, or giving money to, people in that country”. 

This post is not meant to tell individuals how to live their lives. There is a huge variation in individual preferences for leisure vs buying nice things. Ho... (read more)

Valuing Leisure Time

Analytical EA types often tie themselves into knots trying to make a Grand Unified Theory to base all decisions on. This does not and will not work. All models are wrong, but some models are useful. You can, and should, use different heuristics in different situations. I am not trying to program an AI that I put in charge of the world. I am merely justifying treating all people's time the same for the purpose of EA cause prioritization with donor money. 

Clearly it would break the economy to base all government policy on the assumption that consumption... (read more)

Welfare Estimation 101: Money is not Fungible

Questions in order:

  1. I never meant to make a statement that a year is better than other time units. I said year because it is the existing standard in the field. The statement was about using a life/health measurement rather than money. As the 102 post hints at, my goal is not to create 'the best' system ex nihilo; it is to build off of the precedent set in the field. So whenever an arbitrary choice has already become the standard, and it is not obviously worse than something else, I stick with it.
  2. This will inevitably be handwavey, fuzzy, and based on s
... (read more)
Welfare Estimation 101: Money is not Fungible

I agree with this; thank you for replying. (I thought I would get email alerts if anyone commented, but I guess I didn't set that up right.)

EA should wargame Coronavirus

In a sense, EA is already doing this. The Johns Hopkins Center for Health Security is heavily funded by OpenPhil, and for the past month we have been going basically full-time on this:

http://www.centerforhealthsecurity.org/resources/COVID-19/index.html

However, having a private forum where people can openly communicate things that might get distorted if quoted out of context is very useful. I've joined the group and am available to answer any questions there.

EA should wargame Coronavirus

And if you want to participate in a prediction market, we have one running:

http://www.centerforhealthsecurity.org/our-work/disease-prediction

There are three questions on the coronavirus.