Michael_Wiebe

Comments

Results of a survey of international development professors on EA

Nice work! Sounds like movement building is very important.

Longtermist slogans that need to be retired

Do you disagree with FTX funding lead elimination instead of marginal x-risk interventions?

Longtermist slogans that need to be retired

I happen to disagree that possible interventions that greatly improve the expectation of the long-term future will soon all be taken.

What do you think about MacAskill's claim that "there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear."?

Longtermist slogans that need to be retired

Do you think FTX funding lead elimination is a mistake, and that they should do patient philanthropy instead?

Critiques of EA that I want to read

Also, how are you defining "longtermist" here? You seem to be using it to mean "focused on x-risk".

Critiques of EA that I want to read

I think that these factors might be making it socially harder to be a non-longtermist who engages with the EA community, and that is an important and missing part of the ongoing discussion about EA community norms changing.

Although note that Will MacAskill supports lead elimination from a broad longtermist perspective:

Well, it’s because there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear. Whereas something in this broad longtermist area — like reducing people’s exposure to lead, improving brain and other health development — especially if it’s like, “We’re actually making real concrete progress on this, on really quite a small budget as well,” that just looks really good. We can just fund this and it’s no downside as well. And I think that’s something that people might not appreciate: just how much that sort of work is valued, even by the most hardcore longtermists.

Michael_Wiebe's Shortform

But again, whether the catastrophe is non-extinction or extinction-level, if the probabilities are high enough, then both neartermists (NTs) and longtermists (LTs) will be maxing out their budgets and will agree on policy. It's only when the probabilities are tiny that you get differences in optimal policy.
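
A minimal toy sketch of this point (my own illustration; the functional form, stakes, and budget below are assumptions, not taken from the shortform): spending $s$ cuts the catastrophe probability from $p$ to $p\cdot(1 - s/(s+c))$, and an agent keeps spending on x-risk only while the marginal benefit of the next dollar beats the marginal value $m$ of their best alternative cause.

def optimal_spend(p, value_at_stake, budget, c=1.0, m=1.0, step=0.001):
    """Greedy approximation: keep spending while the marginal benefit exceeds m."""
    s = 0.0
    while s < budget:
        # Marginal benefit of the next dollar: d/ds [p * value * s/(s+c)] = p*value*c/(s+c)^2
        marginal_benefit = p * value_at_stake * c / (s + c) ** 2
        if marginal_benefit < m:
            break
        s += step
    return round(s, 2)

budget = 1.0
V_near, V_long = 10, 10_000  # hypothetical stakes: one generation vs. the long-run future

for p in [0.5, 0.001]:
    print(f"p = {p}: NT spends {optimal_spend(p, V_near, budget)}, "
          f"LT spends {optimal_spend(p, V_long, budget)}")

With these made-up numbers, at $p=0.5$ both agents exhaust the budget and agree on policy; at $p=0.001$ the neartermist's marginal benefit is below $m$ from the first dollar, so only the longtermist keeps funding x-risk.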

The value of x-risk reduction

Using $\alpha + \beta = 1$ in $\Phi(-K^{\alpha}L^{\beta})$ is assuming constant returns to scale. If you have $\alpha + \beta < 1$, you get diminishing returns.
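
To spell out the returns-to-scale claim: scaling both inputs by a factor $t$ gives $(tK)^{\alpha}(tL)^{\beta} = t^{\alpha+\beta}K^{\alpha}L^{\beta}$, so with $\alpha+\beta=1$ doubling both inputs doubles the input index $K^{\alpha}L^{\beta}$, while with $\alpha+\beta<1$ it less than doubles.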

Messing around with some Python code:

from scipy.stats import norm

def risk_reduction(K, L, alpha, beta):
    """Compare x-risk and expected value before and after doubling K."""
    # Risk is modeled as Phi(-K^alpha * L^beta); expected value is 1/risk.
    risk = norm.cdf(-(K**alpha) * (L**beta))
    print('risk:', risk)
    print('expected value:', 1 / risk)

    # Same quantities after doubling capital K.
    risk_2x = norm.cdf(-((2 * K)**alpha) * (L**beta))
    print('risk (2x):', risk_2x)
    print('expected value (2x):', 1 / risk_2x)

    # Proportional gain in expected value from doubling K.
    print('ratio:', (1 / risk_2x) / (1 / risk))

# Constant returns to scale (alpha + beta = 1)
risk_reduction(K=0.5, L=0.5, alpha=0.5, beta=0.5)

# Diminishing returns (alpha + beta < 1)
risk_reduction(K=0.5, L=0.5, alpha=0.2, beta=0.2)

# Larger labor input, diminishing returns
risk_reduction(K=0.5, L=20, alpha=0.2, beta=0.2)

# Larger labor input, constant returns
risk_reduction(K=0.5, L=20, alpha=0.5, beta=0.5)

The value of x-risk reduction

Are you using ?
