Michael_Wiebe

Comments

Hits-based development: funding developing-country economists

Yeah, I recall wanting to expand on the post, but I still have to track down my old notes. I hope to do that and publish it as a main page post.

Hits-based development: funding developing-country economists

I originally wrote this in October 2020 and wanted a version to share publicly, hence publishing it now.

How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?

@Linch, I'm curious whether you've taken an intermediate microeconomics course. Maximizing utility subject to a budget constraint (i.e., constrained maximization) is the core idea of intermediate micro, and it's literally what EAs are doing. I've been thinking for a while now about writing up the basic idea of constrained maximization and showing how it applies to EA. Do you think that would be worthwhile?
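
To make the setup concrete, here is a minimal sketch of the textbook problem (the notation is mine, not anything from the thread):

$$\max_{x_1,\dots,x_n} U(x_1,\dots,x_n) \quad \text{s.t.} \quad \sum_i p_i x_i \le B$$

where $x_i$ is spending on good $i$ (read: funding for cause $i$), $p_i$ is its price (cost per unit of output), $U$ is the utility function (an EA's impartial welfare function), and $B$ is the budget. At an interior optimum, marginal utility per dollar is equalized across goods: $MU_i/p_i = MU_j/p_j$ for all $i, j$.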

How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?

Given an intervention of value $V$, you should be willing to pay for it if the cost $C$ satisfies $C \le V$ (with indifference at equality).

If your budget constraint is binding, then you allocate your budget across causes so as to maximize utility.
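
As a toy illustration of that allocation logic (everything here is invented for the example: the cause names, the utility curves, and the numbers):

```python
import numpy as np

# Toy constrained maximization: allocate a fixed budget across causes in small
# increments, always funding the cause with the highest marginal utility per
# dollar. Utility curves use sqrt to capture diminishing returns (an assumption).

def marginal_utility(spent, scale, step=1_000):
    """Utility gained from the next `step` dollars, given total utility = scale * sqrt(spent)."""
    return scale * (np.sqrt(spent + step) - np.sqrt(spent))

causes = {"cause_A": 5.0, "cause_B": 3.0, "cause_C": 1.0}  # made-up utility scales
budget, step = 100_000, 1_000
allocation = {name: 0 for name in causes}

while budget >= step:
    # Fund whichever cause currently offers the most utility per marginal dollar.
    best = max(causes, key=lambda c: marginal_utility(allocation[c], causes[c], step))
    allocation[best] += step
    budget -= step

print(allocation)  # more funding flows to causes with higher marginal returns
```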

How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?

Suppose there are $N$ people and a baseline existential risk $r$. There's an intervention that reduces risk by a fraction $x$ (i.e., a relative reduction, not a reduction in percentage points).

Outcome with no intervention: $rN$ people die in expectation.
Outcome with intervention: $(1-x)rN$ people die in expectation.

Difference between outcomes: $xrN$. So we should be willing to pay up to $V$ for the intervention, where $V$ is the dollar value of $xrN$ lives.
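
For concreteness, here is the arithmetic with made-up numbers (these are placeholders, not estimates from the post):

```python
# All numbers are illustrative assumptions, not estimates.
N = 8e9                # population
r = 0.01               # baseline existential risk (assumed)
x = 0.0001             # relative risk reduction from the intervention (0.01%)
value_per_life = 5e6   # dollar value placed on one life (assumed)

expected_lives_saved = x * r * N                             # 0.0001 * 0.01 * 8e9 = 8,000
willingness_to_pay = expected_lives_saved * value_per_life   # = $40 billion
print(expected_lives_saved, willingness_to_pay)
```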

[Extension: account for time periods, with discounting for exogenous risks.]
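
One way that extension might look (my sketch, not from the original comment): discount period-$t$ benefits by the probability $(1-\rho)^t$ of surviving exogenous risks up to that point, so the willingness to pay becomes $\sum_{t=0}^{T} (1-\rho)^t \, x \, r_t N_t \, v$, where $\rho$ is the per-period exogenous risk, $r_t$ and $N_t$ are the period-$t$ baseline risk and population, and $v$ is the dollar value per life.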

I think this approach makes more sense than starting by assigning $X to 0.01% risk reductions, and then looking at the cost of available interventions.

How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?

If asteroid risk is 1/1,000,000, how are you thinking about a 0.01% reduction? Do you mean 0.01pp = 1/10,000, in which case you're reducing asteroid risk to 0? Or reducing it by 0.01% of the given risk, in which case the amount of reduction varies across risk categories?

The definition of basis point seems to indicate the former.
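
To make the two readings concrete with the 1/1,000,000 figure (just my arithmetic):

```python
asteroid_risk = 1 / 1_000_000

# Reading 1: 0.01 percentage points, i.e. an absolute reduction of 1/10,000.
# That exceeds the baseline, so the risk is driven to zero.
new_risk_absolute = max(asteroid_risk - 1 / 10_000, 0)  # 0.0

# Reading 2: a relative reduction, 0.01% of the existing risk.
new_risk_relative = asteroid_risk * (1 - 0.0001)        # 9.999e-07

print(new_risk_absolute, new_risk_relative)
```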

A Red-Team Against the Impact of Small Donations

This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health.

I absolutely agree that those issues are very neglected, but only among the general population. They're not at all neglected within EA. Specifically, the question we should be asking isn't "do people care enough about this", but "how far will my marginal dollar go?"

To answer that latter question, it's not enough to highlight the importance of the issue; you would have to argue that:

  1. There are longtermist organizations that are currently funding-constrained,
  2. Such that more funding would enable them to do more or better work,
  3. And this funding gap can't be filled by existing large EA philanthropists.

 

This is a good illustration of how tractability has been neglected by longtermists. Benjamin is only thinking in terms of importance and crowdedness, and not incorporating tractability.
