# Michael_Wiebe's Shortform


So far, the effective altruist strategy for global poverty has followed a high-certainty, low-reward approach. GiveWell only looks at charities with a strong evidence base, such as bednets and cash transfers. But there's also a low-certainty, high-reward approach: promote catch-up economic growth. Poverty is strongly correlated with economic development (urbanization, industrialization, etc), so encouraging development would have large effects on poverty. Whereas cash transfers offer a large probability of a small effect, economic growth offers a small probability of a large effect. (In general, we should diversify across high- and low-risk strategies.) In short, can we do “hits-based development”?

How can we affect growth? Tractability is the main problem for hits-based development, since GDP growth rates are notoriously difficult to change. However, there are a few promising options. One specific mechanism is to train developing-country economists, who can then work in developing-country governments and influence policy. Lant Pritchett gives the example of a think tank in India that influenced its liberalizing reforms, which preceded a large growth episode. This translates into a concrete goal: get X economists working in government in every developing country (where X might be proxied by the number in developed countries). Note that local experts are more likely than foreign World Bank advisors to positively affect growth, since they have local knowledge of culture, politics, law, etc.

I will focus on two instruments for achieving this goal: funding scholarships for developing-country scholars to get PhDs in economics, and funding think tanks and universities in developing countries. First, there are several funding sources within economics for developing-country students, such as Econometric Society scholarships, CEGA programs, and fee waivers at conferences. I will map out this funding space, contacting departments and conference organizers, and determine if more money could be used profitably. For example, are conference fees a bottleneck for developing-country researchers? Would earmarked scholarships make economics PhD programs accept more developing-country students? (We have to be careful in designing the funding mechanism, so that recipients don’t simply reduce funding elsewhere.) Next, I will organize fundraisers, so that donors have a ‘one-click’ opportunity to give money to hits-based development. (This might take the form of small recurring donations, or larger funding drives, or an endowment.) Then I will advertise these donation opportunities to effective altruists and others who want to promote hits-based development. (One potential large funder is the EA Global Health and Development Fund.)

My second approach is based on funding developing-country think tanks. Recently, IDRC led the Think Tank Initiative (TTI), which funded more than 40 think tanks in 20 countries from 2009 to 2019. This program has not been renewed. My first step here would be to analyze the effectiveness of the TTI, and figure out whether it deserves to be renewed. While causal effects are hard to estimate, it seems reasonable to measure the number of think tanks, their progress under the program, and their effects on policy. To do this I will interview think tank employees, development experts, and the TTI organizers. Next I will determine what funding exists for renewing the program, as well as investigate whether a decentralized funding approach would work.

Interesting.

Related: "Some programs have received strong hints that they will be killed off entirely. The Oxford Policy Fellowship, a technical advisory program that embeds lawyers with governments that require support for two years, will have to withdraw fellows from their postings, according to Kari Selander, who founded the program."

https://www.devex.com/news/inside-the-uk-aid-cut-97771

https://www.policyfellowship.org/

I'm a big fan of ideas like this. One of the things I think EAs can bring to charitable giving that is otherwise missing from the landscape is risk-neutrality, and thus a willingness to bet on high-variance strategies that, taken as a whole in a portfolio, may have the same or hopefully higher expected returns than typical risk-averse charitable spending, which tends to focus on ensuring that no money is wasted, to the exclusion of taking the risks necessary to realize benefits.

How much do non-nuclear countries exert control over nuclear weapons? How would the US-Soviet arms race have been different if, say, African countries were all as rich as the US, and could lobby against reckless accumulation of nuclear weapons?

in order to assess the value (or normative status) of a particular action we can in the first instance just look at the long-run effects of that action (that is, those after 1000 years), and then look at the short-run effects just to decide among those actions whose long-run effects are among the very best.

Is this not laughable? How could anyone think that "looking at the 1000+ year effects of an action" is workable?

If humanity goes extinct this century, that drastically reduces the likelihood that there are humans in our solar system 1000 years from now. So at least in some cases, looking at the effects 1000+ years in the future is pretty straightforward (conditional on the effects over the coming decades).

In order to act for the benefit of the far future (1000+ years away), you don't need to be able to track the far future effects of every possible action. You just need to find at least one course of action whose far future effects are sufficiently predictable to guide you (and good in expectation).

The initial claim is that for any action, we can assess its normative status by looking at its long-run effects. This is a much stronger claim than yours.

I don't think Will or any other serious scholar believes that it is "workable". It reads to me like a theoretical assumption that defines a particular abstract philosophy.

"Looking at every possible action, calculating the expected outcome, and then choosing the best one" is also a laughable proposition in the real world, but the notion of "utilitarianism" still makes intuitive sense and can help us weigh how we make decisions (at least, some people think so). Likewise, the notion of "longtermism" can do the same, even if looking 1000 years into the future is impossible.

is also a laughable proposition in the real world

Sure, but not even close to the same extent.

I also find utilitarian thinking to be more useful/practical than "longtermist thinking". That said, I haven't seen much advocacy for longtermism as a guide to personal action, rather than as a guide to research that much more intensively attempts to map out long-term consequences.

Maybe an apt comparison would be "utilitarianism is to decisions I make in my daily life as longtermism is to the decisions I'd make if I were in an influential position with access to many person-years of planning". But this is me trying to guess what another author was thinking; you could consider writing to them directly, too.

(I assume you've heard/considered points of this type before; I'm writing them out here mostly for my own benefit, as a way of thinking through the question.)

It's often laughable. I would think of it like this: each action can be represented as a polynomial that gives the value it generates as a function of time t:

v(t) = c1*t^n + c2*t^(n-1) + ... + c3*t + c4

I would think of the value function of the decisions in my life to be the sum of the individual value functions. With every decision I'm presented with multiple functions, and I get to pick one and the coefficients will basically be added into my life's total value function.

Consider foresight to be the ability to predict the end behavior of v for large t. If t=1000 means nothing to you, then c1 is far less important to you than if t=1000 means a lot to you.

Some people probably consciously ignore large t, for example educated people and politicians sometimes make the argument (and many of them certainly believe) that t greater than their life expectancy doesn't matter. This is why the climate crisis has been so difficult to prioritize, especially for people in power who might not have ten years left to live.

But also foresight is an ability. A toddler has trouble considering the importance of t=0.003 (the next day), and because of that no coefficients except c4 matter. Resisting the entire tub of ice cream is impossible if you can't imagine a stomach ache.

It is unusual, probably even unnatural, to consider t=1000, but it is of course important. The largest t values we can imagine tell us the most about the coefficients on the high-degree terms of the polynomial. Most of our choices have no effect on those coefficients, but some will, or some might, and those should be noticed, highlighted, etc. Until I learned the benefits of veganism, I had almost no consideration for high t values, and I was electrified by the short-term, medium-term, and especially long-term benefits, such as avoiding a tipping point for the climate crisis. That was seven years ago, and it's faded a little as I'm now just passively supporting plant-based meats (consequences are sometimes easier to change than hearts).
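This polynomial picture can be sketched in code. The coefficients and horizons below are purely illustrative (my own numbers, not from the comment); the point is just that a tiny high-degree coefficient dominates at large t.

```python
# Represent an action as a list of polynomial coefficients,
# highest degree first: v(t) = c1*t^n + ... + cn*t + c(n+1).
def v(coeffs, t):
    """Evaluate the action's value at time t (Horner's method)."""
    total = 0.0
    for c in coeffs:
        total = total * t + c
    return total

# Two illustrative actions:
long_run = [1e-5, 0.0, 1.0]   # tiny t^2 coefficient, small constant payoff
short_run = [0.0, 0.0, 5.0]   # larger immediate payoff, no long-run term

# A decision-maker who only looks at small t prefers short_run;
# at large t the high-degree coefficient dominates.
print(v(long_run, 10), v(short_run, 10))      # short_run wins at t=10
print(v(long_run, 1000), v(short_run, 1000))  # long_run wins at t=1000
```

The "foresight" point above corresponds to how large a t you feed into v before comparing actions.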

What is n? It seems all the work is being done by having n in the exponent.


Crowdedness by itself is uninformative. A cause could be uncrowded because it is improperly overlooked, or because it is intractable. Merely knowing that a cause is uncrowded shouldn't lead you to make any updates.

Longtermism is defined as holding that "what most matters about our actions is their very long term effects". What does this mean, formally? Below I set up a model of a social planner maximizing social welfare over all generations. With this model, we can give a precise definition of longtermism.

# A model of a longtermist social planner

Consider an infinitely-lived representative agent with population size N_t. In each period there is a risk of extinction via an extinction rate δ_t.

The basic idea is that economic growth is a double-edged sword: it increases our wealth, but also increases the risk of extinction. In particular, 'consumption research' develops new technologies A, and these technologies increase both consumption and extinction risk.

Here are the production functions for consumption and consumption technologies:

However, we can also develop safety technologies to reduce extinction risk. Safety research produces new safety technologies B, which are used to produce 'safety goods' S.

Specifically,

The extinction rate is δ_t = δ(A_t, S_t), where the number A of consumption technologies directly increases risk, and the number S of safety goods directly reduces it.

Let .

Now we can set up the social planner problem: choose the number of scientists (vs workers), the number of safety scientists (vs consumption scientists), and the number of safety workers (vs consumption workers) to maximize social welfare. That is, the planner is choosing an allocation σ_t of workers for all generations.

The social welfare function is:

V = Σ_{t=0}^∞ P_t · N_t · u(c_t), where P_t is the probability of surviving to period t given the extinction rates δ_s for s ≤ t.

The planner maximizes utility over all generations (t = 0, 1, 2, …), weighting by population size N_t, and accounting for extinction risk via the survival probability P_t. The optimal allocation σ* is the allocation that maximizes social welfare.

The planner discounts using r = δ + ηg (the Ramsey equation), where we have the discount rate r, the exogenous extinction risk δ, risk-aversion η (i.e., diminishing marginal utility), and the growth rate g. (Note that g could be time-varying.)

Here there is no pure time preference; the planner values all generations equally. Weighting by population size means that this is a total utilitarian planner.
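As a sanity check on what this kind of discounting implies, here is a small numeric sketch. It assumes a Ramsey-style rate r = δ + ηg with zero pure time preference, matching the ingredients named above; the parameter values are illustrative, my own, not from the model.

```python
import math

# Illustrative parameters (my assumptions, not from the model above):
delta = 0.001  # exogenous per-period extinction risk
eta = 1.0      # risk aversion (curvature of utility)
g = 0.02       # consumption growth rate

# Ramsey-style discount rate with zero pure time preference:
r = delta + eta * g

def weight(t):
    """Weight the planner puts on generation t's utility."""
    return math.exp(-r * t)

# Even with no pure time preference, far-future generations get small
# weights: eta*g reflects diminishing marginal utility of richer future
# generations, and delta the chance they never exist.
print(weight(100), weight(1000))
```

With δ = 0, η = 0 (or g = 0), every weight equals 1, recovering equal treatment of all generations.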

### Defining longtermism

With the model set up, now we can define longtermism formally. Recall the informal definition that "what most matters about our actions is their very long term effects". Here are two ways that I think longtermism can be formalized in the model:

(1) The optimal allocation in our generation, σ*_0, should be focused on safety work: the majority (or at least a sizeable fraction) of workers should be in safety research or production, and only a minority in consumption research or production. (Or, σ*_t should be focused on safety work for small values of t, to capture that the next few generations need to work on safety.) This is saying that our time has high hingeyness due to existential risks. It's also saying that safety work is currently uncrowded and tractable.

(2) Small deviations from σ*_0 (the optimal allocation in our generation) will produce large decreases in total social welfare V, driven by generations far in the future (t > 1000, or some other large number). In other words, our actions today have very large effects on the long-term future. We could plot per-generation welfare against t for σ* and some suboptimal alternative σ′, and show that the σ′ path is much smaller than the σ* path in the tail.

While longtermism has an intuitive foundation (being intergenerationally neutral or having zero pure time preference), the commonly-used definition makes strong assumptions about tractability and hingeyness.

This model focuses on extinction risk; another approach would look at trajectory changes.

Also, it might be interesting to incorporate Phil Trammell's work on optimal timing/giving-now vs giving-later. Eg, maybe the optimal solution involves the planner saving resources to be invested in safety work in the future.

You might be interested in Existential Risk and Growth

My model here is based on the same Jones (2016) paper.

What are the comparative statics for how uncertainty affects decisionmaking? How does a decisionmaker's behavior differ under some uncertainty compared to no uncertainty?

Consider a social planner problem where we make transfers to maximize total utility, given idiosyncratic shocks to endowments. There are two agents, A and B. A has an endowment of 5 (with probability 1), and B has an endowment of 10 with probability p and 0 with probability 1 − p. So B either gets nothing or twice as much as A.

We choose a transfer t from A to B to solve:

max_t u(5 − t) + p·u(10 + t) + (1 − p)·u(t)

For a baseline, consider log utility and p = 1/2. Then we get an optimal transfer of t* ≈ 1.83. Intuitively, as p → 1, t* → 0 (if B gets 10 for sure, don't make any transfer from A to B), and as p → 0, t* → 2.5 (if B gets 0 for sure, split A's endowment equally).

So that's a scenario with risk (known probabilities), but not uncertainty (unknown probabilities). What if we're uncertain about the value of p?

Suppose we think p ~ F, for some distribution F over [0, 1]. If we maximize expected utility, the problem becomes:

max_t E_F[ u(5 − t) + p·u(10 + t) + (1 − p)·u(t) ]

Since the objective function is linear in probabilities, we end up with the same problem as before, except with E_F[p] instead of p. If we know the mean of F, we plug it in and solve as before.

So it turns out that this form of uncertainty doesn't change the problem very much.
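A quick numeric check of this claim. The endowments and log utility below are my assumptions (standing in for the inline values above), but the linearity point is general: averaging the objective over a distribution for p with mean 0.5 yields exactly the same optimal transfer as p = 0.5.

```python
import math

def welfare(t, p):
    """Expected total utility after transferring t from A to B.

    Assumed setup: A has 5 for sure; B has 10 with probability p, else 0;
    log utility for both agents.
    """
    u = math.log
    return u(5.0 - t) + p * u(10.0 + t) + (1.0 - p) * u(t)

def best_transfer(objective, steps=20000):
    """Grid search for the transfer in (0, 5) maximizing the objective."""
    grid = [0.001 + 4.998 * i / steps for i in range(steps + 1)]
    return max(grid, key=objective)

# Known probability p = 0.5:
t_known = best_transfer(lambda t: welfare(t, 0.5))

# Uncertainty over p with mean 0.5: the objective is linear in p, so only
# the mean matters and the optimum is unchanged.
ps = [0.3, 0.5, 0.7]
t_uncertain = best_transfer(lambda t: sum(welfare(t, p) for p in ps) / 3)

print(round(t_known, 3), round(t_uncertain, 3))
```

Under these assumptions the optimum works out to t* = 2.5(√3 − 1) ≈ 1.83 in both cases.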

Questions:
- if we don't know the mean of F, is the problem simply intractable? Should we resort to maxmin utility?
- what if we have a hyperprior over the mean of F? Do we just take another level of expectations, and end up with the same solution?
- how does a stochastic dominance decision theory work here?

if we don't know the mean of F, is the problem simply intractable? Should we resort to maxmin utility?

It's possible in a given situation that we're willing to commit to a range of probabilities for p, without committing to any single number, so that we can check the recommendations for each value of p in that range (sensitivity analysis).

I don't think maxmin utility follows, but it's one approach we can take.

what if we have a hyperprior over the mean of F? Do we just take another level of expectations, and end up with the same solution?

Yes, I think so.

how does a stochastic dominance decision theory work here?

I'm not sure specifically, but I'd expect it to be more permissive and often allow multiple options for a given setup. I think the specific approach in that paper is like assuming that we only know the aggregate (not individual) utility function up to monotonic transformations, not even linear transformations, so that any action which is permissible under some degree of risk aversion with respect to aggregate utility is permissible generally. (We could have uncertainty about individual utility/welfare functions too, which makes things more complicated.)

I think we can justify ruling out all options the maximality rule rules out, although it's very permissive. Maybe we can put more structure on our uncertainty than it assumes. For example, we can talk about distributional properties of p without specifying an actual distribution for p, e.g. p is more likely to be between 0.8 and 0.9 than between 0.1 and 0.2, although I won't commit to a probability for either.

We need to drop the term "neglected". Neglectedness is crowdedness relative to importance, and the everyday meaning is "improperly overlooked". So it's more precise to refer to crowdedness ($ spent) and importance separately. Moreover, saying that a cause is uncrowded has a different connotation than saying that a cause is neglected. A cause could be uncrowded because it is overlooked, or because it is intractable; if the latter, it doesn't warrant more attention. But a neglected cause warrants more attention by definition.

Why don't models of intelligence explosion assume diminishing marginal returns? In the model below, what are the arguments for assuming constant returns in the idea production function, rather than diminishing marginal returns (eg, Ȧ = A^φ with φ < 1)? With diminishing returns, an AI can only improve itself at a diminishing rate, so we don't get a singularity.

https://www.nber.org/papers/w23928.pdf
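To see why the returns parameter matters, here is a toy discrete-time simulation of an idea production function of the form Ȧ = A^φ (my own sketch, not code from the paper; the step size and φ values are illustrative):

```python
def growth_rates(phi, steps=50, a0=1.0, dt=0.01):
    """Euler-simulate A' = A**phi; return the proportional growth rate
    A'/A = A**(phi - 1) at each step."""
    a = a0
    rates = []
    for _ in range(steps):
        da = (a ** phi) * dt
        rates.append(da / a)
        a += da
    return rates

explosive = growth_rates(phi=1.5)    # increasing returns: rate accelerates
constant = growth_rates(phi=1.0)     # constant returns: steady exponential
diminishing = growth_rates(phi=0.5)  # diminishing returns: rate slows

print(explosive[0] < explosive[-1])      # True: self-improvement compounds
print(diminishing[0] > diminishing[-1])  # True: self-improvement peters out
```

In the continuous-time version, φ > 1 gives a finite-time explosion, φ = 1 gives constant exponential growth, and φ < 1 gives growth rates that fall as A rises, which is the diminishing-returns case in the question.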