This is a special post for quick takes by Michael_Wiebe. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

New replication: I find that the results in Moretti (AER 2021) are caused by coding errors.
The paper studies agglomeration effects for innovation, but the results supporting a causal interpretation don't hold up.

https://twitter.com/michael_wiebe/status/1749462957132759489

7
Karthik Tadepalli
3mo
You might want to add what the subject of Moretti (2021) is, and what the result is, just so people know if they're interested in learning more.
2
Michael_Wiebe
3mo
Thanks, will edit.

Should you "trust literatures, not papers"?
I replicated the literature on meritocratic promotion in China, and found that the evidence is not robust.

https://twitter.com/michael_wiebe/status/1750572525439062384

Tweet-thread promoting Rotblat Day on Aug. 31, to commemorate the spirit of questioning whether a dangerous project should be continued.

Do vaccinated children have higher income as adults?
I replicate a paper on the 1963 measles vaccine, and find that it is unable to answer the question.

https://twitter.com/michael_wiebe/status/1750197740603367689

What AI safety work should altruists do? For example, AI companies are self-interestedly working on RLHF, so there's no need for altruists to work on it. (And even stronger, working on RLHF is actively harmful because it advances capabilities.)

I've written up my replication of Cook (2014) on racial violence and patenting by Black inventors.

Bottom line: I believe the conclusions, but I don't trust the results.

https://twitter.com/michael_wiebe/status/1749831822262018476

Has anyone looked at the effect of air pollution on animal welfare (farmed or wild)?

What does longtermism add beyond the importance-tractability-crowdedness framework? According to the ITC framework, we allocate resources to interventions with the highest expected value, given current funding levels. (More precisely, allocate the next dollar to the intervention with the highest marginal utility per dollar.) If those interventions turn out to be aimed at helping future generations, so what?
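The ITC allocation rule in this paragraph can be sketched as a greedy loop (a toy illustration only; the marginal-value function and all numbers are invented):

```python
# Toy illustration of the ITC allocation rule: repeatedly give the next
# grant to whichever intervention currently has the highest marginal
# utility per dollar. All functional forms and numbers are made up.

def marginal_value(base: float, funded: float, decay: float = 1.0) -> float:
    """Marginal utility per dollar, with diminishing returns in funding."""
    return base / (1.0 + decay * funded)

def allocate(budget_millions: int, interventions: dict) -> dict:
    funded = {name: 0.0 for name in interventions}
    for _ in range(budget_millions):
        # Greedy: the next $1M grant goes to the current-best margin.
        best = max(funded, key=lambda n: marginal_value(interventions[n], funded[n]))
        funded[best] += 1.0
    return funded

# Base marginal values (utility per $ at zero funding), purely illustrative.
interventions = {"near-term": 3.0, "far-future": 10.0}
print(allocate(20, interventions))
```

If the far-future intervention starts out more cost-effective, it absorbs funding first, but diminishing returns eventually pull grants back toward the near-term one — which is the "so what" point above: the framework itself is indifferent to which generation the winning intervention helps.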

So far, the effective altruist strategy for global poverty has followed a high-certainty, low-reward approach. GiveWell only looks at charities with a strong evidence base, such as bednets and cash transfers. But there's also a low-certainty, high-reward approach: promote catch-up economic growth. Poverty is strongly correlated with economic development (urbanization, industrialization, etc), so encouraging development would have large effects on poverty. Whereas cash transfers have a large probability of a small effect, economic growth is a small probability of a large effect. (In general, we should diversify across high- and low-risk strategies.) In short, can we do “hits-based development”?

How can we affect growth? Tractability is the main problem for hits-based development, since GDP growth rates are notoriously difficult to change. However, there are a few promising options. One specific mechanism is to train developing-country economists, who can then work in developing-country governments and influence policy. Lant Pritchett gives the example of a think tank in India that influenced its liberalizing reforms, which preceded a large growth episode. This translates into a concr... (read more)

6
Hauke Hillebrandt
3y
Interesting.  Related: "Some programs have received strong hints that they will be killed off entirely. The Oxford Policy Fellowship, a technical advisory program that embeds lawyers with governments that require support for two years, will have to withdraw fellows from their postings, according to Kari Selander, who founded the program." https://www.devex.com/news/inside-the-uk-aid-cut-97771 https://www.policyfellowship.org/
2
Gordon Seidoh Worley
3y
I'm a big fan of ideas like this. One of the things I think EAs can bring to charitable giving that is otherwise missing from the landscape is risk-neutrality, and thus a willingness to bet on high-variance strategies that, taken as a whole in a portfolio, may have the same or hopefully higher expected returns compared to typical risk-averse charitable spending, which tends to focus on ensuring no money is wasted, to the exclusion of taking the risks necessary to realize benefits.

What is the definition of longtermism, if it now includes traditional global health interventions like reducing lead exposure?

Will MacAskill says (bold added):

Well, it’s because there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear. Whereas something in this broad longtermist area — like reducing people’s exposure to lead, improving brain and other health development

... (read more)

How much do non-nuclear countries exert control over nuclear weapons? How would the US-Soviet arms race have been different if, say, African countries were all as rich as the US, and could lobby against reckless accumulation of nuclear weapons?

Longtermism is the view that positively influencing the longterm future is a key moral priority of our time.

Longtermism is a conclusion we arrive at by applying the EA framework of importance-tractability-crowdedness (where 'importance' is defined to include valuing future lives). Hence, EA is primary, and longtermism is secondary. EA tells us how to manage tradeoffs between benefitting the far future and doing good in the near term, and how to change our behavior as longtermist interventions hit diminishing returns.

Strong waterism: dying of thirst is very bad, because it prevents all of the positive contributions you could make in your life. Therefore, the most important feature of our actions today is their impact on the stockpile of potable water.

2
Linch
2y
Alluding to the diamond-water paradox I assume?
3
Michael_Wiebe
2y
Diminishing marginal returns in both cases.

Will says:

in order to assess the value (or normative status) of a particular action we can in the first instance just look at the long-run effects of that action (that is, those after 1000 years), and then look at the short-run effects just to decide among those actions whose long-run effects are among the very best.

Is this not laughable? How could anyone think that "looking at the 1000+ year effects of an action" is workable?

If humanity goes extinct this century, that drastically reduces the likelihood that there are humans in our solar system 1000 years from now. So at least in some cases, looking at the effects 1000+ years in the future is pretty straightforward (conditional on the effects over the coming decades).

In order to act for the benefit of the far future (1000+ years away), you don't need to be able to track the far future effects of every possible action. You just need to find at least one course of action whose far future effects are sufficiently predictable to guide you (and good in expectation).

2
Michael_Wiebe
3y
The initial claim is that for any action, we can assess its normative status by looking at its long-run effects. This is a much stronger claim than yours.
3
Aaron Gertler
4y
I don't think Will or any other serious scholar believes that it is "workable". It reads to me like a theoretical assumption that defines a particular abstract philosophy.  "Looking at every possible action, calculating the expected outcome, and then choosing the best one" is also a laughable proposition in the real world, but the notion of "utilitarianism" still makes intuitive sense and can help us weigh how we make decisions (at least, some people think so). Likewise, the notion of "longtermism" can do the same, even if looking 1000 years into the future is impossible.
1
Michael_Wiebe
4y
Sure, but not even close to the same extent.
3
Aaron Gertler
4y
I also find utilitarian thinking to be more useful/practical than "longtermist thinking". That said, I haven't seen much advocacy for longtermism as a guide to personal action, rather than as a guide to research that much more intensively attempts to map out long-term consequences. Maybe an apt comparison would be "utilitarianism is to decisions I make in my daily life as longtermism is to the decisions I'd make if I were in an influential position with access to many person-years of planning". But this is me trying to guess what another author was thinking; you could consider writing to them directly, too. (I assume you've heard/considered points of this type before; I'm writing them out here mostly for my own benefit, as a way of thinking through the question.)
3
[anonymous]
4y
It's often laughable. I would think of it like this. Each action can be represented as a polynomial in time t that gives its value at that time:

v(t) = c1*t^n + c2*t^(n-1) + ... + c3*t + c4

I would think of the value function of the decisions in my life as the sum of the individual value functions. With every decision I'm presented with multiple functions; I get to pick one, and its coefficients are added into my life's total value function.

Consider foresight to be the ability to predict the end behavior of v for large t. If t=1000 means nothing to you, then c1 is far less important to you than if t=1000 means a lot to you. Some people probably consciously ignore large t: educated people and politicians sometimes make the argument (and many of them certainly believe) that t greater than their life expectancy doesn't matter. This is why the climate crisis has been so difficult to prioritize, especially for people in power who might not have ten years left to live.

But foresight is also an ability. A toddler has trouble considering the importance of t=0.003 (the next day), and because of that no coefficients except c4 matter. Resisting the entire tub of ice cream is impossible if you can't imagine a stomach ache. It is unusual, probably even unnatural, to consider t=1000, but it is of course important. The largest t values we can imagine tell us the most about the coefficients on the high-degree terms of the polynomial. It is unusual for our choices to have effects on these coefficients, but some will, or some might, and those should be noticed, highlighted, etc.

Until I learned the benefits of veganism, I had almost no consideration for high t values, and I was electrified by the short-term, medium-term, and especially long-term benefits such as avoiding a tipping point for the climate crisis. That was seven years ago and it's faded a little as I'm just passively supporting plant-based meats (consequences are sometimes
3
Michael_Wiebe
4y
What is n? It seems all the work is being done by having n in the exponent.
0
[comment deleted]
4y

Does longtermism vs neartermism boil down to cases of tiny probabilities of x-risk? 

When P(x-risk) is high, then both longtermists and neartermists max out their budgets on it. We have convergence.

When P(x-risk) is low, then the expected value is low for neartermists (since they only care about the next ~few generations) and high for longtermists (since they care about all future generations). Here, longtermists will focus on x-risks, while neartermists won't.
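The divergence described here can be sketched with a one-line expected-value calculation (the probability, population, and horizon figures below are placeholders, not estimates):

```python
# Sketch of the quick take's claim: with a small P(x-risk), the expected
# value of preventing extinction is modest on a short horizon but huge on
# a long one. All numbers are invented placeholders.

def ev_of_prevention(p_risk: float, lives_per_generation: float,
                     generations_valued: float) -> float:
    """Expected lives saved by fully preventing an extinction event."""
    return p_risk * lives_per_generation * generations_valued

p = 1e-6        # tiny probability of extinction this century (placeholder)
lives = 8e9     # people per generation (rough)

neartermist = ev_of_prevention(p, lives, generations_valued=3)    # next ~few generations
longtermist = ev_of_prevention(p, lives, generations_valued=1e6)  # all future generations

# The same tiny p yields wildly different stakes under the two horizons.
print(f"{neartermist:.3g} vs {longtermist:.3g}")
```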

2
Linch
2y
I think for moderate to high levels of x-risk, another potential divergence is that while both longtermist and non-longtermist axiologies will lead you to believe that large-scale risk prevention and mitigation is important, the specific actions people take may differ. For example:

- Non-longtermist axiologies will, all else equal, be much more likely to prioritize non-existential GCRs over existential ones.
- Mitigation (especially worst-case mitigation) of existential risks is comparatively more important for longtermists than for non-longtermists.

Some of these divergences were covered at least as early as Parfit (1982). (Note: I did not reread this before making this comment.) I agree that these divergences aren't very strong for traditional AGI x-risk scenarios; in those cases I think whether and how much you prioritize AGI x-risk depends almost entirely on empirical beliefs.
2
Michael_Wiebe
2y
Agreed, that's another angle. NTs will only have a small difference between non-extinction-level catastrophes and extinction-level catastrophes (eg. a nuclear war where 1000 people survive vs one that kills everyone), whereas LTs will have a huge difference between NECs and ECs.
2
Michael_Wiebe
2y
But again, whether non-extinction catastrophe or extinction catastrophe, if the probabilities are high enough, then both NTs and LTs will be maxing out their budgets, and will agree on policy. It's only when the probabilities are tiny that you get differences in optimal policy.
2
Charles He
2y
I think you are very interested in cause area selection, in the sense of how these different cause areas can be "rationally allocated" in some sort of normative, analytical model that can be shared and modified. For example, you might want such a model because you can then modify underlying parameters to create new allocations. If the model is correct and powerful, this process would illuminate what these parameters and assumptions are, laying bare underlying insights about the world, and allowing different people to express different values and principles.

The above analytical model is in contrast to a much more atheoretical "model", where resources are allocated by the judgement of a few people who try to choose between causes in a modest and principled way.

I'm not sure your goal is possible. In short, it seems the best that can be done is for resources to be divided up in a way that bends according to principled but less legible decisions made by senior leaders. This seems fine, or at least the best we can do.

Below are some thoughts about this. The first two points sort of "span or touch on" considerations, while I think Cotra's points are the best place to start from.

- The bottom half of the following comment tries to elaborate on what is going on; as one of several points, this might be news to you.
- This post by Applied Divinity Studies (which I suspect is being arch and slightly subversive) asks what EAs on the forum (much less the public) are supposed to do to inform funding decisions, if at all.
  - (This probably isn't the point ADS wanted to make or would agree with) but my takeaway is that judgement on any cause is hard and valuable, and EA Forum discussion is underpowered and largely ineffective.
  - It raises the question: what is the role and purpose of this place? Which I've tried to interrogate and understand (my opinion has fallen with new updates and reached a sort of solipsistic nadir).
2
Michael_Wiebe
2y
Yes, I think of EA as optimally allocating a budget to maximize social welfare, analogous to the constrained utility maximization problem in intermediate microeconomics.  The worldview diversification problem is in putting everything in common units (eg. comparing human and animal lives, or comparing current and future lives). Uncertainty over these 'exchange rates' translates into uncertainty in our optimal budget allocation.
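Under log utility, the constrained-maximization analogy in this comment has a clean closed form: optimal budget shares are proportional to the weights, so uncertainty in the 'exchange rates' passes straight through to the allocation. A sketch with invented weights:

```python
# Sketch: with U = sum_i w_i * log(x_i) and a fixed budget, the optimal
# spend satisfies x_i* proportional to w_i. So uncertainty over 'exchange
# rate' weights w_i maps directly into uncertainty over the allocation.
# The weights below are invented for illustration.

def optimal_shares(weights: dict) -> dict:
    """Optimal budget shares under log utility: share_i = w_i / sum(w)."""
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

# Two scenarios for how many human-equivalent units an animal life counts as.
low  = optimal_shares({"humans": 1.0, "animals": 0.01})
high = optimal_shares({"humans": 1.0, "animals": 0.5})
print(low["animals"], high["animals"])  # the animal share swings with the weight
```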
4
Charles He
2y
Bruh. I just wrote out at least one good reference showing that EAs can’t really stick things in common units. It’s entirely possible I’m wrong, but from my personal perspective, as a general principle it seems like a good idea to identify where I’m wrong, or even just describe how your instincts tell you to do something different, which can be valid. I mean, for one thing, you get “fanaticism” AKA “corner solutions” for most reductive attempts to constrain-max this thingy.
4
Michael_Wiebe
2y
I agree that it's a difficult problem, but I'm not sure that it's impossible.
2
Charles He
2y
I don't know much about anything really, but IMO it seems really great that you are interested.  There are many people with the same thoughts or interests as you. It will be interesting to see what you come up with. 
2
Michael_Wiebe
2y
Appreciate your support!

Crowdedness by itself is uninformative. A cause could be uncrowded because it is improperly overlooked, or because it is intractable. Merely knowing that a cause is uncrowded shouldn't lead you to make any updates.

FTX Future Fund says they support "ambitious projects to improve humanity's long-term prospects". Does it seem weird that they're unanimously funding neartermist global health interventions like lead elimination?

Will MacAskill:

Lead Exposure Elimination Project. [...] So I saw the talk, I made sure that Clare was applying to  [FTX] Future Fund. And I was like, “OK, we’ve got to fund this.” And because the focus [at FTX] is longtermist giving, I was thinking maybe it’s going to be a bit of a fight internally. Then it came up in the Slack, and everyone w

... (read more)
5
Charles He
2y
LEEP is led by a very talented team of strong "neartermist" EAs. In the real world and in real EA, a lot of interest and granting can depend on the team and execution (especially given the funding situation). Very good work and leaders are always valuable. Casting everything into some longtermist/neartermist thing online seems unhealthy. This particular comment seems poorly written (what does "unanimously" mean?) and seems to pull on some issue, but it just reads that everyone likes MacAskill, everyone likes LEEP, and so decided to make a move.
4
Michael_Wiebe
2y
Here's another framing: if you claim that asteroid detection saves 300K lives per $100, pandemic prevention saves 200M lives per $100, and GiveWell interventions save 0.025 lives per $100, isn't it a bit odd to fund the latter?  Or: longtermists claim that what matters most is the very long term effects of our actions. How is that being implemented here? Longtermists make very strong claims (eg. "positively influencing the longterm future is *the* key moral priority of our time"). It seems healthy to follow up on those claims, and not sweep under the rug any seeming contradictions. I chose that word to reflect Will's statement that everyone at FTX was "totally on board", in contrast to his expectations of an internal fight. Does that make sense?
1
Michael_Wiebe
2y
Why wouldn't FTX just refer this to the Global Health and Development Fund?
4
Charles He
2y
My fanfiction (that is maybe "60% true" and so has somewhat more signal than noise) is:

The EA fund you mentioned is basically GiveWell. GiveWell has a sort of institutional momentum, related to aesthetics about decisions and conditions for funding, that makes bigger granting costly or harder (alternatively, the deeper reason here is that global health and development has a different neglectedness and history of public intervention than any other EA cause area, increasing the bar, but elaborating too much will cause Khorton to hunt me down). In a way that doesn't make GiveWell's choices or institutional role wrong, MacAskill saw that LEEP was great and there was an opportunity here to fund it with his involvement in FTX.

So why FTX? There's a cheap answer I can make here about "grantmaker diversity", but I don't fully believe this is true (or rather, I'm just clueless). For example, maybe there might be some value in GiveWell having a say in deciding whether to scale up EA global health orgs, like they did with Fortify Health. (Not sure about this paragraph, I am sort of wildly LARPing.) More importantly, this doesn't answer your point about the "longtermist" FTX funding a "neartermist" intervention.

So, then, why FTX? This pulls on another thread (or rather one that you pulled on in your other comment). Part of the answer is that the FTX "team" believes there is some conjunction between certain cause areas, such as highly cost-effective health and development, and longtermism. A big part of the answer is that this "conjunction" is heavily influenced by the people involved (read: SBF and MacAskill). The issue with pulling on this thread is that this conjunctiveness isn't perfectly EA canon, it's hard to formalize, and the decisions involved probably put the senior EA figures involved into more focus or authority than anyone, including themselves, wants.

I want to remind anyone reading this comment that this is fanfi
4
Charles He
2y
I wrote the above comment because I feel like no one else will.

I feel that some of your comments are stilted and choose content in a way that has interpretations that are confrontational and overbearing, making them too difficult to answer. I view this as a form of bad rhetoric (sort of created by bad forum norms that have produced other pathologies) that doesn't lend itself to truth or good discussion.

To be specific, when you say [...] and [...], this is terse and omits a lot. A short, direct reading of your comments is that you are implying that "MacAskill has clearly delineated the cost-effectiveness of all EA cause areas/interventions and has ranked certain x-risks as the only principled cost-effective ones" and "MacAskill is violating his own ranking of cost-effective interventions".

Instead of what you are suggesting in this ellipsis, it seems like a reasonable first-pass perspective is given directly by the interview you quoted from. I think omitting this is unreasonable. To be specific, MacAskill is saying in the interview:

So, without agreeing or disagreeing with him, MacAskill is saying there is real value to the EA community here with these interventions, in several ways. (At the risk of being stilted myself, maybe you could call this "flow-through effects", good "PR", or just "healthy for the EA soul".) MacAskill can be right or wrong here, but this isn't mentioned at all in this thread of yours. (Yes, there are some issues with MacAskill's reasoning, but it's less that he's wrong, rather that it's just a big awkward thread to pull on, as mentioned in my comment above.)

I want to emphasize: I personally don't mind the aggressiveness, poking at things. However, the terseness, combined with the lack of context and not addressing the heart of the matter, is what is overbearing. The ellipsis here is malign, especially combined with the headspace needed to address all of the threads being pulled at (resulting in this giant two-part comment)
6
Michael_Wiebe
2y
Yes, it sounds like MacAskill's motivation is about PR and community health ("getting people out of bed in the morning"). I think it's important to note when we're funding things because of direct expected value, vs these indirect effects.
2
Charles He
2y
I think what you wrote is a fair take.

Just to be clear, I'm pretty sure the idea "the non-longtermist interventions are just community health and PR" is impractical and will be wobbly (a long-term weakness) because:

- The people leading these projects (and their large communities), who are substantial EA talent, won't at all accept the idea that they are window dressing or there to make longtermists feel good.
- Many would find that a slur, and that's not healthiest to propagate from a community-cohesion standpoint.
- Even if the "indirect effects" model is mostly correct, it's dubious at best who gets to decide which neartermist project is a "look/feel good project" that EA should fund, and this is problematic.
  - Basically, as a lowly peasant, IMO, I'm OK with MacAskill and Holden deciding this, because there is more information about the faculty of these people and how they think, and they seem pretty reasonable.
  - But having this perspective and decision-making apparatus seems wonky. Like, will neartermist leaders just spend a lot of their time pitching and analyzing flow-through effects?
- $1B a year (to GiveWell) seems large for PR and community health, especially since the spend on EA human capital from those funds is lower than in other cause areas.

To get a sense of the problems, this post here is centered entirely around the anomaly of EA vegan diets, which they correctly point out doesn't pass a literal cost-effectiveness test. They then spend the rest of the post drawing on this to promote their alternate cause area. I think you can see how this would be problematic and self-defeating if EAs actually used this particular theory of change to fund interventions.

So I think drawing the straight line here, that these interventions are just community health and PR, is stilted and probably bad.

MacAskill is making the point that these interventions have value, that longtermists recognize, and that longtermists love this stuff i
4
Michael_Wiebe
2y
My brief response: I think it's bad form to move the discussion to the meta-level (ie. "your comments are too terse") instead of directly discussing the object-level issues.
2
Charles He
2y
Can this really be your complete response to my direct, fulsome answer of your question, which you have asked several times?  For example, can you explain why my lengthy comment isn't a direct object level response? Even much of my second comment is pointing out you omitted that MacAskill expressly answering why he supported funding LEEP, which is another object level response. 
0
Charles He
2y
To be clear, I accuse you of engaging in bad-faith rhetoric in your above comment and your last response, with an evasion that I specifically anticipated ("this allows the presenter to pretend that they never made the implication, and then rake the respondent through their lengthy reply").

Here are some previous comments of yours that are more direct and do not use the same patterns you are now using, where your views and attitudes are clearer.

If you had just kept this to the longtermism/neartermism online thing (and drafted on the sentiment from one of the factions there), that's OK. This seems bad because I suspect you are entering into unrelated, technical discussions, for example in economics, using some of the same rhetorical patterns, which I view as pretty bad, especially as it's sort of flying under the radar.
3
Michael_Wiebe
2y
To be clear, you're using the linguistic sense of 'ellipsis', and not the punctuation mark?
2
Charles He
2y
Yes, that is correct, I am using the linguistic sense, similar to "implication" or "suggestion".

How to make the long-term future go well: get every generation to follow the rule "leave the world better off than it was under the previous generation".

asteroid detection [...] approximately 300,000 additional lives in expectation for each $100 spent. [...]

Preventing future pandemics [...] 200 million extra lives in expectation for each $100 spent. [...]

the best available near-term-focused interventions save approximately 0.025 lives per $100 spent 
(source)

We should have a dashboard that tracks expected value per dollar for each cause area. This could be measured in lives saved, QALYs, marginal utility, etc, and could be measured per $1, $100, $1M, etc. We'd also want an estimate of diminishing retur... (read more)
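One possible shape for a row of such a dashboard, using the per-$100 figures quoted elsewhere in this thread; the field names and funding levels are invented placeholders:

```python
# Hypothetical schema for one dashboard row. The per-$100 figures come from
# the quotes in this thread; funding levels are made-up placeholders.
from dataclasses import dataclass

@dataclass
class CauseRow:
    cause: str
    unit: str               # e.g. "lives", "QALYs"
    ev_per_100usd: float    # expected value of the marginal $100
    funding_musd: float     # current funding level, $M (placeholder)

rows = [
    CauseRow("asteroid detection", "lives", 300_000, 10.0),
    CauseRow("pandemic prevention", "lives", 200_000_000, 250.0),
    CauseRow("GiveWell top charities", "lives", 0.025, 500.0),
]

# Rank causes by marginal cost-effectiveness, best first.
ranked = sorted(rows, key=lambda r: r.ev_per_100usd, reverse=True)
print([r.cause for r in ranked])
```

A real version would need the diminishing-returns estimates the quick take mentions, so that `ev_per_100usd` updates as `funding_musd` grows.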

We need to drop the term "neglected". Neglectedness is crowdedness relative to importance, and the everyday meaning is "improperly overlooked". So it's more precise to refer to crowdedness ($ spent) and importance separately. Moreover, saying that a cause is uncrowded has a different connotation than saying that a cause is neglected. A cause could be uncrowded because it is overlooked, or because it is intractable; if the latter, it doesn't warrant more attention. But a neglected cause warrants more attention by definition.

Mor... (read more)

Longtermism is defined as holding that "what most matters about our actions is their very long term effects". What does this mean, formally? Below I set up a model of a social planner maximizing social welfare over all generations. With this model, we can give a precise definition of longtermism.

A model of a longtermist social planner

Consider an infinitely-lived representative agent with population size . In each period there is a risk of extinction via an extinction rate .

The basic idea is that economic growth is a double-edged sword: it inc... (read more)

3
Michael_Wiebe
4y
This model focuses on extinction risk; another approach would look at trajectory changes. Also, it might be interesting to incorporate Phil Trammell's work on optimal timing/giving-now vs giving-later. Eg, maybe the optimal solution involves the planner saving resources to be invested in safety work in the future.
1
NunoSempere
4y
You might be interested in Existential Risk and Growth
2
Michael_Wiebe
4y
My model here is based on the same Jones (2016) paper.

The argument for longtermism in a nutshell:

First, future people matter. [...] Second, the future could be vast. [...] Third, our actions may predictably influence how well this long-term future goes.

Here, whether longtermism ("positively influencing the long-term future is a key moral priority of our time") is true or false depends on whether our actions can predictably influence the far future. But it's bad to collapse such a rich topic down to a binary true/false. (Imagine having a website IsInfluencingTheLongTermFutureAKeyMoralPriority.com to tell you w... (read more)

What are the comparative statics for how uncertainty affects decisionmaking? How does a decisionmaker's behavior differ under some uncertainty compared to no uncertainty?

Consider a social planner problem where we make transfers to maximize total utility, given idiosyncratic shocks to endowments. There are two agents, , with endowments   (with probability 1) and  So  either gets nothing or twice as much as .

We choose a transfer  to solve:
... (read more)

3
MichaelStJules
4y
It's possible in a given situation that we're willing to commit to a range of probabilities, e.g. p∈[a,b] (without committing to E[p]=(a+b)/2 or any other number), so that we can check the recommendations for each value of p (sensitivity analysis). I don't think maxmin utility follows, but it's one approach we can take. Yes, I think so. I'm not sure specifically, but I'd expect it to be more permissive and often allow multiple options for a given setup. I think the specific approach in that paper is like assuming that we only know the aggregate (not individual) utility function up to monotonic transformations, not even linear transformations, so that any action which is permissible under some degree of risk aversion with respect to aggregate utility is permissible generally. (We could also have uncertainty about individual utility/welfare functions, too, which makes things more complicated.)
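The sensitivity-analysis idea — check the recommendation at every p in the committed range rather than at a single point estimate — can be sketched as follows (the payoffs are invented for illustration):

```python
# Sensitivity analysis over a committed probability range [a, b]: instead
# of picking one p, see which option maximizes expected value at each p.
# The two options' payoffs below are invented.

def winners_over_range(a: float, b: float, steps: int = 101) -> set:
    """Return the set of options that win for some p in [a, b]."""
    results = set()
    for i in range(steps):
        p = a + (b - a) * i / (steps - 1)
        ev_risky = p * 100.0    # pays 100 with probability p
        ev_safe = 20.0          # pays 20 for sure
        results.add("risky" if ev_risky > ev_safe else "safe")
    return results

print(winners_over_range(0.1, 0.3))   # both win somewhere: recommendation is ambiguous
print(winners_over_range(0.5, 0.9))   # risky wins for every p: recommendation is robust
```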
3
MichaelStJules
4y
I think we can justify ruling out all options the maximality rule rules out, although it's very permissive. Maybe we can put more structure on our uncertainty than it assumes. For example, we can talk about distributional properties for p without specifying an actual distribution for p, e.g. p is more likely to be between 0.8 and 0.9 than 0.1 and 0.2, although I won't commit to a probability for either.

'Longtermism' is the view that positively influencing the long-term future is a key moral priority of our time. [from here]

It seems weird to make an 'ism' out of a currently highly cost-effective cause area. On the ITC framework, we expect these interventions to become less cost-effective as funding is directed to them and they hit diminishing returns and become less tractable. That is, if the EA community is functioning properly, the marginal dollar allocated to each cause will have the same effectiveness (otherwise, we could reallocate funding and do mor... (read more)

1
Thomas Kwa
2y
I think the moral assumptions dominate tractability/crowdedness considerations in practice, if you want to maximize the total QALYs of the universe. The current price of a life saved by malaria nets is $6,000. If we stand to have 10^40 lives, reducing x-risk is better on the margin as long as 0.00000000000000000000000000001% chance of doom is prevented by the next billion dollars, and this will basically always be true. (edit: on anything resembling our current earth, it would stop being true after we're colonizing galaxies or something) Under a moral parliament with fixed weights you also don't get changes in allocation based on cost effectiveness of longtermist interventions, unless some portion of your moral parliament values preventing x-risk roughly as much as saving ~8 billion people. But if it's only 8 billion lives, this portion is just not axiologically longtermist. To have a longtermist portion of your moral parliament stop allocating resources to making the long-term future go well as marginal cost-effectiveness declines, it has to think what's at stake is 1-1000 times as important as saving 8 billion lives. Basically, I'm claiming ITC + "long-term future is astronomically important" is not enough to get the EA community to actually change its "longtermist" interventions in practice, nor is ITC + moral parliament. This doesn't mean we should stop allocating resources to preventing x-risk once it costs $1 billion per 0.0000001% or something, but we do need more assumptions.
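The break-even arithmetic in this comment is easy to check (the $6,000 and 10^40 figures are taken from the comment; this is just the rough calculation, not a cost-effectiveness claim):

```python
# Checking the comparison above: at $6,000 per life saved by malaria nets,
# $1B saves ~167,000 lives. With 10^40 potential future lives at stake,
# the break-even probability of doom averted by that $1B is tiny.

budget = 1e9            # dollars
cost_per_life = 6_000   # malaria nets, per the comment
future_lives = 1e40     # per the comment

lives_saved_nets = budget / cost_per_life
break_even_prob = lives_saved_nets / future_lives

print(f"{lives_saved_nets:.3g} lives vs break-even P(doom averted) = {break_even_prob:.3g}")
```

Any x-risk spend clearing that minuscule break-even probability beats the nets on raw expected lives, which is the comment's point that moral assumptions swamp tractability here.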
0
Michael_Wiebe
2y
So you think there won't be diminishing returns to x-risk interventions?
-5
Thomas Kwa
2y

Why don't models of intelligence explosion assume diminishing marginal returns? In the model below, what are the arguments for assuming a constant , rather than diminishing marginal returns (eg, ). With diminishing returns, an AI can only improve itself at a dimishing rate, so we don't get a singularity.


https://www.nber.org/papers/w23928.pdf
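The role of the exponent can be shown in a toy discrete-time simulation (a sketch, not the model in the linked paper; the cap and scales are invented):

```python
# Toy simulation of the point above: if self-improvement has diminishing
# returns (exponent alpha < 1), capability grows without any singularity;
# with alpha > 1 growth is explosive. Discrete-time sketch, invented scales.

def grow(alpha: float, steps: int, cap: float = 1e12) -> float:
    intelligence = 1.0
    for _ in range(steps):
        intelligence += intelligence ** alpha   # improvement rate ~ I^alpha
        if intelligence > cap:
            return cap                          # treat the cap as 'explosion'
    return intelligence

print(grow(alpha=0.5, steps=100))   # diminishing returns: modest growth
print(grow(alpha=1.2, steps=100))   # increasing returns: hits the cap fast
```

In the continuous-time analogue, dI/dt = I^alpha blows up in finite time only when alpha > 1, which is why the constant-returns assumption does so much work in these models.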
