I didn't suggest otherwise.
It sounds like you're arguing that we should estimate 'good done/additional resources' directly (via Fermi estimates), instead of indirectly using the ITN framework. But shouldn't these give the same answer?
And even when you can multiply the three quantities together, I feel like speaking in terms of importance, neglectedness and tractability might make you feel that there is no total ordering of interventions (“some have higher importance, some have higher tractability, whether you prefer one or the other is a matter of personal taste”)
I don't follow this. If you multiply I*T*N and get 'good done/additional resources', how is that not an ordering?
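To spell out the multiplication, using (roughly) the 80,000 Hours definitions of the three factors:

$$
\underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}}
\times
\underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
= \frac{\text{good done}}{\text{extra resources}}
$$

The intermediate units cancel, leaving a single scalar per intervention, and scalars are totally ordered.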
There seems to be an "intentions don't matter, results do" lesson that's relevant here. Intending to solve AI alignment is secondary, and doesn't mean that you're making progress on the problem.
And we don't want people saying "I'm working on AI" just for the social status, if that's not their comparative advantage and they're not actually being productive.
Hm, then I find necessitarianism quite strange. In practice, how do we identify people who exist regardless of our choices?
The longtermist claim is that because humans could in theory live for hundreds of millions or billions of years, and we have the potential to get the risk of extinction very nearly to 0, the biggest effects of our actions are almost all in how they affect the far future. Therefore, if we can find a way to predictably improve the far future this is likely to be, certainly from a utilitarian perspective, the best thing we can do.
I don't find this framing very useful. The importance-tractability-crowdedness framework gives us a sophisticated method for evaluating…
Because of this heavy tailed distribution of interventions
Is it actually heavy-tailed? It looks like an ordered bar chart, not a histogram, so it's hard to tell what the tails are like.
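If the underlying per-intervention estimates were available, checking would be easy. A sketch, with made-up numbers standing in for those estimates:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Made-up stand-in for the chart's underlying cost-effectiveness estimates.
cost_eff = rng.lognormal(mean=0.0, sigma=1.5, size=100)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# A histogram shows the shape of the distribution directly.
ax1.hist(cost_eff, bins=30)
ax1.set(title='Histogram', xlabel='cost-effectiveness')

# Empirical CCDF on log-log axes: the tail behaviour is much easier to read
# off here than from an ordered bar chart.
x = np.sort(cost_eff)
ccdf = 1.0 - np.arange(len(x)) / len(x)
ax2.loglog(x, ccdf, '.')
ax2.set(title='Empirical CCDF', xlabel='cost-effectiveness', ylabel='P(X > x)')

plt.tight_layout()
plt.show()
```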
What do you think of the Bayesian solution, where you shrink your EV estimate towards a prior (thereby avoiding the fanatical outcomes)?
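For concreteness, a minimal sketch of the normal-normal version of this, with made-up numbers:

```python
# Normal-normal shrinkage: the posterior mean is a precision-weighted average
# of the prior mean and the noisy expected-value estimate.
def shrunk_ev(estimate, estimate_sd, prior_mean, prior_sd):
    w = prior_sd**2 / (prior_sd**2 + estimate_sd**2)  # weight on the estimate
    return w * estimate + (1 - w) * prior_mean

# A 'fanatical' intervention: enormous claimed EV, but enormous uncertainty too.
print(shrunk_ev(estimate=1e9, estimate_sd=1e10, prior_mean=10, prior_sd=100))
# ~10: the wild estimate barely moves us off the prior.
```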
The three groups have completely converged by the end of the 180 day period
I find this surprising. Why don't the treated individuals stay on a permanently higher trajectory? Do they have a social reference point, and since they're ahead of their peers, they stop trying as hard?
Is the difference between actualism and necessitarianism that actualism cares about both (1) people who exist as a result of our choices, and (2) people who exist regardless of our choices; whereas necessitarianism cares only about (2)?
I wonder if we can back out what assumptions the 'peace pact' approach is making about these exchange rates. Anyone making allocations across cause areas is implicitly using an exchange rate.
I get the weak impression that worldview diversification (partially) started as an approximation to expected value, and ended up being more of a peace pact between different cause areas. This peace pact disincentivizes comparisons between giving in different cause areas, which then lets their marginal values drift out of sync.
Do you think there's an optimal 'exchange rate' between causes (eg. present vs future lives, animal vs human lives), and that we should just do our best to approximate it?
Have you seen this?
If we don't kill ourselves in the next few centuries or millennia, almost all humans that will ever exist will live in the future.
The idea is that, after a few millennia, we'll have spread out enough to reduce extinction risks to ~0?
Even without considering that, if we stay at ~140 million births per year, in 800 years 50% of all humans will have been born in our future.
And in ~7 millennia 90% of all humans will have been born in our future.
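Rough arithmetic behind those two numbers (assuming the commonly cited ~117 billion humans born to date, and treating both inputs as order-of-magnitude):

```python
# Back-of-the-envelope check of the 50% / 90% claims.
born_so_far = 117e9       # humans born to date (rough estimate)
births_per_year = 140e6   # assumed constant

years_to_50pct = born_so_far / births_per_year       # future births = past births
years_to_90pct = 9 * born_so_far / births_per_year   # future births = 9x past births
print(round(years_to_50pct), 'years until 50% of all humans are in our future')  # ~840
print(round(years_to_90pct), 'years until 90%')                                  # ~7500
```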
Nice work! Sounds like movement building is very important.
Do you disagree with FTX funding lead elimination instead of marginal x-risk interventions?
I happen to disagree that possible interventions that greatly improve the expectation of the long-term future will soon all be taken.
What do you think about MacAskill's claim that "there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear."?
Do you think FTX funding lead elimination is a mistake, and that they should do patient philanthropy instead?
Also, how are you defining "longtermist" here? You seem to be using it to mean "focused on x-risk".
I think that these factors might be making it socially harder to be a non-longtermist who engages with the EA community, and that is an important and missing part of the ongoing discussion about EA community norms changing.
Although note that Will MacAskill supports lead elimination from a broad longtermist perspective:
Well, it’s because there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear.
But again, whether non-extinction catastrophe or extinction catastrophe, if the probabilities are high enough, then both NTs and LTs will be maxing out their budgets, and will agree on policy. It's only when the probabilities are tiny that you get differences in optimal policy.
Appreciate your support!
Using K·L in the risk-reduction term assumes constant returns to scale. If you have K^α·L^β with α, β < 1, you get diminishing returns.
Messing around with some python code:
```python
from scipy.stats import norm

def risk_reduction(K, L, alpha, beta):
    # Baseline: risk = Phi(-K^alpha * L^beta), expected value = 1 / risk
    risk = norm.cdf(-(K**alpha) * (L**beta))
    print('risk:', risk)
    print('expected value:', 1 / risk)
    # Same quantities with K doubled
    risk_2x = norm.cdf(-((2 * K)**alpha) * (L**beta))
    print('risk (2x):', risk_2x)
    print('expected value (2x):', 1 / risk_2x)
    # Ratio of the two expected values, i.e. how much doubling K multiplies EV
    print('ratio:', (1 / risk_2x) / (1 / risk))
```
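For example, with arbitrary illustrative inputs (K = L = 1, α = β = 0.5):

```python
risk_reduction(K=1, L=1, alpha=0.5, beta=0.5)
# risk:                ~0.159
# expected value:      ~6.3
# risk (2x):           ~0.079
# expected value (2x): ~12.7
# ratio:               ~2.0  (doubling K roughly doubles EV at these inputs)
```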
Are you using ?
Agreed, that's another angle. NTs will only have a small difference between non-extinction-level catastrophes and extinction-level catastrophes (eg. a nuclear war where 1000 people survive vs one that kills everyone), whereas LTs will have a huge difference between NECs and ECs.
I agree that it's a difficult problem, but I'm not sure that it's impossible.
Yes, I think of EA as optimally allocating a budget to maximize social welfare, analogous to the constrained utility maximization problem in intermediate microeconomics.
The worldview diversification problem is in putting everything in common units (eg. comparing human and animal lives, or comparing current and future lives). Uncertainty over these 'exchange rates' translates into uncertainty in our optimal budget allocation.
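A toy version of what I have in mind, with invented utility functions and numbers (the 'exchange rate' converts animal-welfare units into the human-welfare units of the objective):

```python
# Toy constrained allocation: maximize weighted welfare across two cause areas
# subject to a fixed budget, and see how the optimum moves with the exchange rate.
import numpy as np
from scipy.optimize import minimize

budget = 100.0

def welfare(x, exchange_rate):
    human, animal = x
    return 10 * np.log(1 + human) + exchange_rate * 30 * np.log(1 + animal)

def optimal_split(exchange_rate):
    res = minimize(lambda x: -welfare(x, exchange_rate),
                   x0=[budget / 2, budget / 2],
                   bounds=[(0, budget)] * 2,
                   constraints={'type': 'eq', 'fun': lambda x: x.sum() - budget})
    return res.x

for rate in [0.1, 0.5, 1.0]:          # uncertainty over the exchange rate...
    print(rate, optimal_split(rate))  # ...moves the optimal budget split around
```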
Yes, it sounds like MacAskill's motivation is about PR and community health ("getting people out of bed in the morning"). I think it's important to note when we're funding things because of direct expected value, vs these indirect effects.
Does longtermism vs neartermism boil down to cases of tiny probabilities of x-risk?
When P(x-risk) is high, then both longtermists and neartermists max out their budgets on it. We have convergence.
When P(x-risk) is low, then the expected value is low for neartermists (since they only care about the next ~few generations) and high for longtermists (since they care about all future generations). Here, longtermists will focus on x-risks, while neartermists won't.
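A stylized calculation to illustrate the structure; every number here (population sizes, the benchmark, the purchasable risk reductions) is invented:

```python
# Expected lives saved if $1B buys an absolute reduction `delta` in extinction
# probability, for a neartermist counting ~3 generations vs a longtermist
# counting an astronomically large future.
def expected_lives_saved(delta, lives_at_stake):
    return delta * lives_at_stake

near_term_lives = 3 * 8e9   # ~3 generations of ~8 billion people
long_term_lives = 1e16      # placeholder for 'astronomically many' future people
benchmark = 200_000         # lives $1B could save elsewhere at ~$5k per life

for delta in [1e-1, 1e-4, 1e-8]:
    nt = expected_lives_saved(delta, near_term_lives)
    lt = expected_lives_saved(delta, long_term_lives)
    print(f"delta={delta:.0e}: NT={nt:.1e} (funds it: {nt > benchmark}), "
          f"LT={lt:.1e} (funds it: {lt > benchmark})")
```

They only disagree on the last row, where the purchasable risk reduction is tiny.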
Do we know the expected cost for training an AGI? Is that within a single company's budget?
Nearly impossible to answer. This report by OpenPhil gives it a hell of an effort, but could still be wrong by orders of magnitude. Most fundamentally, the amount of compute necessary for AGI might not be related to the amount of compute used by the human brain, because we don’t know how our algorithmic efficiency compares to the brain’s.
As you note, the key is being able to precisely select applicants based on altruism:
This tension also underpins a frequent argument made by policymakers that extrinsic rewards should be kept low so as to draw in agents who care sufficiently about delivering services per se. A simple conceptual framework makes precise that, in line with prevailing policy concerns, this attracts applicants who are less prosocial conditional on a given level of talent. However, since the outside option is increasing in talent, adding career benefits will draw in more talented applicants…
Why does your graph have financial motivation as the y-axis? Isn't financial motivation negatively correlated with altruism, by definition? In other words, financial motivation and altruism are opposite ends of a one-dimensional spectrum.
I would've put talent on the y-axis, to illustrate the tradeoff between talent and altruism.
So perhaps EA orgs can raise salaries and attract more-talented-yet-equally-committed workers. (Though this effect would depend on the level of the salary.)
Let C be the computing power used to train the model. Is the idea that "if you could afford C to train the model, then you can also afford C for running models"?
Because that doesn't seem obvious. What if you used 99% of your budget on training? Then you'd only be able to afford ~0.01·C for running models.
Or is this just an example to show that training costs >> running costs?
Related:
"Losing Prosociality in the Quest for Talent? Sorting, Selection, and Productivity in the Delivery of Public Services"
By Nava Ashraf, Oriana Bandiera, Edward Davenport, and Scott S. Lee
Abstract:
We embed a field experiment in a nationwide recruitment drive for a new health care position in Zambia to test whether career benefits attract talent at the expense of prosocial motivation. In line with common wisdom, offering career opportunities attracts less prosocial applicants. However, the trade-off exists only at low levels of talent; the marginal applicants…
Basically, is the computing power for training a fixed cost or a variable cost? If it's a fixed cost, then there's no further cost to reusing the same computing power to run models after training.
once the first human-level AI system is created, whoever created it could use the same computing power it took to create it in order to run several hundred million copies for about a year each.
How does computing power work here? Is it: (1) a fixed cost, i.e. hardware you own and can keep reusing, or (2) a variable cost, i.e. compute you buy and use up?
In (2), we might use up our whole budget on the training, and then not be able to afford to run any models.
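The flavor of the arithmetic behind the quoted claim, with order-of-magnitude placeholder numbers (not figures from the report):

```python
# If training consumes T FLOP in total and running one copy for a year takes
# R FLOP, then re-spending a training-sized compute budget on inference buys
# T/R copy-years. Both numbers below are placeholders.
train_flop = 1e30               # total compute spent on the training run
run_flop_per_copy_year = 1e22   # compute to run one copy for one year

copy_years = train_flop / run_flop_per_copy_year
print(f"{copy_years:.0e} copy-years")  # 1e+08, i.e. ~a hundred million copies for a year
```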
Great comment. Perhaps it would be helpful to explicitly split the analysis by assumptions about takeoff speed? It seems that conditional on takeoff speed, there's not much disagreement.
This paper makes that point about linear regressions in general.
Re: discount factor, longtermists have zero pure time preference. They still discount for exogenous extinction risk and diminishing marginal utility.
See: https://www.cambridge.org/core/journals/economics-and-philosophy/article/discounting-for-public-policy-a-survey/4CDDF711BF8782F262693F4549B5812E
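In Ramsey-style notation (using the usual symbols: ρ for pure time preference, λ for the extinction hazard rate, η for the elasticity of marginal utility, g for consumption growth):

$$
r \;=\; \underbrace{\rho}_{\text{pure time preference}} \;+\; \underbrace{\lambda}_{\text{extinction hazard}} \;+\; \underbrace{\eta g}_{\text{diminishing marginal utility of richer future people}}
$$

Setting ρ = 0 still leaves r > 0 whenever λ or ηg is positive.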
I’m very unsure how many people and how much funding the effective altruism community should be allocating to nuclear risk reduction or related research, and I think it’s plausible we should be spending either substantially more or substantially less labor and funding on this cause than we currently are (see also Aird & Aldred, 2022a).[6] And I have a similar level of uncertainty about what “intermediate goals”[7] and interventions to prioritize - or actively avoid - within the area of nuclear risk reduction (see Aird & Aldred, 2022b). …
Relevant, by @HaydnBelfield:
One possible response is about long vs short AI timelines, but that seems orthogonal to longtermism/neartermism.
Our AI focus area is part of our longtermism-motivated portfolio of grants,[2] and we focus on AI alignment and AI governance grantmaking that seems especially helpful from a longtermist perspective. On the governance side, I sometimes refer to this longtermism-motivated subset of work as "transformative AI governance" for relative concreteness, but a more precise concept for this subset of work is "longtermist AI governance."[3]
What work is "from a longtermist perspective" doing here? (This phrase is used 8 times in the article.) Is it: longtermists have…
even if we’re coming from a position that thinks they’re not the most effective causes
How do you interpret "most effective cause"? Is it "most effective given the current funding landscape"?
The EAecon Retreat will be a ~30-person retreat for facilitating connections between EA economists of all levels. [...] We are open to applications from advanced undergraduate, master's, and early-stage Ph.D. students interested in EAeconomics who may not have yet been exposed to the area more in-depth.
So 'all levels' does not include late-stage or post-PhD economists?
Great to see this!
Related, John von Neumann on x-risk:
…