All of CarlShulman's Comments + Replies

Artificial Suffering and Pascal's Mugging: What to think?

And once I accept this conclusion, the most absurd-seeming conclusion of them all follows. By increasing the computing power devoted to the training of these utility-improved agents, the utility produced grows exponentially (as more computing power means more digits to store the rewards). On the other hand, the impact of all other attempts to improve the world (e.g. by improving our knowledge of artificial sentience so we can more efficiently promote their welfare) grows only polynomially with the amount of resources devoted to these attempts. The

... (read more)
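The exponential claim in the excerpt above is, at bottom, a fact about positional notation: the largest reward value representable in n digits grows exponentially in n, while the storage itself grows only linearly. A minimal sketch (the function name and digit counts are illustrative, not from the post):

```python
# The largest unsigned integer representable with n binary digits is 2**n - 1,
# so a reward register's maximum value grows exponentially with the
# computing resources (digits) devoted to storing it.
def max_representable_reward(n_bits: int) -> int:
    return 2**n_bits - 1

print(max_representable_reward(8))   # 255
print(max_representable_reward(16))  # 65535
```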
Towards a Weaker Longtermism

FWIW, my own views are more like 'regular longtermism' than 'strong longtermism,' and I would agree with Toby that existential risk should be a global priority, not the global priority. I've focused my career on reducing existential risk, particularly from AI, because it seems to have a substantial chance of happening in my lifetime, with enormous stakes and extreme neglect. I probably wouldn't have gotten into it when I did if I didn't think doing so was much more effective than GiveWell top charities at saving current human lives, and outperforming even... (read more)

There are definitely some people who are fanatical strong longtermists, but a lot of people who are made out to be such treat it as an important consideration but not one held with certainty or overwhelming dominance over all other moral frames and considerations. In my experience one cause of this is that if you write about implications within a particular worldview, people assume you place 100% weight on it, when the correlation is a lot less than 1.


I agree with this, and the example of Astronomical Waste is particularly notable. (As I u... (read more)

Economic policy in poor countries

Alexander Berger discusses this at length in a recent 80,000 Hours podcast interview with Rob Wiblin.

RyanCarey (2mo): One excerpt worth quoting (emphasis added): The most relevant comments in the transcript seem to be in the section "GiveWell’s top charities are (increasingly) hard to beat".

I do think it is a key pillar of EA that there is open public discussion of arguments for and against different positions. I haven't seen much engagement with the case for focusing on economic growth. 

What grants has Carl Shulman's discretionary fund made?

Last update is that they are, although there were coronavirus related delays.

What is an example of recent, tangible progress in AI safety research?

Focusing on empirical results:

Learning to summarize from human feedback was good, for several reasons.

I liked the recent paper empirically demonstrating objective robustness failures hypothesized in earlier theoretical work on inner alignment.


Mark Xu (4mo): nit: link on "reasons" was pasted twice. For others it's [] Also hadn't seen that paper. Thanks!
Help me find the crux between EA/XR and Progress Studies

Side note:  Bostrom does not hold or argue for 100% weight on total utilitarianism such as to take overwhelming losses on other views for tiny gains on total utilitarian stances. In Superintelligence he specifically rejects an example extreme tradeoff of that magnitude (not reserving one galaxy's worth of resources out of millions for humanity/existing beings even if posthumans would derive more wellbeing from a given unit of resources).

I also wouldn't actually accept a 10 million year delay in tech progress (and the death of all existing beings who would otherwise have enjoyed extended lives from advanced tech, etc) for a 0.001% reduction in existential risk.

Help me find the crux between EA/XR and Progress Studies

By that token most particular scientific experiments or contributions to political efforts may be such: e.g. if there is a referendum to pass a pro-innovation regulatory reform and science funding package, a given donation or staffer in support of it is very unlikely to counterfactually tip it into passing, although the expected value and average returns could be high, and the collective effort has a large chance of success.

Help me find the crux between EA/XR and Progress Studies

Your 3 items cover good+top priority, good+not top priority, and bad+top priority, but not #4, bad+not top priority.

I think people concerned with x-risk generally think that progress studies as a program of intervention to expedite growth is going to have less expected impact (good or bad) on the history of the world per unit of effort, and if we condition on people thinking progress studies does more harm than good, then mostly they'll say it's not important enough to focus on arguing against at the current margin (as opposed to directly targeting urgent ... (read more)

My attempt to think about AI timelines

Robin Hanson argues in Age of Em that annualized growth rates will reach over 400,000% as a result of automation of human labor with full substitutes (e.g. through brain emulations)! He's a weird citation for thinking the same technology can't manage 20% growth.

"I really don't have strong arguments here. I guess partly from experience working on an automated trading system (i.e. actually trying to automate something)"

This and the usual economist arguments against fast AGI growth seem to be more about denying the premise of ever succeeding... (read more)

My attempt to think about AI timelines

I find that 57% very difficult to believe. 10% would be a stretch. 

Having intelligent labor that can be quickly produced in factories (by companies that have been able to increase output by millions of times over decades), and do tasks including improving the efficiency of robots (already cheap relative to humans where we have the AI to direct them, and that is before reaping economies of scale by producing billions) and solar panels (which already have energy payback times on the order of 1 year in sunny areas), along with still abundant untapped ... (read more)

Thanks for these comments and for the chat earlier!

  • It sounds like to you, AGI means ~"human minds but better"* (maybe that's the case for everyone who's thought deeply about this topic, I don't know). On the other hand, the definition I used here, "AI that can perform a significant fraction of cognitive tasks as well as any human and for no more money than it would cost for a human to do it", falls well short of that on at least some reasonable interpretations. I definitely didn't mean to use an unusually weak definition of AGI here (I was partly basing it
... (read more)
Why AI is Harder Than We Think - Melanie Mitchell

She does talk about century plus timelines here and there.

How do you compare human and animal suffering?

I suspect there are biases in the EA conversation where hedonistic-compatible arguments get discussed more than reasons that hedonistic utilitarians would be upset by, and intuitions coming from other areas may then lead to demand and supply subsidies for such arguments.

How do you compare human and animal suffering?

"I would guess most arguments for global health and poverty over animal welfare fall under the following:

- animals are not conscious or less conscious than humans
- animals suffer less than humans


I'm pretty skeptical that these arguments descriptively account for most of the people explicitly choosing global poverty interventions over animal welfare interventions, although they certainly account for some people. Polls show wide agreement that birds and mammals are conscious and have welfare to at least some degree. And I think most models on wh... (read more)

MichaelStJules (6mo): I agree with this for the broader philanthropic community, but I had the EA community in mind specifically. I think just speciesism and rationalization of eating animals account for most of the differences in society and charity broadly. I think most of the other reasons you give wouldn't fit the EA community, especially given how utilitarian we are. The people who have thought about the issues will give answers related to consciousness and intensity of experience, and maybe moral status like Kagan as you mention. I suppose many newer EAs will not have thought about the issues much at all, though, and so could still have more speciesist views. I think half of EAs in the last EA survey were vegetarian or vegan, though. I might have underestimated how much EAs prioritizing global health and poverty do so for the better evidence base, and the belief that it is more cost-effective with a pretty skeptical prior.
What grants has Carl Shulman's discretionary fund made?

Hi Milan,

So far it has been used to back the donor lottery (this has no net $ outlay in expectation, but requires funds to fill out each block and handle million-dollar swings up and down), make a grant to ALLFED, fund Rethink Priorities' work on nuclear war, and provide small seed funds for some researchers investigating two implausible but consequential-if-true interventions (including the claim that creatine supplements boost cognitive performance for vegetarians).

Mostly it remains invested. In practice I have mostly been able to recommend major ... (read more)

Are you or the grantee planning to publish the results of the creatine investigation? I think it would be helpful for many in the community, even if it's a null result.

Milan_Griffes (7mo): Thanks for this update – these seem like worthwhile things to invest in! Do you have a sense of how you will structure reporting on future grantmaking from this fund?
The Upper Limit of Value

There is some effect in this direction, but not a sudden cliff; there is plenty of room to generalize. We create models of alternative coherent lawlike realities, e.g. the Game of Life, and physicists are interested in modeling different physical laws.

The Upper Limit of Value

Thanks David, this looks like a handy paper! 

Given all of this, we'd love feedback and discussion, either as comments here, or as emails, etc.

I don't agree with the argument that infinite impacts of our choices are of Pascalian improbability, in fact I think we probably face them as a consequence of one-boxing decision theory, and some of the more plausible routes to local infinite impact are missing from the paper:

  • The decision theory section misses the simplest argument for infinite value: in an infinite inflationary universe with infinite copi
... (read more)
Pablo (9mo): The main reason for taking the simulation hypothesis seriously is the simulation argument, but that argument needs to assume that our physical models are broadly correct about reality itself and not just the "physics" of the simulation. Otherwise, there would be no warrant for drawing inferences from simulated sense data about the behavior of agents in reality, including whether these agents will choose to run ancestor simulations.
Davidmanheim (9mo): Thanks, and thanks for posting this both places. I've responded on the lesswrong post [], and I'm going to try to keep only one thread going, given my finite capacity to track things :)
Can I have impact if I’m average?

Here are two posts from Wei Dai, discussing the case for some things in this vicinity (renormalizing in light of the opportunities):

What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?

Thanks for this detailed post on an underdiscussed topic! I agree with the broad conclusion that extinction via partial population collapse and infrastructure loss, rather than via a catastrophe potent enough to leave no or almost no survivors (or indirectly enabling some later extinction-level event), has very low probability. Some comments:

  • Regarding case 1, with a pandemic leaving 50% of the population dead but no major infrastructure damage, I think you can make much stronger claims about there not being 'civil
... (read more)

Regarding case 1, with a pandemic leaving 50% of the population dead but no major infrastructure damage, I think you can make much stronger claims about there not being 'civilization collapse' meaning near-total failure of industrial food, water, and power systems. Indeed, collapse so defined from that stimulus seems nonsensical to me for rich quantitative reasons.


If there were a pandemic heading toward 50% population fatality, I think that it is likely that workers would not show up to critical industries and there would be a collapse of industrial ... (read more)

Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure

It sounds like you're assuming a common scale between the theories (maximizing expected choice-worthiness).

A common scale isn't necessary for my conclusion (I think you're substituting it for a stronger claim?) and I didn't invoke it. As I wrote in my comment, on negative utilitarianism s-risks that are many orders of magnitude smaller than worse ones, without correspondingly huge differences in probability, get ignored for the latter. On variance normalization, or bargaining solutions, or a variety of methods that don't amount to dictator... (read more)

MichaelStJules (10mo): Ah, in the quote I took, I thought you were comparing s-risks to x-risks where the good is lost when giving non-negligible credence to non-negative views, but you're comparing s-risks to far worse s-risks (x-risk-scale s-risks). I misread; my mistake.
Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure

Just a clarification: s-risks (risks of astronomical suffering) are existential risks. 

This is not true by the definitions given in the original works that defined these terms. Existential risk is defined to only refer to things that are drastic relative to the potential of Earth-originating intelligent life:

where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

Any X-risks are going to be in the same ballpark of importance if they occur, and immensely important to the h... (read more)

antimonyanthony (10mo): This is a fair enough critique. But I think that from the perspective of suffering-focused and many other non-total-symmetric-utilitarian value systems, the definition of x-risk is just as frustrating in its breadth. To such value systems, there is a massive moral difference between the badness of human extinction and a locked-in dystopian future, so they are not necessarily in "the same ballpark of importance." The former is only critical to the upside potential of the future if one has a non-obvious symmetric utilitarian conception of (moral) upside potential, or certain deontological premises that are also non-obvious.
MichaelStJules (10mo): Fair enough on the definitions. I had this talk [] in mind, but Max Daniel made a similar point about the definition in parentheses. I'm not sure people have cases like astronomical numbers of (not extremely severe) headaches in mind, but I suppose without any kind of lexicality, there might not be any good way to distinguish. I would think something more like your hellish example + billions of times more happy people would be more illustrative. Some EAs working on s-risks do hold lexical views.

EDIT: below was based on a misreading.

This to me requires pretty specific assumptions about how to deal with moral uncertainty. It sounds like you're assuming a common scale between the theories (maximizing expected choice-worthiness), but that too could lead to fanaticism if you give any credence to lexicality. While I think there's an intuitive case for it when comparing certain theories (e.g. suffering should be valued roughly the same regardless of the theory), assuming a common scale also seems like the most restrictive approach to moral uncertainty among those discussed in the literature, and I'm not aware of any other approach that would lead to your conclusion. If you gave equal weight to negative utilitarianism and classical utilitarianism, for example, and used any other approach to moral uncertainty, it's plausible to me that s-risks would come out ahead of x-risks (although there's some overlap in causes, so you might work on both). You could even go up a level and use a method for moral uncertainty for your uncertainty over which approach to moral uncertainty to use on normative theories, and as long as you don't put most of your credence in a common-scale approach, I don't think your conclusion would follow.
We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

$1B commitment attributed to Musk early on is different from the later Microsoft investment. The former went away despite the media hoopla.

CEA's 2020 Annual Review

It's invested in unleveraged index funds, but was out of the market for the pandemic crash and bought in at the bottom. Because it's held with Vanguard as a charity account it's not easy to invest as aggressively as I do my personal funds for donation, in light of lower risk-aversion for altruistic investors than those investing for personal consumption, although I am exploring options in that area.

The fund has been used to finance the CEA donor lottery, and to make grants to ALLFED and Rethink Charity (for nuclear war research). However, it should be note... (read more)

BrianTan (10mo): Got it, thanks for the context! I'm curious if you have a target % return for this fund per year with your investing, and what your target % return is for your personal funds for donation? I also wonder if you think EAs you know perform better with their investment returns than the average investor.
If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant

Longtermists sometimes argue that some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more. 

I don't think any major EA or longtermist institution believes this about expected impact for 10^30 differences. There are too many spillovers for that, e.g. if doubling the world economy of $100 trillion/yr would modestly shift x-risk or the fate of wild animals, then interventions that affect economic activity have to have expected absolute value of impact much greater than 10^-30 of the most expected... (read more)

Thoughts on whether we're living at the most influential time in history

It's the time when people are most influential per person or per resource.

Thoughts on whether we're living at the most influential time in history

This seems important to me because, for someone claiming that we should think that we're at the HoH, the update on the basis of earliness is doing much more work than updates on the basis of, say, familiar arguments about when AGI is coming and what will happen when it does.  To me at least, that's a striking fact and wouldn't have been obvious before I started thinking about these things.

It seems to me the object level is where the action is, and the non-simulation Doomsday Arguments mostly raise a phantom consideration that cancels out (in particula... (read more)

Nuclear war is unlikely to cause human extinction

I agree it's very unlikely that a nuclear war discharging current arsenals could directly cause human extinction. But the conditional probability of extinction given all-out nuclear war can go much higher if the problem gets worse. Some aspects of this:

-at the peak of the Cold War arsenals there were over 70,000  nuclear weapons, not 14,000
-this Brookings estimate puts spending on building the US nuclear arsenal at several trillion current dollars, with lower marginal costs per weapon, e.g. $20M per weapon and $50-100M all-in for ICBMs
-economic growt... (read more)

landfish (1y): This may be in the Brookings estimate, which I haven't read yet, but I wonder how much cost disease + reduction in nuclear force has affected the cost per warhead / missile. My understanding is that many military weapon systems get much more expensive over time for reasons I don't well understand. Warheads could be altered to increase the duration of radiation effects from fallout, but this would also reduce their yield, and would represent a pretty large change in strategy. We've gone 70 years without such weapons, with the recent Russian submersible system as a possible exception. It seems unlikely such a shift in strategy will occur in the next 70 years, but like 3% unlikely rather than really unlikely. It's a good point that risks of extinction could get significantly worse if different/more nuclear weapons were built & deployed, and combined with other WMDs. And the existence of 70k+ weapons in the cold war presents a decent outside view argument that we might see that many in the future. I'll edit the post to clarify that I mean present and not future risks from nuclear war.
Thoughts on whether we're living at the most influential time in history

Note that compared to the previous argument, the a priori odds on being the most influential person is now 1e-10, so our earliness essentially increases our belief that we are the most influential by something like a factor of 1e28. But of course a 1-in-100-billion prior is still pretty low, and you don't think our evidence is sufficiently strong to significantly reduce it.

The argument is not about whether Will is the most influential person ever, but about whether our century has the best per-person influence. With a population of 10 billion+ (7.8 billion alive now, plu... (read more)

Habryka (1y): (It appears you dropped a closing parenthesis in this comment)
djbinder (1y): I should also point out that, if I've understood your position correctly Carl, I agree with you. Given my second argument, that a priori we have something like 1 in a trillion odds of being the most influential, I don't think we should end up concluding much about this. Most importantly, this is because whether or not I am the most influential person is not actually the relevant decision-making question. But even aside from this I have a lot more information about the world than just prior odds. For instance, any long-termist has information about their wealth and education which would make them fairly exceptional compared to the average human that has ever lived. They also have reasonable evidence about existential risk this century and plausible (for some loose definition of plausible) ways to influence this. At the end of the day each of us still has low odds of being the most influential person ever, but perhaps with odds more in the 1 in 10 million range, rather than 1 in a trillion.
djbinder (1y): In his first comment Will says he prefers to frame it as "influential people" rather than "influential times". In particular if you read his article (rather than the blog post), then at the end of section 5 he says he thinks it is plausible that the most influential people may live within the next few thousand years, so I don't think his odds that this century is the most influential can be very low (at a guess, one in a thousand?). I might be wrong though; I'd be very curious to know what Will's prior is that the most influential person will be alive this century.
Are we living at the most influential time in history?

Wouldn't your framework also imply a similarly overwhelming prior against saving? If long term saving works with exponential growth then we're again more important than virtually everyone who will ever live, by being in the first n billion people who had any options for such long term saving. The prior for 'most important century to invest' and 'most important century to donate/act directly' shouldn't be radically uncoupled.

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

Same with eg OpenAI which got $1b in nonprofit commitments but still had to become (capped) for-profit in order to grow.

If you look at OpenAI's annual filings, it looks like the $1b did not materialize.

RyanCarey (10mo): Which annual filings? Presumably the investment went to the for-profit component.
Towards zero harm: animal-free and land-free food

Thanks for pointing out that paper. Yes, it does seem like some of these companies are relying on cheap hydropower and carbon pricing.

If photovoltaics keep falling in price they could ease the electricity situation, but their performance would be degraded in nuclear winter (although not in some other situations interfering with conventional agriculture).


Towards zero harm: animal-free and land-free food

Three forerunners are Air Protein (US), Solar Foods (Finland) and the Utilization of Carbon Dioxide Institute (Japan).

Thanks, I was familiar with the general concept here, and specific companies working with methane, but not the electrolysis based companies. I had thought that wouldn't be practical given the higher price of electrolysis hydrogen vs natural gas hydrogen.

 A production cost of $5-$6 per kilogram of 100 percent protein. It aims to have Solein on the market and in millions of meals by 2021, but before then it needs to

... (read more)

We found that the economics of hydrogen single cell protein could be promising in a catastrophe if it had low-cost energy. Basically look at where aluminum refining is done: cheap hydropower or coal (which could have carbon sequestration).

Which is better for animal welfare, terraforming planets or space habitats? And by how much?

I think this has potential to be a crucial consideration with regard to our space colonization strategy

I see this raised often, but it seems like it's clearly the wrong order of magnitude to make any noticeable proportional difference to the broad story of a space civilization, and I've never seen a good counterargument to that point.

Wikipedia has a fine page on orders of magnitude for power.  Solar energy received by Earth from the Sun is 1.740*10^17 W, vs 3.846*10^26W for total solar energy output, a difference of 2 billion times. Mars is further fr... (read more)
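The roughly two-billion-fold gap quoted above can be checked directly; a quick sketch using the wattage figures from the comment:

```python
# Figures quoted in the comment (from Wikipedia's orders-of-magnitude page):
earth_intercepted_w = 1.740e17   # solar power intercepted by Earth, in watts
total_solar_output_w = 3.846e26  # total solar power output, in watts

ratio = total_solar_output_w / earth_intercepted_w
print(f"{ratio:.2e}")  # ~2.21e9, i.e. roughly 2 billion times Earth's share
```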

The scale of direct human impact on invertebrates

Thanks for the interesting post. Could you say more about the epistemic status of agricultural pesticides as the largest item in this category, e.g. what chance that in 3 years you would say another item (maybe missing from this list) is larger? And what ratio do you see between agricultural pesticides and other issues you excluded from the category (like climate change and partially naturogenic outcomes)?

abrahamrowe (1y): (Probabilities are ballpark guesses, not rigorous.) Just in terms of insects impacted, because trying to estimate nematodes or other microscopic animals gets really tricky:

- Today: >99% agricultural pesticides are the largest direct cause of insect mortality
- 3 years: >98% agricultural pesticides are the largest direct cause of insect mortality
- 20 years: >95% likely agricultural pesticides are the largest direct cause of insect mortality

The one possible category I could imagine overtaking agricultural pesticides is insects raised for animal feed. I think it is fairly unlikely farming insects for human food will grow substantially, but much more likely that insects raised for poultry feed will grow in number a lot, and even more likely that insects raised for fish feed will grow a lot. There is a lot of venture capital going into raising insects for animal feed right now [], so it seems at least somewhat likely some of those projects will take off (though there are cost hurdles they haven't cleared yet compared to other animal feeds). Replacing fishmeal with insects seems even more likely because fishmeal is already a lot more expensive than grain feed. Replacing ~40% of fishmeal with black soldier flies would put insect deaths from farming at the lower end of my current estimate for the scale of impact from agricultural pesticides []. So I guess if estimates of agricultural pesticide impact are too high for an unknown reason (maybe insect populations collapse in the near future or something), there is a definite possibility, but not a big one, that insect farming could overtake pesticides in terms of deaths caused. I am very uncertain about this. Brian Tomasik estimates the global terrestrial arthropod population to be 10^17 to 10^19 [
'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions

But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"),

The main dynamic I have in mind there is 'country X being overwhelmingly technologically advantaged/disadvantaged' treated as an outcome on par with global destruction, driving racing, and the necessity for international coordination to set global policy.

I was putting arms race dynamics lower than
... (read more)
'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions

I'd say it's the other way around, because longtermism increases both rewards and costs in prisoner's dilemmas. Consider an AGI race or nuclear war. Longtermism can increase the attraction of control over the future (e.g. wanting to have a long term future following religion X instead of Y, or communist vs capitalist). During the US nuclear monopoly some scientists advocated for preemptive war based on ideas about long-run totalitarianism. So the payoff stakes of C-C are magnified, but likewise for D-C and C-D.

On the other hand, effective ba... (read more)

trammell (1y): Sure, I see how making people more patient has more-or-less symmetric effects on risks from arms race scenarios. But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"), which is in turn separate from the intergenerational public goods issue (which was at the top of my own list). I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worries me a bit more than the prospect of technological arms races. That's not a very firm belief on my part--I could easily be convinced that arms races should rank higher than the mundane, profit-motivated carelessness. But I'd be surprised if the latter were approximately none of the problem.
A New X-Risk Factor: Brain-Computer Interfaces

Thanks for this substantive and useful post. We've looked at this topic every few years in unpublished work at FHI to think about whether to prioritize it. So far it hasn't looked promising enough to pursue very heavily, but I think more careful estimates of the inputs and productivity of research in the field (for forecasting relevant timelines and understanding the scale of the research) would be helpful. I'll also comment on a few differences between the post and my models of BCI issues:

  • It does not seem a safe assumption to me that AGI
... (read more)
'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions

My main issue with the paper is that it treats existential risk policy as the result of a global collective utility-maximizing decision based on people's tradeoffs between consumption and danger. But that is assuming away approximately all of the problem.

If we extend that framework to determine how much society would spend on detonating nuclear bombs in war, the amount would be zero and there would be no nuclear arsenals. The world would have undertaken adequate investments in surveillance, PPE, research, and other capacities in response to data abou... (read more)

trammell (1y): I agree that the world underinvests in x-risk reduction (/overspends on activities that increase x-risk as a side effect) for all kinds of reasons. My impression would be that the two most important reasons for the underinvestment are that existential safety is a public good on two fronts:

  • long-term (but people just care about the short term, and coordination with future generations is impossible), and
  • global (but governments just care about their own countries, and we don't do global coordination well).

So I definitely agree that it's important that there are many actors in the world who aren't coordinating well, and that accounting for this would be an important next step. But my intuition is that the first point is substantially more important than the second, and so the model assumes away much but not close to all of the problem. If the US cared about the rest of the world equally, that would multiply its willingness to pay for an increment of x-risk mitigation by maybe an order of magnitude. But if it had zero pure time preference but still just cared about what happened within its borders (or something), that would seem to multiply the WTP by many orders of magnitude.
Ben_Snodin (+3, 1y): I haven't thought about this angle very much, but it seems like a good angle which I didn't talk about much in the post, so it's great to see this comment. I guess the question is whether you can take the model, including the optimal allocation assumption, as corresponding to the world as it is plus some kind of (imagined) quasi-effective global coordination in a way that seems realistic. It seems like you're pretty skeptical that this is possible (my own inside view is much less certain about this but I haven't thought about it that much).

One thing that comes to mind is that you could incorporate into the model spending on dangerous tech by individual states for self-defence into the hazard rate equation through epsilon - it seems like the risk from this should probably increase with consumption (easier to do it if you're rich), so it doesn't seem that unreasonable. Not sure whether this is getting to the core of the issue you've raised, though.

I suppose you can also think about this through the "beta and epsilon aren't really fixed" lens that I put more emphasis on in the post. It seems like greater / less coordination (generally) implies more / less favourable epsilon and beta, within the model.
Should We Prioritize Long-Term Existential Risk?
People often argue that we urgently need to prioritize reducing existential risk because we live in an unusually dangerous time. If existential risk decreases over time, one might intuitively expect that efforts to reduce x-risk will matter less later on. But in fact, the lower the risk of existential catastrophe, the more valuable it is to further reduce that risk.
Think of it like this: if we face a 50% risk of extinction per century, we will last two centuries on average. If we reduce the risk to 25%, the expected length of the future doubles to four cen
... (read more)
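The arithmetic in the excerpt above can be made concrete with a toy sketch (my own illustration, not from the post): under a constant per-century extinction risk p, the time until catastrophe is geometrically distributed, so the expected number of future centuries is 1/p, and the marginal value of a fixed risk reduction grows as the baseline risk falls.

```python
# Toy model of the "lower risk makes further reduction more valuable" point.
# Assumes a constant per-century extinction probability p (a simplification
# the surrounding discussion itself questions).

def expected_centuries(p: float) -> float:
    """Expected number of future centuries when per-century risk is p (geometric mean 1/p)."""
    return 1.0 / p

def gain_from_reduction(p: float, dp: float = 0.01) -> float:
    """Extra expected centuries from shaving dp off the per-century risk."""
    return expected_centuries(p - dp) - expected_centuries(p)

# Halving risk from 50% to 25% doubles the expected future (2 -> 4 centuries),
# and the same absolute risk reduction is worth more at lower baseline risk.
```

For example, a one-percentage-point reduction is worth about 0.04 expected centuries at p = 0.5 but over a full expected century at p = 0.1, matching the intuition in the quoted passage.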
MichaelDickens (+6, 1y): The passage you quoted was just an example, I don't actually think we should use exponential discounting. The thesis of the essay can still be true when using a declining hazard rate.

If you accept Toby Ord's numbers of a 1/6 x-risk this century and a 1/6 x-risk in all future centuries, then it's almost certainly more cost-effective to reduce x-risk this century. But suppose we use different numbers. For example, say 10% chance this century and 90% chance in all future centuries. Also suppose short-term x-risk reduction efforts only help this century, while longtermist institutional reform helps in all future centuries. Under these conditions, it seems likely that marginal work on longtermist institutional reform is more cost-effective. (I don't actually think these conditions are very likely to be true.)

(Aside: Any assumption of fixed <100% chance of existential catastrophe runs into the problem that now the EV of the future is infinite. As far as I know, we haven't figured out any good way to compare infinite futures. So even though it's intuitively plausible, we don't know if we can actually say that an 89% chance of extinction is preferable to a 90% chance (maybe limit-discounted utilitarianism [] can say so). This is not to say we shouldn't assume a <100% chance, just that if we do so, we run into some serious unsolved problems.)
What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

"The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save."

That was explicitly discussed at the time. I cited the blog post as a historical reference illustrating that such considerations were in mind, not as a comprehensive publication of everything people discussed at the time, when in fact there wasn'... (read more)

trammell (+2, 1y): Thanks! No need to inflict another recording of my voice on the world for now, I think, but glad to hear you like how the project is coming.

The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save. Utilitarian policymakers might implement more redistribution too. Given policymakers as they are, we’re still left with the question of how utilitarian philanthropists with their fixed budgets should prioritize between filling the redistribution gap and f... (read more)

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

Hanson's If Uploads Come First is from 1994, his economic growth given machine intelligence is from 2001, and uploads were much discussed in transhumanist circles in the 1990s and 2000s, with substantial earlier discussion (e.g. by Moravec in his 1988 book Mind Children). Age of Em added more details and has a number of interesting smaller points, but the biggest ideas (Malthusian population growth by copying and economic impacts of brain emulations) are definitely present in 1994. The general idea of uploads as a technology goes back even further.

Age... (read more)

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

My recollection is that back in 2008-12 discussions would often cite the Stern Review, which reduced pure time preference to 0.1% per year, and thus concluded massive climate investments would pay off, the critiques of it noting that it would by the same token call for immense savings rates (97.5% according to Dasgupta 2006), and the defenses by Stern and various philosophers that pure time preference of 0 was philosophically appropriate.

In private discussions and correspondence it was used to make the point that absent cosmically exceptional short-term im... (read more)
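The patient-philanthropy logic referenced here can be sketched numerically (a toy illustration with assumed numbers, not figures from the Stern Review or the correspondence): if a philanthropist has zero pure time preference while the market interest rate partly reflects others' positive pure time preference, invested funds compound faster than the cost of doing good grows, so waiting buys more total good.

```python
# Toy sketch of "zero pure time preference + positive interest rate => save".
# The rates r (interest) and g (growth in the cost of a unit of good) are
# assumed for illustration only.

def future_budget(budget: float, r: float, years: int) -> float:
    """Value of a budget invested at annual rate r for a number of years."""
    return budget * (1 + r) ** years

def future_cost(cost_now: float, g: float, years: int) -> float:
    """Cost of one unit of good after `years` years if costs grow at rate g."""
    return cost_now * (1 + g) ** years

# With r = 5% and costs growing at g = 2%, waiting 50 years multiplies
# the good done per dollar by roughly (1.05/1.02)^50, i.e. about 4x.
r, g, T = 0.05, 0.02, 50
multiplier = future_budget(1.0, r, T) / future_cost(1.0, g, T)
```

The point the comment attributes to the period's discussions is exactly this wedge: when r exceeds g only because other people discount the future, a non-discounting donor should exploit it by saving.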

trammell (+1, 1y): Sorry--maybe I’m being blind, but I’m not seeing what citation you’d be referring to in that blog post. Where should I be looking?
What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?
Trammell also argued that most people use too high a discount rate, so patient philanthropists should compensate by not donating any money; as far as I know, this is a novel argument.

This has been much discussed from before the beginning of EA, Robin Hanson being a particularly devoted proponent.

trammell (+9, 1y): Hanson has advocated for investing for future giving, and I don't doubt he had this intuition in mind. But I'm actually not aware of any source in which he says that the condition under which zero-time-preference philanthropists should invest for future giving is that the interest rate incorporates beneficiaries' pure time preference. I only know that he's said that the relevant condition is when the interest rate is (a) positive or (b) higher than the growth rate. Do you have a particular source in mind?

Also, who made the "pure time preference in the interest rate means patient philanthropists should invest" point pre-Hanson? (Not trying to get credit for being the first to come up with this really basic idea, I just want to know whom to read/cite!)
MichaelDickens (+7, 1y): It seems you're right. I did a little searching and found Hanson making that argument here: []
The case for investing to give later
  • My biggest issue is that I don't think returns to increased donations are flat, with the highest returns coming from entering neglected areas where EA funds already are, or would be after investment, large relative to existing funding, and I see returns declining closer to logarithmically than flat with increased EA resources;
    • This is not correctly modeled in your guesstimate, despite it doing a Monte Carlo draw over different rates of diminishing returns, because it ignores the correlations between diminishing returns and impact of existing sp
... (read more)
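The contrast between flat and logarithmic returns in the comment above can be illustrated with a toy sketch (my own numbers, purely for illustration): if impact grows with the log of total funding in an area, the same grant buys far more marginal impact in a neglected area than in one with a large existing funding pool.

```python
import math

# Toy comparison of marginal impact under logarithmic returns to total funding.
# The funding levels below are assumed for illustration, not estimates.

def marginal_impact_log(existing: float, extra: float) -> float:
    """Impact of `extra` dollars on top of `existing` funding, with log returns."""
    return math.log(existing + extra) - math.log(existing)

# The same $10M grant in a neglected area ($20M existing funding)
# versus a crowded one ($2B existing funding):
neglected = marginal_impact_log(20e6, 10e6)   # ~log(1.5)
crowded = marginal_impact_log(2e9, 10e6)      # ~log(1.005)
```

Under these assumed figures the neglected-area grant has on the order of eighty times the marginal impact, which is why the correlation between diminishing returns and the impact of existing spending matters for the give-now-versus-later calculation.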
The case for investing to give later

GiveWell top charities are relatively extreme in the flatness of their returns curves among areas EA is active in, which is related to their being part of a vast funding pool of global health/foreign aid spending, which EA contributions don't proportionately increase much.

In other areas like animal welfare and AI risk EA is a very large proportional source of funding. So this would seem to require an important bet that areas with relatively flat marginal returns curves are and will be the best place to spend.

The case for investing to give later

I agree risks of expropriation and costs of market impact rise as a fund gets large relative to reference classes like foundation assets (eliciting regulatory reaction) let alone global market capitalization. However, each year a fund gets to reassess conditions and adjust its behavior in light of those changing parameters, i.e. growing fast while this is all things considered attractive, and upping spending/reducing exposure as the threat of expropriation rises. And there is room for funds to grow manyfold over a long time before even becoming as large as... (read more)

Improving the future by influencing actors' benevolence, intelligence, and power

Thanks for the post. One concern I have about the use of 'power' is that it tends to be used for fairly flexible ability to pursue varied goals (good or bad, wisely or foolishly). But many resources are disproportionately helpful for particular goals or levels of competence. E.g. practices of rigorous reproducible science will give more power and prestige to scientists working on real topics, or who achieve real results, but they also constrain what those scientists can do with that power (the norms make it harder for a scientist who wins stature thereby to p... (read more)

David_Kristoffersson (+3, 1y): Excellent points, Carl. (And Stefan's as well.) We would love to see follow-up posts exploring nuances like these, and I put them into the Convergence list of topics worth elaborating.
MichaelA (+2, 1y): Yes, I definitely think this is true. And thanks for the comment!

I'd say that similar is also true for "intelligence". We use that term for "intellectual abilities or empirical beliefs that would help an actor make and execute plans that are aligned with the actor’s moral beliefs or values". Some such abilities and beliefs will be more helpful for actors with "good" moral beliefs or values than for those with less good ones. E.g., knowledge about effective altruism or global priorities research is likely more useful for someone who aims to benefit the world than for someone who aims to get rich or be sadistic. (Though there can of course be cases in which knowledge that's useful for do-gooders is useful for those trying to counter such do-gooders.) I allude to this when I write: I also alluded to something similar for power, but apparently only in a footnote:

One other thing I'd note is that things that are more useful for pursuing good goals than bad ones will, by the uses of terms in this post, increase the power of benevolent actors more than that of less benevolent actors. That's because we define power in relation to what "help[s] an actor execute its plans". So this point was arguably "technically" captured by this framework, but not emphasised or made explicit. (See also Halffull's comment [] and my reply.)

I think this is an important enough point to be worth emphasising, so I've added two new footnotes (footnotes 11 and 13) and made the above footnote part of the main-text instead. This may still not sufficiently emphasise this point, and it may often be useful to instead use frameworks/heuristics which focus more directly on the nature of the intervention/tool/change being considered (rather than the nature of the actors it'd be delivered to). But hopefully this edit will help at least a bit. Did you mean thi
Stefan_Schubert (+9, 1y): Right, so instead of (or maybe in addition to) giving flexible power to supposedly benevolent and intelligent actors (implication 3 above), you create structures, norms, and practices which enable anyone specifically to do good effectively (~give anyone power to do what's benevolent and intelligent).