
The EA community has spent a lot of time thinking about transformative AI. In particular, there is a lot of research on x-risks from transformative AI and on how its development will unfold. However, advances in AI have many other consequences that seem crucial for guiding strategic decisionmaking in areas besides AI risk, and I haven't found much material on these implications.

Here is one example of why this matters. In the coming decades, AI advances will likely cause substantial changes to what the world looks like. The more the world changes, the less likely it is that research done earlier still applies in that context. How strongly this affects a given piece of research will depend on its type, but I expect the average effect to be relatively large. Therefore, we should discount the value of research in proportion to the expected loss in generalizability over time.

Another way in which AI could influence the value of research is by being able to automate it entirely. If such AI is fast enough, and able to decide what types of research should be done, then there is no longer any role for humans to play in doing research. Thus, from that point onwards, human capital ceases to be useful for research. Furthermore, such AI could redo any research done up to that point, so (to a first approximation) the impact of earlier research would cease once AI has these capabilities. As with the previous consideration, this implies that we should discount the value of research (and career capital) over time by the probability of such a development occurring.
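
To make these two discounting considerations concrete, here is a minimal sketch in Python; the 5%/yr generalizability loss and 3%/yr chance of full research automation are placeholder numbers for illustration, not estimates I'm defending:

```python
# A minimal sketch of the discounting idea above, with made-up rates: the value
# of research aimed at some future date is discounted both by how much the world
# is expected to have changed by then, and by the probability that fully
# automated research has arrived (and could redo the work) before that date.

def discounted_research_value(
    years_until_use: int,
    annual_generalizability_loss: float = 0.05,  # placeholder: 5%/yr loss from a changing world
    annual_p_automation: float = 0.03,           # placeholder: 3%/yr chance research is fully automated
) -> float:
    """Fraction of the naive value that survives after `years_until_use` years."""
    survives_change = (1 - annual_generalizability_loss) ** years_until_use
    not_yet_automated = (1 - annual_p_automation) ** years_until_use
    return survives_change * not_yet_automated

for years in (5, 10, 20, 30):
    print(years, round(discounted_research_value(years), 2))
# With these made-up rates, research aimed ~20 years out retains only ~20% of its naive value.
```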

I suspect that there are many other ways in which AI might affect our prioritization. For example, it could lower the value of poverty reduction interventions (due to accelerated growth), or increase the value of interventions that allow us to influence decisionmaking/societal values. It should also change the relative value of influencing certain key actors, based on how powerful we expect them to become as AI advances.

I'd really appreciate any thoughts on these considerations or links to relevant material!

I think this is an excellent question and hasn’t (yet) received the discussion it deserves. Below are a few half-baked thoughts.

The last couple of years have significantly increased my credence that we’ll see explosive growth as a result of AI within the next 20 years. If this happens, it’ll raise a huge number of different challenges; human extinction at the hands of AI is obviously one. But there are others, too, even if we successfully avoid extinction, such as by aligning AI or coordinating to ensure that all powerful AI systems are limited in their capacities in some way (for example, by lacking long-term planning or theory of mind, or being subject to constant monitoring by AI law enforcement).

One framing is:

  • Think of all the technological challenges that we’d face over the coming 500 years, on a business-as-usual 1-5% per year growth rate.
  • Now imagine that that occurs over the course of 5 years rather than 500.
  • And now imagine we only have months to years to respond to each new challenge, rather than decades.
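
For a rough sense of scale, here is the arithmetic behind that compression, using the 1-5% business-as-usual range from the first bullet; the resulting factors are just compounding, not predictions:

```python
# Cumulative growth factor implied by 500 years of business-as-usual growth.
for rate in (0.01, 0.02, 0.03, 0.05):
    factor = (1 + rate) ** 500
    print(f"{rate:.0%}/yr for 500 years -> ~{factor:,.0f}x total growth")
# Prints roughly 145x, 20,000x, 2.6 million x, and 4e10x respectively;
# ~100,000x of cumulative growth corresponds to roughly 2.3%/yr sustained for 500 years.
# The framing asks what it would be like to absorb that much change in ~5 years.
```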

A few things leap out to me:

  1. One’s biorisk estimates should go higher because:
    1. We’d develop even better biotech than we would otherwise (in the same time period).
    2. There is little institutional time to respond.
    3. The technology is even more likely to be highly democratised. (E.g. open-source models able to answer highly sophisticated questions about biotech available to everyone.) 
  2. The point in time at which we expand beyond our solar system might be within our lifetimes. This could be one of the most influential moments in human history: the speed of light sets an upper bound on how fast you can travel, so whoever leaves first at maximum speed arrives first. And, plausibly, solar systems are defense-dominant, so whoever arrives first controls the resources they reach indefinitely.
    1. To my knowledge there’s been almost no work done on what governance regime should be in place to guide this, and an anarchic land grab doesn’t seem ideal.  
    2. This warrants at least some work on thinking through governance, building up a coalition of people (across countries, labs, and at the UN) who take this seriously. I’m hoping to do some work on this topic this year. (This relates to discussion of the long reflection, which has remained hand-wavy to date.)
  3. Relatedly, laws around capital ownership. If almost all economic value is created by AI, then whoever owns the aligned AI (and hardware, data, etc.) would have almost total economic power. Similarly, if all military power is held by AI, then whoever owns the AI would have almost total military power. In principle this could be a single company or a small group of people. We could try to work on legislation in advance to more widely share the increased power from aligned AI.
  4. More generally, in-advance discussion of what sort of world we’re aiming for in a post-AGI future seems helpful.
    1. One counterargument is: “This discussion will all become irrelevant post-superintelligence.” And that might be right. But I’m not confident that’s right. And, even if so, progress on this question can make things go better pre-superintelligence. For example, it’s plausible to me that, with cooperation and trade, a wide range of value systems can get 90%+ of what they value post-superintelligence. If this is right, and became widely known, then it could reduce the desire for different actors to race. 
  5. One’s estimate of existential risks from unknown tech within our lifetimes should go higher, too. This favours some amount of general-purpose responsive infrastructure, like crisis response teams within government, cash reserves that can be drawn upon, and pre-approved emergency policy regimes. It also favours, more generally, governments that can nimbly respond in times of crisis, and philanthropists holding on to savings.
  6. Work on certain key ethical issues, like the moral status of digital beings and concern for digital sentience, seems even more important. These might seem wacky now, but might be very real in the next couple of decades.
  7. Scenario planning to extend this list seems very important.

Then, in terms of impacts on other existing EA cause areas:

  • Global health and development:
    • Near-term explosive economic growth would greatly increase the value of saving lives relative to the value of improving lives or generally making poor people richer. If AI goes well, then it could greatly extend currently-existing lives, and greatly increase their quality of life, too. On a person-affecting view of population ethics (which you should give some weight to, and which might be why you focus on this cause area), that's huge. The benefit to the child of saving a child's life would be much greater than just ~40 QALYs, perhaps many orders of magnitude greater.
    • The medium-term benefits of global development would be smaller. Even if wealth is highly concentrated after superintelligence, there would probably be trickle-down benefits that, in absolute terms, would be transformative for the world's poor, so economic gains made before that point would be largely washed out. The discount rate should therefore be higher. This is relevant for, say, deworming versus bednets, though the previous consideration about saving versus benefitting lives is much larger.
  • Farm animal welfare: 
    • Some animal welfare work is about changing values and attitudes towards non-human beings. This looks more promising than it previously did, especially among key decision-makers (e.g. politicians, people at AI labs), because those value changes could significantly affect the long-term future, for example via attitudes to digital sentience. (Though there's a significant worry that any actions on this will get washed out, because the volume and quality of information and arguments that people are exposed to will potentially be much greater after the advent of advanced AI, and will have a much bigger impact on their views and values.)
    • Activities with medium-term payoff, like advancing technology for cultivated meat, don't look as promising. The technology will get created soon anyway and will make a big dent in rates of animal farming; the costs of raising animals more humanely will also decrease post-superintelligence.

I’m confident that the above list of considerations is extremely non-exhaustive! 

Will -- many of these AGI side-effects seem plausible, and almost all are alarming, with extremely high risks of catastrophe and disruption to almost every aspect of human life and civilization.

My main take-away from such thinking is that human individuals and institutions have very poor capacity to respond to AGI disruptions quickly, decisively, and intelligently enough to avoid harmful side-effects. Even if the AGI is technically 'aligned' enough not to directly cause human extinction, its downstream technological, economic, and cultural side-effects seem so dangerously unpredictable that we are very unlikely to manage them well.

Thus, AGI would be a massive X-risk amplifier in almost every other domain of human life. As I've argued many times, whatever upsides we can reap from AGI will still be there in a century, or a millennium, but whatever downsides are imposed by AGI could start hurting us within a few years. There's a huge temporal asymmetry to consider. (Maybe we can solve alignment in the next few centuries, and we'd feel reasonably safe proceeding with AGI research. But maybe not. There's every reason to take our time when we're facing a Great Filter.)

Therefore it seem... (read more)

It's very interesting to have your views on this.

Another question: would you be worried that the impact of humanity on the world (more precisely, of industrial civilization) could be net-negative if we aligned AI with human values?

One of my fears is that, if we include factory farms in the equation, humanity causes more suffering than wellbeing, simply because animals are more numerous than humans and often have horrible lives. (If we include wild animals, this gets more complicated.)
So if we were to align AI with human values only, this would boost factory farming and keep it running for a long time, making the overall situation much worse.

I'm aware that cultivated meat could help solve the issue, but this seems far from automatic - many people in animal welfare don't seem so optimistic about that. It might not work out, for quite a number of reasons:
https://www.forbes.com/sites/briankateman/2022/09/06/optimistic-longtermism-is-terrible-for-animals/?sh=328a115d2059
https://www.forbes.com/sites/briankateman/2022/12/07/if-we-dont-end-factory-farming-soon-it-might-be-here-forever/?sh=63fa11527e3e 

Denkenberger: Not really - about six hours of the energy produced by the sun.
Corentin Biteau: Well, harnessing ALL of the energy produced by the sun (or even half of it) sounds pretty far away in time.

I'll make a digression: the level of x-risk seems to increase with the amount of energy at our disposal (only a correlation, yes, but a lot of power (= energy) seems necessary to destroy the conditions for life on this planet, and the more power we have, the easier it becomes). As I pointed out, in the book Power, Richard Heinberg makes the case that we are overpowered: we have so much energy that we risk wiping ourselves out by accident. Worse yet, the goal of our current economic and political structures is to get even more power - forever. So I'd expect a society with this amount of power to face many other problems before getting to "harnessing the sun". The Fermi paradox seems to point this way.

But even then, this doesn't really address the point I made above about animal suffering.
Denkenberger: Oh - sorry - I meant to reply to AnonymousAccount instead - it was their text that I was quoting. I've now put it there - should I delete this one?
Corentin Biteau: Yeah, I thought it was something like that ^^ But no, let's keep that here.

So nice to see you back on the forum!

I agree with most of your comment, but I am very surprised by some points:

  • Think of all the technological challenges that we’d face over the coming 500 years, on a business-as-usual 1-5% per year growth rate.
  • Now imagine that that occurs over the course of 5 years rather than 500.

Does this mean that you consider plausible an improvement in productivity of ~100,000x over a 5-year period within the next 20 years? As in, one hour of work would become more productive than 40 years of full-time work 5 years earlier? That seems... (read more)

Denkenberger: Not really - about six hours of the energy produced by the sun. If molecular manufacturing could double every day (many bacteria double much faster), we would get there very fast.

'Relatedly, laws around capital ownership. If almost all economic value is created by AI, then whoever owns the aligned AI (and hardware, data, etc.) would have almost total economic power. Similarly, if all military power is held by AI, then whoever owns the AI would have almost total military power. In principle this could be a single company or a small group of people. We could try to work on legislation in advance to more widely share the increased power from aligned AI.'

I'm a bit worried that even if on paper ownership of AI is somehow spread over a large proportion of the population, people who literally control the AI could just ignore this. 

On point 2, re: defense-dominant vs. offense-dominant future technologies: even if technologies are offense-dominant, the original colonists of a solar system are likely to maintain substantial control over it, because even if they tend to lose battles over the system, antimatter or other highly destructive weapons can render it useless to would-be conquerors.

In general I expect interstellar conflict to look vaguely Cold War-esque in the worse cases, because the weapons are likely to be catastrophically powerful, hard to defend against (e.g. large bodies accelerated to significant fractions of lightspeed), and visible after launch, with time for retaliation (if slower than light).

I think it mostly means that you should be looking for quick wins. When calculating the effectiveness of an intervention, don't assume things like "over the course of an 85-year lifespan this person will be healthier due to better nutrition now" or "this person will have better education and thus more income 20 years from now". Instead just ask: how much good does this intervention accomplish in the next 5 years? (Or, if you want to get fancy, use e.g. a 10%/yr discount rate.)
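
As a small, deliberately simplistic illustration of that suggestion (the constant benefit stream and the 10%/yr rate are placeholders, not estimates):

```python
# What a 10%/yr discount rate does to the comparison between a quick win and a
# long-lived benefit stream. Benefit sizes and the rate are placeholders.

def discounted_total(annual_benefit: float, years: int, rate: float = 0.10) -> float:
    """Present value of a constant annual benefit received for `years` years."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(years))

print(round(discounted_total(1.0, 5), 1))   # ~4.2 benefit-years from a 5-year payoff
print(round(discounted_total(1.0, 85), 1))  # ~11.0 benefit-years, not 85, from an 85-year payoff
# Under this discounting the 85-year stream is worth only ~2.6x the 5-year one,
# rather than 17x, which is the "look for quick wins" point.
```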

See "Neartermists should consider AGI timelines in their spending decisions" on the EA Forum (effectivealtruism.org).

 
