The EA community has spent a lot of time thinking about transformative AI. In particular, there is a lot of research on x-risks from transformative AI and on how transformative AI development will unfold. However, advances in AI have many other consequences that seem crucial for guiding strategic decision-making in areas besides AI risk, and I haven't found much material about these implications.
Here is one example of why this matters. In the coming decades, AI advances will likely change the world substantially. The more the world changes, the less likely it is that research done earlier still applies to the new context. How strongly this affects a given piece of research will depend on its type, but I expect the average effect to be fairly large. Therefore, we should discount the value of research in proportion to the expected loss in generalizability over time.
Another way AI could influence the value of research is by automating it entirely. If such AI is fast enough, and able to decide what types of research should be done, then humans would no longer have a role in doing research. From that point onwards, human capital ceases to be useful for research. Furthermore, such AI could redo the research done up to that point, so (to a first approximation) the impact of research done beforehand would cease once AI has these capabilities. As with the previous consideration, this implies that we should discount the value of research (and career capital) over time by the probability of such a development occurring.
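Putting these two considerations together, here is one rough way I'd sketch the combined discount (the notation V_0, g(t), p(t) is just my own, not from any source):

```latex
% A sketch of the combined discount; V_0, g(t), p(t) are ad-hoc notation.
% V_0  : value the research would have if the world stayed as it is today
% g(t) : expected fraction of the research that still generalizes at time t
% p(t) : probability that research has been fully automated (and redone) by time t
V(t) = V_0 \cdot g(t) \cdot \bigl(1 - p(t)\bigr)
```

Under this framing, research aimed at impact far in the future gets discounted both by loss of generalizability and by the chance that automated researchers would have redone it anyway.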
I suspect there are many other ways in which AI might affect our prioritization. For example, it could lower the value of poverty reduction interventions (due to accelerated growth), or increase the value of interventions that let us influence decision-making and societal values. It should also change the relative value of influencing certain key actors, depending on how powerful we expect them to become as AI advances.
I'd really appreciate any thoughts on these considerations or links to relevant material!
So nice to see you back on the forum!
I agree with most of your comment, but I am very surprised by some points:
Does this mean that you consider plausible a productivity improvement of ~100,000x over a 5-year period in the next 20 years? As in, one hour of work would become more productive than 40 years of full-time work 5 years earlier? That seems significantly more transformative than most people would find plausible.
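(My rough arithmetic behind the 40-years comparison, assuming ~2,000 full-time working hours per year:)

```latex
% Rough arithmetic, assuming ~2,000 full-time working hours per year:
40 \text{ years} \times 2{,}000 \ \text{hours/year} = 80{,}000 \ \text{hours}
% so a ~100,000x productivity gain would make one hour of work worth
% somewhat more than 40 years of earlier full-time work.
```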
I'm really surprised to read this. Wouldn't interstellar travel close to the speed of light require a huge amount of energy, and a level of technological transformation that again seems much greater than most people expect? At that point it seems unlikely that concepts like "defense-dominant" or "controlling resources" (I assume the matter of the star systems?) would still be meaningful, or at least meaningful in a way predictable enough to make regulation written before the transformation useful.
If AI goes badly, you could make the exact same argument in the opposite direction. Wouldn't those two effects cancel out, given how uncertain we are about AI's effects on humans?
I don't understand the theory of change for people at AI labs influencing the global factory farming market (including CEOs, but especially the technical staff). After some quick googling, the global factory farming market is worth around $2 trillion. Being able to influence that significantly would imply a valuation of AI labs far larger than what the market currently assigns them.