This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.

TLDR: Estimate the probability (X%) that transformative AI will NOT occur by the time you plan to retire. Save X% as much for retirement as you would in normal worlds.
Many people have suggested that if you have short transformative AI (TAI) timelines, you shouldn't be saving for retirement at all. Most people are not even thinking about the impact of AI timelines. And some have even suggested saving more than normal if you have relatively short TAI timelines. I think the truth is somewhere in between. I started building a complicated model, but in the spirit of Draft Amnesty Week, and because simple heuristics are more likely to get used, I thought I would propose a simple heuristic instead.
The more complicated model starts from the observation that in normal worlds, you want some confidence that you won't run out of money in retirement. You might want that same confidence in potential TAI scenarios. TAI might kill everyone, or it might make everyone extremely wealthy, in which case savings would not matter. One person pointed out that wealth at the singularity might allow you to buy galaxies and do great things with them, but at least if you're altruistic, the impact of reducing existential risk is many orders of magnitude greater. There could also be TAI scenarios where we don't get UBI, or where you aren't satisfied with the amount of UBI. So it does make sense to have some savings, especially because those savings are likely to grow rapidly during TAI, and because costs of living are likely to fall (while wages would eventually fall as all jobs are automated).
My simple heuristic: estimate the probability (X%) that TAI will NOT occur by the time you plan to retire, then save X% as much for retirement as you would in normal worlds.
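The heuristic above is just a multiplication, but a minimal sketch may make it concrete. The function name and the example numbers (a 15% normal-world savings rate, a 40% chance TAI has not arrived by retirement) are hypothetical, chosen purely for illustration:

```python
def adjusted_savings_rate(normal_rate: float, p_no_tai_by_retirement: float) -> float:
    """Scale the normal-world savings rate by the probability (X%)
    that TAI does NOT occur before your planned retirement."""
    return normal_rate * p_no_tai_by_retirement

# Hypothetical example: you would normally save 15% of income,
# and you estimate a 40% chance TAI has not arrived by retirement.
rate = adjusted_savings_rate(0.15, 0.40)
print(f"Save {rate:.0%} of income")  # i.e., 6% rather than 15%
```

Someone with very long timelines (X close to 100%) recovers ordinary retirement advice, while someone with very short timelines saves correspondingly little.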
My initial calculations indicate that, to have the same confidence in not running out of money, you might want to save somewhat more than this heuristic suggests, but most EAs are relatively young and can reevaluate as they learn more. The major exception is if you think your job will be automated soon: then savings would be more valuable, though I suspect that even more valuable than savings would be working on staying flexible and being able to pivot quickly. Another exception is strong matching programs, which could make the optimal amount of saving higher.
What should you do with the money you are not putting into retirement? Some have suggested doing things on your bucket list, because you might not have the opportunity later. However, if you think the outcome of TAI is likely to be positive, it will generally become much easier and cheaper to do things on your bucket list after TAI. So, as EAs, I would recommend donating much of that extra money to the causes you care about while you can still make a difference, whether that is reducing existential risk, improving animal welfare, reducing global poverty, or something else. If you have short-to-medium timelines, you might also want to adjust your giving within cause areas.