This is the current Metaculus forecast for AGI:
Do decisions by EAs working in other cause areas mostly ignore this?
For example, are timelines for transformative AGI taken into account in work such as:
- Deciding whether to donate money later vs now (see the toy sketch after these questions)
- Estimating existential risk from climate change, nuclear war, and engineered pandemics beyond 2043 / 2068 / 2199
- Deciding whether to pursue "move fast, break things" approaches vs slower, lower-downside-risk, reform-oriented approaches
And, whether or not they currently are, should timelines for transformative AI be taken into account in this work?
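To make the give-now-vs-later item concrete, here is a minimal toy sketch (in Python) of how a probability that transformative AI arrives before a later donation date could enter the comparison. Every function name, parameter, and number below is an illustrative assumption of mine, not a figure from Metaculus or from anyone's actual model.

```python
# Toy sketch only: how a transformative-AI timeline might enter a
# "give now vs give later" comparison. All numbers are placeholders.

def ev_give_now(amount: float, value_per_dollar_now: float = 1.0) -> float:
    """Value from donating the full amount today."""
    return amount * value_per_dollar_now


def ev_give_later(
    amount: float,
    years: float,
    annual_return: float,                   # assumed real investment return
    p_tai_before_then: float,               # assumed P(transformative AI arrives first)
    value_per_dollar_later: float = 1.0,    # assumed value if TAI has NOT arrived by then
    value_per_dollar_post_tai: float = 0.1  # assumed (much lower) value if it HAS
) -> float:
    """Expected value from investing now and donating after `years`."""
    grown = amount * (1 + annual_return) ** years
    return grown * (
        (1 - p_tai_before_then) * value_per_dollar_later
        + p_tai_before_then * value_per_dollar_post_tai
    )


if __name__ == "__main__":
    for p in (0.1, 0.3, 0.6):
        later = ev_give_later(1_000, years=20, annual_return=0.05, p_tai_before_then=p)
        print(f"P(TAI within 20 years) = {p:.1f}: "
              f"give now = {ev_give_now(1_000):.0f}, give later = {later:.0f}")
```

The only point of the sketch is that the comparison can flip as `p_tai_before_then` grows, which is exactly why the timeline number would matter to this decision at all.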
Why is this necessarily a problem? We have faced a non-trivial amount of x-risk in making it to the year 2022 alive, and it would have been nice if we could have relied less on luck. If there was a case for spending billions on preventing nuclear risk around 1950, ...