This is the current Metaculus forecast for AGI:
Do decisions by EAs working in other cause areas mostly ignore this?
For example, are timelines for transformative AI taken into account in work such as:
- Deciding whether to donate money now vs later
- Estimating existential risk from climate change, nuclear war, and engineered pandemics past 2043 / 2068 / 2199
- Deciding whether to pursue "move fast, break things" approaches vs slower, lower-downside-risk reform approaches
And should timelines for transformative AI be taken into account in this work?