Peter Wildeford's Shortform


If we are taking Transformative AI (TAI) to be creating a transformation at the scale of the industrial revolution ... has anyone thought about what "aligning" the actual 1760-1820 industrial revolution might've looked like or what it could've meant for someone living in 1720 to work to ensure that the 1760-1820 industrial revolution was beneficial instead of harmful to humanity?

I guess the analogy might break down though given that the industrial revolution was still well within human control but TAI might easily not be, or that TAI might involve more discrete/fast/discontinuous takeoffs whereas the industrial revolution was rather slow/continuous, or at least slow/continuous enough that we'd expect humans born in 1740 to reasonably adapt to the new change in progress without being too bewildered.

This is similar to, but I think still a bit distinct from, asking the question "what would a longtermist EA in the 1600s have done?" ...a question I still find interesting, though many EAs I know are not, probably because our time periods are just too disanalogous.

To be honest, intuiting what a human being in the 1600s would have thought about anything seems like a non-trivial endeavour. I find it hard to imagine myself without the math background I currently have. Probability had only just been invented; so had calculus. Newton had just given the world a realist, mechanical way of viewing it, though idk how many people actually thought in those terms, because the philosophical background was lacking too. Nietzsche, Hume, Wittgenstein — none of them existed yet.

One trend that may nevertheless have been foreseeable was the sudden, tremendous importance of scientists and science — in both understanding and reshaping how the world works. And the general importance of high-level abstractions, rather than just the practical engineering knowledge that existed at the time. People knew architecture and geometry, but idk how many people realised that the general-purpose theorems of geometry are actually useful — and not just whatever helps you build building #48. Today we take it as a matter of course that theorems are stated with symbols, not specifics, and that useful reasoning is symbolic and often done at a high level of abstraction. Idk if people (even scientists) had such clear intuitions then.

Some people at FHI have had random conversations about this, but I don't think any serious work has been done to address the question.