I'm a generalist senior software engineer with an affection for math, psychology, economics, and philosophy. I was hooked by 80,000 Hours a few years ago and am now pivoting my 20+ year career towards higher and higher probabilities of doing something useful for society. (Of course, even the higher probabilities are still very low compared to, say, one, but that seems to be the nature of the thing.)
I find AI Safety intellectually interesting, and I'm happy to get involved.
I've been a GNU/Linux user for a long time, and I can often solve seemingly complex problems across the systems, backend, devops, and cloud domains, as long as things run on open-source gears.
Nice article, thanks!
Maybe the closer we are to a singularity-like moment, the larger the variance in our expectations about the future becomes. That would make sense: at the singularity itself, our uncertainty about the future should be maximal, I think.
However, hopefully that moment is still a good way off. And maybe (I really hope!) we can keep it at a distance.
I was thinking about the 50%-success definition of time horizons, and its possible practical consequences.
One thing that may still be interesting is how long the AI agent itself takes to do the task. Take, for example, a task that takes a human developer 4 hours. Does the agent finish in 10 minutes, or in 8 hours? I get that this is temporary anyway and AI agents will be much faster pretty soon, but then another question is: when?
Another thing is what happens on failure. If an AI agent attempts a 4-hour-human-level task and fails, as in the other 50% of cases (or 25%, or fewer), what's next when one wants to deliver something? Have a human do it by hand and accept the "wasted" time and cost? Restart the AI agent n times, or until a cost limit is hit? Companies hire engineers in the hope that they'll have a very high success rate, and working in teams usually provides a multiplier on top of that. How does that scale with teams of AI agents?
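As a minimal back-of-the-envelope sketch of the retry question (not anything from the article): assume independent attempts with a fixed per-attempt success rate p, a fixed cost per agent attempt, and a human fallback cost if all retries fail. All the numbers below are hypothetical placeholders, not measurements.

```python
# Back-of-the-envelope model: retry an AI agent up to n times on a task,
# then fall back to a human. Assumes attempts are independent with a fixed
# per-attempt success probability p -- a strong simplification, since real
# failures on the same task are likely correlated.

def expected_cost(p: float, agent_cost: float, human_cost: float, n: int) -> float:
    """Expected total cost of 'retry up to n times, then hand off to a human'."""
    # Expected number of agent attempts: sum_{k=1..n} (1-p)^(k-1)
    expected_attempts = (1 - (1 - p) ** n) / p
    # Probability that all n attempts fail, triggering the human fallback
    p_all_fail = (1 - p) ** n
    return agent_cost * expected_attempts + p_all_fail * human_cost

if __name__ == "__main__":
    # Hypothetical numbers: a 4-hour human task costing $400 if done by hand,
    # $10 per agent attempt, 50% per-attempt success rate.
    for n in range(1, 6):
        cost = expected_cost(p=0.5, agent_cost=10.0, human_cost=400.0, n=n)
        success = 1 - 0.5 ** n
        print(f"n={n}: P(agent succeeds)={success:.3f}, expected cost=${cost:.2f}")
```

With these made-up numbers the retry strategy looks cheap, but the independence assumption is doing a lot of the work: if the agent fails for a systematic reason, extra retries on the same task may be nearly worthless.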