kokotajlod

Comments

Delegate a forecast

Oh, and to answer your question about why it's more likely to be sooner rather than later: Progress right now seems to be driven by compute, and in particular by buying greater and greater quantities of it. In a few years this trend MUST stop, because not even the US government would have enough money to continue the trend of spending an order of magnitude+ more each year. So if we haven't got to crazy AI by 2026 or so, the current paradigm of "just add more compute" will no longer be so viable, and we're back to waiting for new ideas to come along.
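
To make the arithmetic behind "this trend MUST stop" concrete, here's a toy extrapolation. The starting cost and the budget ceiling are my own illustrative assumptions, not figures from the comment:

```python
# Toy extrapolation: how long can "spend 10x more on compute each year" last?
# Starting cost and budget ceiling are illustrative assumptions, not real data.
cost = 10e6       # assume roughly $10M for a frontier training run in 2020
ceiling = 4e12    # rough scale of total annual US federal spending
growth = 10       # one more order of magnitude of spending each year

year = 2020
while cost < ceiling:
    year += 1
    cost *= growth
print(f"Spending would pass the entire US federal budget around {year}")
# With these assumptions, that's around 2026 -- hence "by 2026 or so" above.
```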

Delegate a forecast

I have a spreadsheet of different models and what timelines they imply, and how much weight I put on each model. The result is 18% by end of 2026. Then I consider various sources of evidence and update upwards to 38% by end of 2026. I think if it doesn't happen by 2026 or so it'll probably take a while longer, so my median is around 2040.
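
For concreteness, the spreadsheet logic is just a weighted average over models; here's a minimal sketch, with made-up model names, probabilities, and weights chosen only to reproduce the 18% figure:

```python
# Minimal sketch of the spreadsheet: each timelines model outputs
# P(crazy AI by end of 2026), and I take a weighted average.
# Entries below are hypothetical placeholders, not the real spreadsheet.
models = {
    "compute-anchored (flat over OOMs)": (0.20, 0.5),  # (p_by_2026, weight)
    "trend extrapolation":               (0.18, 0.3),
    "outside view / base rates":         (0.13, 0.2),
}
total_weight = sum(w for _, w in models.values())
p = sum(prob * w for prob, w in models.values()) / total_weight
print(f"Weighted P(by end of 2026) = {p:.0%}")  # 18% with these numbers
```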

The most highly weighted model in my spreadsheet takes compute to be the main driver of progress and uses a flat distribution over orders of magnitude of compute. Since it's implausible that the flat distribution should extend more than 18 or so OOMs from where we are now, and since we are going to get 3-5 more OOMs in the next five years, that yields roughly 20%.
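
Spelled out as a worked equation (picking 3.6 OOMs, a point inside the stated 3-5 range, so the arithmetic comes out exactly):

$$P(\text{crazy AI in next 5 years}) \approx \frac{\text{OOMs of compute gained}}{\text{width of flat distribution}} = \frac{3.6}{18} = 0.20$$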

The biggest upward update from the bits of evidence comes from the trends embodied in transformers (e.g. GPT-3) and also to some extent in AlphaGo, AlphaZero, and MuZero: strip out all that human knowledge and specialized architecture, just make a fairly simple neural net and make it huge, and it does better and better the bigger you make it.
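
The pattern being described is the power-law scaling of loss with model size; here's a minimal sketch of that shape (the constants are invented for illustration, though the exponent is in the ballpark of published scaling-law fits):

```python
# Hedged illustration of the scaling trend described above: loss falls
# smoothly as a power law in parameter count. Constants are invented.
import numpy as np

params = np.logspace(6, 12, 7)   # 1M to 1T parameters
loss = 10 * params ** -0.076     # hypothetical power-law fit
for n, l in zip(params, loss):
    print(f"{n:>16,.0f} params -> loss {l:.2f}")
# Bigger model, lower loss, with no new ideas needed -- just scale.
```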

Another big update upward is... well, just read this comment. It did not give me a new picture of what was going on, but rather confirmed the picture I already had. The fact that it is so highly upvoted and so little objected to suggests that the same goes for lots of people in the community. Now there's common knowledge.

Delegate a forecast

Thanks! It's about what I expected, I guess, but different from my own view (I've got more weight on much shorter timelines). It's encouraging to hear though!

Delegate a forecast

Thanks! Yes it is. All I had been doing was looking at that passport backlog, but I hadn't made a model based on it. It's discouraging to see so much probability mass on December, but not too surprising...

Delegate a forecast

What is the probability that my baby daughter's US passport application will be rejected on account of an inadequate photo?

Evidence: The photo looked acceptable to me but my wife, who thought a lot more about it, judged it to be overexposed. It wasn't quite as bad as the examples of overexposure given on the website, but in her opinion it was too close for comfort.

Evidence: The lady at the post office said the photo was fine, but she was rude to us and in a hurry. For example, she stapled it to our application and hustled us through the rest of the process, and we were too shy and indecisive to stop her.

Delegate a forecast

When will my daughter's passport arrive? (We are US citizens, applied by mail two weeks ago, application received last week)

Delegate a forecast

When will there be an AI that can play random computer games from some very large and diverse set (say, a representative sample of Steam) that didn't appear in its training data, and do about as well as a casual human player trying the game for the first time?

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

OK, thanks. Not sure I can pull it off; that was just a toy example. Probably even my best arguments would have a smaller impact than a factor of three, at least when averaged across the whole community.

I agree with your explanation of the ways this would improve things... I guess I'm just concerned about opportunity costs.

Like, it seems to me that a tripling of credence in Sudden Emergence shouldn't change what people do by more than, say, 10%. When you factor in tractability, neglectedness, personal fit, doing things that are beneficial under both Sudden Emergence and non-Sudden Emergence, etc., a factor of 3 in the probability of Sudden Emergence probably won't change the bottom line for what 90% of people should be doing with their time. For example, I'm currently working on acausal trade stuff, and I think that if my credence in Sudden Emergence decreased by a factor of 3 I'd still keep doing what I'm doing.
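
Here's a toy expected-value sketch of the claim in the previous paragraph; every number is invented purely to illustrate how tractability, fit, and robustness-to-both-scenarios can swamp a 3x credence change:

```python
# Toy model: does tripling P(Sudden Emergence) from 10% to 30% flip
# which project looks best? All inputs are made up for illustration.
def score(p_sudden, v_sudden, v_gradual, tractability, fit):
    return tractability * fit * (p_sudden * v_sudden + (1 - p_sudden) * v_gradual)

for p in (0.1, 0.3):  # before and after a 3x update
    sudden_focused = score(p, v_sudden=10, v_gradual=2, tractability=0.3, fit=0.5)
    robust_project = score(p, v_sudden=4,  v_gradual=5, tractability=0.6, fit=0.9)
    print(f"p={p}: sudden-focused={sudden_focused:.2f}, robust={robust_project:.2f}")
# With these inputs the robust project wins at both credences, so the
# 3x update doesn't change the bottom line for what to work on.
```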

Meanwhile, I could be working on AI safety directly, or I could be working on acausal trade stuff (which I think could more plausibly lead to a more than 10% improvement in EA effort allocation than working on Sudden Emergence would, it seems to me right now).

I'm very uncertain about all this, of course.

Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post

Thanks, I'll update the text when I get access to Metaculus again (I've blocked myself from it for productivity reasons, lol).

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

You say that there hasn't been much literature arguing for Sudden Emergence (the claim that AI progress will look more like the brain-in-a-box scenario than the gradual-distributed-progress scenario). I am interested in writing some things on the topic myself, but currently think it isn't decision-relevant enough to be worth prioritizing. Can you say more about the decision-relevance of this debate?

Toy example: Suppose I write something that triples everyone's credence in Sudden Emergence. How does that change what people do, in a way that makes the world better (or worse, depending on whether Sudden Emergence is true or not)?
