Is this a good way to bet on short timelines?

OK. Good to hear. I'm surprised to hear that you think my beliefs are sufficiently different from yours; I thought your timelines views were very similar to Ajeya's, and so are mine! (Also, I've formed my current views mostly in the last six months. Had you asked me a year or two ago, I probably would have said something like a median of 20-25 years from now, which is pretty close to your median, I think. This is evidence, I think, that I could change my mind back.)

Anyhow, I won't take up any more of your time... for now! Bwahaha!  :)

Is this a good way to bet on short timelines?

Thanks, this is helpful! I'm in the middle of writing some posts laying out my reasoning... but it looks like it'll take a few more weeks at least, given how long it's taken so far.

Funnily enough, all three of the sources of skepticism you mention are things I've either already written about or am in the process of writing about. This is probably a coincidence. Here are my answers to 1, 2, and 3, or at least teasers of answers:

1. I agree, it could. But it also could not. I think a non-agent AGI would also be a big deal; in fact I think there are multiple potential AI-induced points of no return. (For example, a non-agent AGI could be retrained to be an agent, or could be a component of a larger agenty system, or could be used to research agenty systems faster, or could create a vulnerable world that ends quickly or goes insane.) I'm also working on a post arguing that the millions of years of evolution don't mean shit, and that while humans aren't blank slates, they might as well be for purposes of AI forecasting. :)

2. My model for predicting AI timelines (which I'm writing a post about) is similar to Ajeya's. I don't think it's fair to describe it as an extrapolation of current trends; rather, it constructs a reasonable prior over how much compute should be needed to get to AGI, updates on the fact that the compute used so far hasn't been enough, and then derives timelines by projecting how the price of compute will drop. (So yes, we are extrapolating compute price trends, but those seem fairly solid to extrapolate, given the many decades across which they've held fairly steady, and given that we only need to extrapolate them a few more years to get a non-trivial probability.)
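The prior-update-extrapolate recipe above can be sketched numerically. This is my own toy reconstruction, not the actual model: the prior support (10^24 to 10^40 FLOP), the ~10^25 FLOP spent on training runs so far, and the two-year compute-per-dollar doubling time are all illustrative assumptions, not claims about the real numbers.

```python
import math

# Toy sketch: prior over AGI compute, update on compute tried so far,
# then project falling compute prices. All numbers are illustrative.
OOM_LO, OOM_HI = 24.0, 40.0   # prior support, in log10(FLOP)
STEP = 0.1                    # grid resolution (tenths of an OOM)
FLOP_SO_FAR = 25.0            # log10 of compute already tried (assumed)
DOUBLING_YEARS = 2.0          # assumed compute-per-dollar doubling time

# Prior: log-uniform over orders of magnitude of training compute
grid = [OOM_LO + i * STEP for i in range(161)]   # 10^24 .. 10^40

# Update: that much compute hasn't produced AGI, so drop that mass
support = [x for x in grid if x > FLOP_SO_FAR + 1e-9]
mass = 1.0 / len(support)     # renormalized (still uniform) posterior

def affordable_oom(years):
    """log10 of FLOP affordable after `years` of price declines."""
    return FLOP_SO_FAR + (years / DOUBLING_YEARS) * math.log10(2)

def p_agi_by(years):
    """Posterior mass on compute requirements affordable by then."""
    return mass * sum(1 for x in support if x <= affordable_oom(years))

for yrs in (10, 30, 50):
    print(f"P(AGI within {yrs} years) = {p_agi_by(yrs):.2f}")
# prints 0.10, 0.30, 0.50 under these toy assumptions
```

Note that the output is driven almost entirely by the choice of prior and doubling time; the hard part of any real version of this model is justifying those.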

3. Yes, this is something that's been discussed at length. There are lots of ways things could go wrong. For example, the people who build AGI will think they can use it for something; otherwise they wouldn't have built it. By default it will be out in the world doing things. If we want it locked in a box under study (for a period long enough that it can't just wait patiently through it), we need to do lots of AI risk awareness-raising. Alternatively, AI might be good enough at persuasion to convince some of the relevant people that it is trustworthy when it isn't. This is probably easier than it sounds, given how much popular media is suffused with "But humans are actually the bad guys, keeping sentient robots as slaves!" memes. (Also, there will probably be more than one team of people and one AI; it could be dozens of AIs talking to thousands or millions of people each, with competitive pressure to give them looser and looser restrictions so they can go faster and make more money or whatever.) As for whether we'd shut it off after we catch it doing dangerous things -- well, it wouldn't do them if it thought we'd notice and shut it off. This effectively limits what it can do to further its goals, but not enough, I think.

Is this a good way to bet on short timelines?

Sorry for the delayed reply. I'm primarily interested in making these trades with people who have a similar worldview to mine, because this increases the chance that, as a result of the trade, they will start working on the things I think are most valuable. I'd be happy to talk with other people too, except that with so much inferential distance to cross, it would be more for fun than for impact. That said, maybe I'm modelling this wrong.

Yes, for no. 3 I meant after the first 5 years. Good catch.

It sounds like you might be a good fit for this sort of thing! Want to have a call to chat sometime? I'm also interested in doing no. 2 with you...


Is this a good way to bet on short timelines?

OK, thanks. FWIW I expect at least one of us to update at least slightly. Perhaps it'll be me. I'd be interested to know why you disagree -- do I come across as stubborn or hedgehoggy? If so, please don't hesitate to say so; I'd be grateful to hear it.

I might be willing to pay $4,000, especially if I could think of it as part of my donation for the year. What would you do with the money--donate it? As for time, sure, happy to wait a few months.


Is this a good way to bet on short timelines?

Thanks! Yeah, your criticism of no. 3 is correct.  As for no. 1, yeah, probably this works best for bets with people who I don't think would do this correctly absent a bet, but who would do it correctly with a bet... which is perhaps a narrow band of people! 

How high would you need for no. 2? I might do it anyway, just for the information value. :) My views on timelines haven't yet been shaped by much direct conversation with people like yourself.

Persuasion Tools: AI takeover without AGI or agency?

I'm not optimistic. From where would more reasonable voices with different biases enter social media? Almost the whole world is already on it.

Can we convince people to work on AI safety without convincing them about AGI happening this century?

That said, I think I have a good shot at convincing people that there's a significant chance of AGI this century.

Can we convince people to work on AI safety without convincing them about AGI happening this century?

If I were to try to convince someone to work on AI safety without convincing them that AGI will happen this century, I'd say things like:

  1. While it may not happen this century, it might.
  2. While it may not happen this century, it'll probably happen eventually.
  3. It's extremely important; it's an x-risk.
  4. We are currently woefully underprepared for it.
  5. It's going to take a lot of research and policy work to plan for it, work which won't be done by default.
  6. Currently very few people are doing this work. (For example, I've heard there are more academic papers published on dung beetles than on human extinction, and AI risk is even more niche -- though I may be remembering the example wrong.)
  7. There are other big problems, like climate change, nuclear war, etc. but these are both less likely to cause x-risk and also much less neglected.

Persuasion Tools: AI takeover without AGI or agency?

Maybe, I don't know. I have heard people say that the printing press helped cause the religious wars that tore Europe apart; it probably helped cause the American Revolution too, which may have been a bad thing. As for radio, I've heard people say it contributed to the rise of fascism and communism, helped enable the genocide in Darfur, etc. Of course, maybe these things had good effects that outweighed their bad effects -- I have no idea, really.

I think my overall concern is that the slow process of cultural debate is not, on the whole, truth-oriented. Science seems to be truth-oriented overall, and of course the stock market and the business world are; maybe on some level military strategy is too, and sports betting certainly is. But religion and politics don't seem to be.

What quotes do you find most inspire you to use your resources (effectively) to help others?

Yeah, though it's of course heavily inspired by things people say on LessWrong. Thanks! It was one of my wedding vows.
