I was mildly disappointed in the responses to my last question, so I did a bit of thinking and came up with some answers myself. I'm not super happy with them either, and would love feedback on them + variations, new suggestions, etc. The ideas are:
1. I approach someone who has longer timelines than me, and I say: I'll give you $X now if you promise to publicly admit I was right later and give me a platform, assuming you later come to update significantly in my direction.
2. I approach someone who has longer timelines than me, and I say: I'll give you $X now if you agree to talk with me about timelines for 2 hours. I'd like to hear your reasons, and to hear your responses to mine. The talk will be recorded. Then I get one coupon which I can redeem for another 2-hour conversation with you.
3. I approach someone who has longer timelines than me, and I say: For the next 5 years, you agree to put x% of your work-hours into projects of my choosing (perhaps with some constraints, like personal fit). Then, for the rest of my life, I'll put x% of my work-hours into projects of your choosing (with similar constraints, etc.).
The problem with no. 3 is that it's probably too big an ask. Maybe it would work with someone I already get along well with and can collaborate with on projects, whose work I respect and who respects mine.
The point of no. 2 is to get them to update towards my position faster than they otherwise would have. This might even happen in the first two-hour conversation. (They get the same benefits from me, plus cash, so it should be pretty appealing for sufficiently large $X. Plus, I also benefit from the extra information, which might help me update towards longer timelines after our talk!) The problem is that forced two-hour conversations may not actually succeed at that goal, depending on psychology/personality.
A variant of no. 2 would simply be to challenge people to a public debate on the topic. Then the goal would be to get the audience to update.
The point of no. 1 is to get them to give me some of their status/credibility/platform, in the event that I turn out to be probably right. The problem, of course, is that it's up to them to decide whether I'm probably right, and it gives them an incentive to decide that I'm not!
Thanks, this is helpful! I'm in the middle of writing some posts laying out my reasoning... but it looks like it'll take a few more weeks at least, given how long it's taken so far.
Funnily enough, all three of the sources of skepticism you mention are things I've either already written about or am in the process of writing about. This is probably a coincidence. Here are my answers to 1, 2, and 3, or more like teasers of answers:
1. I agree, it could. But it also could not. I think a non-agent AGI would also be a big deal; in fact I think there are multiple potential AI-induced points of no return. (For example, a non-agent AGI could be retrained to be an agent, or could be a component of a larger agenty system, or could be used to research agenty systems faster, or could create a vulnerable world that ends quickly or goes insane.) I'm also working on a post arguing that the millions of years of evolution don't mean shit and that while humans aren't blank slates they might as well be for purposes of AI forecasting. :)
2. My model for predicting AI timelines (which I am working on a post about) is similar to Ajeya's. I don't think it's fair to describe it as an extrapolation of current trends; rather, it constructs a reasonable prior over how much compute should be needed to get to AGI, then updates on the fact that the compute we've used so far hasn't been enough, and then derives timelines by projecting how the price of compute will drop. (So yeah, we are extrapolating compute price trends, but those seem relatively safe to extrapolate, given the many decades across which they've held steady, and given that we only need to extrapolate them for a few more years to get a non-trivial probability.) There's a toy sketch of this structure below, after this list.
3. Yes, this is something that's been discussed at length. There are lots of ways things could go wrong. For example, the people who build AGI will be thinking that they can use it for something, otherwise they wouldn't have built it. By default it will be out in the world doing things; if we want it to be locked in a box under study (for a long period of time that it can't just wait patiently through), we need to do lots of AI risk awareness-raising. Alternatively, AI might be good enough at persuasion to convince some of the relevant people that it is trustworthy when it isn't. This is probably easier than it sounds, given how much popular media is suffused with "But humans are actually the bad guys, keeping sentient robots as slaves!" memes. (Also because there probably will be more than one team of people and one AI; it could be dozens of AIs talking to thousands or millions of people each. With competitive pressure to give them looser and looser restrictions so they can go faster and make more money or whatever.) As for whether we'd shut it off after we catch it doing dangerous things -- well, it wouldn't do them if it thought we'd notice and shut it off. This effectively limits what it can do to further its goals, but not enough, I think.
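For concreteness, here is a minimal sketch of the compute-prior structure described in point 2 above. It is not Ajeya's actual model or mine; all the numbers (the prior's center and spread, the ~1e24 FLOP "compute so far" cutoff, the growth rate) are placeholders I'm assuming purely to show the three steps: a prior over required compute, an update on compute-to-date not being enough, and an extrapolation of affordable compute.

```python
# Toy sketch of a compute-prior timelines model: prior over required training
# compute, update on "compute so far wasn't enough", extrapolate compute growth.
# All numerical values are illustrative placeholders, not real estimates.
import numpy as np

rng = np.random.default_rng(0)

# 1. Prior over log10(training FLOP needed for AGI) -- wide, normal in log space.
log10_flop_needed = rng.normal(loc=32.0, scale=4.0, size=200_000)

# 2. Update on the observation that compute used to date (say ~1e24 FLOP as of
#    2020, a placeholder) has not been enough: discard the ruled-out mass.
log10_flop_2020 = 24.0
posterior = log10_flop_needed[log10_flop_needed > log10_flop_2020]

# 3. Extrapolate affordable training compute, assuming (as a placeholder) that
#    the largest affordable run grows by half an order of magnitude per year
#    as compute prices fall and spending rises.
orders_of_magnitude_per_year = 0.5
arrival_year = 2020 + (posterior - log10_flop_2020) / orders_of_magnitude_per_year

# 4. Read off the implied timeline distribution.
for year in (2030, 2040, 2050):
    print(f"P(enough compute by {year}) ~= {np.mean(arrival_year <= year):.0%}")
```

The point of the sketch is only the shape of the argument: the truncation in step 2 is the "hasn't been enough yet" update, and the output probabilities are driven almost entirely by where the prior puts its mass relative to how fast affordable compute grows.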