I was mildly disappointed in the responses to my last question, so I did a bit of thinking and came up with some answers myself. I'm not super happy with them either, and would love feedback on them + variations, new suggestions, etc. The ideas are:
1. I approach someone who has longer timelines than me, and I say: I'll give you $X now if you promise to publicly admit I was right later and give me a platform, assuming you later come to update significantly in my direction.
2. I approach someone who has longer timelines than me, and I say: I'll give you $X now if you agree to talk with me about timelines for 2 hours. I'd like to hear your reasons, and to hear your responses to mine. The talk will be recorded. Then I get one coupon which I can redeem for another 2-hour conversation with you.
3. I approach someone who has longer timelines than me, and I say: For the next 5 years, you agree to put x% of your work-hours into projects of my choosing (perhaps with some constraints, like personal fit). Then, for the rest of my life, I'll put x% of my work-hours into projects of your choosing (constraints, etc.).
The problem with no. 3 is that it's probably too big an ask. Maybe it would work with someone I already get along well with and could collaborate with on projects, whose work I respect and who respects mine.
The point of no. 2 is to get them to update towards my position faster than they otherwise would have. This might even happen in the first 2-hour conversation. (They get the same benefits from me, plus cash, so it should be pretty appealing for sufficiently large X. Plus, I also benefit from the extra information, which might help me update towards longer timelines after our talk!) The problem is that forced 2-hour conversations may not actually succeed at that goal, depending on psychology/personality.
A variant of no. 2 would simply be to challenge people to a public debate on the topic. Then the goal would be to get the audience to update.
The point of no. 1 is to get them to give me some of their status/credibility/platform, in the event that I turn out to be probably right. The problem, of course, is that it's up to them to decide whether I'm probably right, and it gives them an incentive to decide that I'm not!
I think I'd need to read more before we could have a very productive conversation. If you want to point me to some of the writing you found most persuasive for short timelines (or you could write a post laying out your reasoning, if you haven't already; that could prompt more useful community discussion, too), that would be helpful. I don't want to commit to anything yet, though. I'm also not that well-read on AI safety in general.
I guess a few sources of skepticism I have now are:
Training an agent to be generally competent in interactions with humans and our systems (even virtually, and not just in conversation) could be too slow, or could require more complex simulated data than is feasible. Maybe a new version of GPT will be an AGI but not an agent; that might come soon, and while it could still be very impactful, it might not pose an existential risk. Animals, as RL agents, have had millions of years of evolution to build strong priors, fitted to real-world environments, into each individual.
I'm just skeptical about trying to extrapolate current trends to AGI.
On AI risk more generally, I'm skeptical that an AI could acquire and keep enough resources to be very dangerous without the backing of people who already have access to those resources. It would have to deceive us at least until it's too late for us to cut its access, e.g. by cutting the power or internet, which we can do physically, including by bombing (and I haven't heard of such a scenario that wasn't far-fetched). If we do catch it doing something dangerous, we will cut its access. It would need access to powerful weapons to protect its access to resources, or to do much harm before we could cut that access. This seems kind of obvious, though, so I imagine the AI safety community has responses to it.