To make things more specific:
A lot of money = $1B+; a lot of power = CEO of a $10B+ org; a lot of influence = 1M+ followers, or being an advisor to someone with a lot of money or power.
AI timelines = time until an AI-mediated existential catastrophe.
Very short = ≥ 10% chance of it happening in ≤ 2 years.
Please don’t use this space to argue that AI x-risk isn’t possible/likely, or that timelines aren’t that short. There are plenty of other places to do that. I want to know what you would do conditional on being in this scenario, not whether you think the scenario is likely.
Imagine it's just the standard AGI scenario where the world ends "by accident": the people making the AI fail to heed the standard risks or solve the Control Problem (as outlined in books like Human Compatible and Superintelligence) in a bid to be first to make AGI, perhaps because of economic incentives, or perhaps for your ** scenario. I imagine it will also be hard to know exactly who the actors are, but you could have some ideas (e.g. the leading AI companies, certain governments, etc.).