If investors with $1T thought AGI was coming soon, and therefore tried to buy up a portfolio of semiconductor, cloud, and AI companies (a much more profitable and capital-efficient strategy than betting on real interest rates), they could only buy a small fraction of those industries at current prices. There is a larger pool of investors who would sell at prices well above current levels, balancing that minority.

Yes, pricing is weighted by capital and views on asset prices, but a small portion of the relevant capital trying to trade (with risk, and years in advance) on a thesis affecting many trillions of dollars of market cap isn't enough to drastically move asset prices against the counter-trades of other investors.

There is almost no discussion of AGI prospects by financial analysts, consultants, etc. (when they do mention it, they generally just say they're not going to consider it). E.g. they don't report probabilities that it will happen or make any estimates of the profits it would produce. Rohin is right that AGI by the 2030s is a contrarian view, and that there's likely less than $1T of investor capital that buys that view and selects investments based on it.
I, like many EAs, made a lot of money betting in prediction markets that Trump wouldn't overturn the 2020 election. The most informed investors had plenty of incentive to bet, and many did, but in the short term they were swamped by partisan 'dumb money.' The sane speculators have proportionally a bit more capital with which to correct future mispricings after that event, but not much more. AI bets have done very well over the last decade, but they're still not enough for the most informed people to become a large share of the capital pricing these assets.
They still have not published. You can email Jan Brauner and Fabienne Sandkuehler for it.
Expected lives saved and taken are both infinite, yes.
That there are particular arguments for decisions like buying bednets or eating sandwiches to have expected impacts that scale with the scope of the universe or of galactic civilizations. E.g. the more stars you think civilization will be able to colonize, or the more computation that will be harvested, the greater your estimate of the number of simulations of situations like ours (whose inhabitants will act the same as we do, so that on plausible decision theories we should think of ourselves as setting policy at least for the psychologically identical ones). So if you update to think that civilization will be able to generate 10^40 minds per star instead of 10^30, that shouldn't change the ratio of your EV estimates for x-risk reduction and bednets, since the number appears on both sides of your equations. Here's a link to another essay making related points.
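The cancellation can be made concrete with a toy calculation. The per-copy values below are made-up placeholders for illustration, not estimates from the text; the point is only that the number of copies multiplies both sides and drops out of the ratio:

```python
from fractions import Fraction

def total_ev(per_copy_value, copies):
    """Total EV when a decision is echoed across `copies` identical situations."""
    return Fraction(per_copy_value) * copies

# Hypothetical per-copy values (placeholders, chosen only for illustration):
XRISK_PER_COPY = 10**9   # per-copy value of x-risk reduction
BEDNET_PER_COPY = 1      # per-copy value of a bednet

def ev_ratio(copies):
    return total_ev(XRISK_PER_COPY, copies) / total_ev(BEDNET_PER_COPY, copies)

# Updating from 10^30 to 10^40 minds per star scales `copies` on both
# sides of the comparison, so the ratio between interventions is unchanged:
assert ev_ratio(10**30) == ev_ratio(10**40)
```

Exact rational arithmetic (`Fraction`) is used so the equality holds exactly even at astronomical scales.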
This sort of estimate is in general off by many orders of magnitude when used to compare the impact of different interventions, because it only considers paths to very large numbers for the intervention under consideration, and not for the reference interventions being compared against. For example, the expected number of lives saved by giving a bednet is also infinite: connecting to size-of-the-accessible-universe estimates, perhaps there are many simulations of situations like ours at an astronomical scale, and so our decisions will be replicated and have effects on astronomical scales. Any argument purporting to show a more-than-20-OOM difference in cost-effectiveness from astronomical waste considerations is almost always wrong for this kind of reason.
The implicit utility function in Kelly betting (log of bankroll) amounts to rejecting additive aggregation/utilitarianism. It says that doubling goodness from 100 to 200 has the same decision value as doubling from 100 billion to 200 billion, even though in the latter case the benefit conferred is a billion times greater. It also absurdly says that the loss goes to infinity as the bankroll goes to zero, so it will reject any finite benefit of any kind to avoid even an infinitesimal chance of going to zero. If you say that the world ending has infinite disutility, then of course you won't press a button with any chance of ending the world, but you'll also sacrifice everything else to push that probability downward, e.g. taking away almost everything good about the world for the last tiny slice of probability.
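Both features of log utility can be checked directly. This is a minimal sketch; the gamble's probabilities and payoffs are chosen purely for illustration:

```python
import math

def expected_log_utility(outcomes):
    """Expected log of bankroll over (probability, wealth) outcomes;
    log(0) is treated as -infinity (ruin)."""
    return sum(p * (math.log(w) if w > 0 else -math.inf)
               for p, w in outcomes)

# Doubling is worth the same under log utility at any absolute scale,
# even though the absolute benefit differs by a factor of a billion:
assert math.isclose(math.log(200) - math.log(100),
                    math.log(200e9) - math.log(100e9))

# Any gamble with a nonzero chance of ruin gets expected log utility
# of -infinity, no matter how large or likely the upside:
gamble = [(0.999999, 10**12), (0.000001, 0)]
assert expected_log_utility(gamble) == -math.inf
```

So a log-utility agent refuses this gamble in favor of any certain positive bankroll, however small, which is the rejection of finite benefits described above.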
This is much more of a problem (and an overwhelming one) for risks/opportunities that are microscopic compared to others. Baseline asteroid/comet risk is more like 1 in a billion, which leaves enormous room for such effects; there is much less opportunity for that with 1% or 10% risks.