CarlShulman


https://www.openphilanthropy.org/research/three-key-issues-ive-changed-my-mind-about/

  1. But the stocks are the more profitable and capital-efficient investment, so that's where you see effects on market prices first (if much at all) for a given number of traders buying the investment thesis. That's the main investment I see short-timelines believers (including me) making on this basis, and it has in fact yielded a lot of excess returns since EAs started to identify it in the 2010s.
  2. I don't think anyone here is arguing against the no-trade theorem, and the point is not that prices will never be swayed by anything, but that you can have a sizable amount of money invested on the AGI thesis before it sways prices. Yes, price changes don't need to be driven by volume if no one wants to trade against them. But plenty of traders who don't buy AGI would trade against AGI-driven valuations, e.g. against the high P/E ratios that would ensue. Rohin is saying that the majority of investment capital that doesn't buy AGI won't sit on the sidelines but will trade against the AGI-driven bet, e.g. by selling assets at elevated P/E ratios. At the moment there is enough $ trading against AGI bets that market prices are not in line with AGI-bet valuations. I recognize this means the outside-view EMH heuristic of going with the side trading more $ favors no AGI, but on the object level I think the contrarian view here is right.
  3. It's just a simple illustration that you can have correct minorities that have not yet been able to grow, by profit or imitation, enough to correct prices. And the election mispricings also occurred in uncapped crypto prediction markets (although the hassle of executing very quickly there surely deterred or delayed institutional investors), which is how some made hundreds of thousands or millions of dollars there.

     

If investors with $1T thought AGI was coming soon, and therefore tried to buy up a portfolio of semiconductor, cloud, and AI companies (a much more profitable and capital-efficient strategy than betting on real interest rates), they could only buy a small fraction of those industries at current prices. There is a larger pool of investors who would sell at prices much higher than current ones, balancing that minority.
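
As a rough back-of-the-envelope illustration of the relative magnitudes (the market-cap figure below is purely illustrative, not a sourced number):

```python
# Purely illustrative, unsourced figures: the point is only that $1T of
# AGI-convinced capital is small relative to the sectors it would need to buy.
agi_capital = 1e12            # $1T hypothetically deployed on the AGI thesis
relevant_market_cap = 10e12   # assumed combined semiconductor/cloud/AI market cap

fraction_buyable = agi_capital / relevant_market_cap
print(f"Fraction of the relevant sectors $1T could buy: {fraction_buyable:.0%}")
```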

Yes, it's weighted by capital and views on asset prices, but a small portion of the relevant capital trying to trade (with risk, and years in advance) on a thesis affecting many trillions of dollars of market cap still isn't enough to drastically change asset prices against the counter-trades of other investors.

There is almost no discussion of AGI prospects by financial analysts, consultants, etc. (generally, if they mention it at all, they just say they're not going to consider it). E.g. they don't report probabilities that it will happen or make estimates of the profits it would produce.

Rohin is right that AGI by the 2030s is a contrarian view, and that there's likely less than $1T of investor capital that buys that view and selects investments based on it.

I, like many EAs, made a lot of money betting in prediction markets that Trump wouldn't overturn the 2020 election. The most informed investors had plenty of incentive to bet, and many did, but in the short term they were swamped by partisan 'dumb money.' The sane speculators have proportionally a bit more money to correct future mispricings after that event, but not much more. AI bets have done very well over the last decade, but they're still not enough for the most informed people to account for a large share of the capital pricing these assets.



 

They still have not published. You can email Jan Brauner and Fabienne Sandkuehler for it.

That there are particular arguments for decisions like bednets or eating sandwiches to have expected impacts that scale with the scale of the universe or of galactic civilizations. E.g. the more stars you think civilization will be able to colonize, or the more computation that will be harvested, the greater your estimate of the number of sims in situations like ours (who will act the same as we do, so that on plausible decision theories we should think of ourselves as setting policy at least for the psychologically identical ones). So if you update to think that civilization will be able to generate 10^40 minds per star instead of 10^30, that shouldn't change the ratio of your EV estimates for x-risk reduction and bednets, since the number appears on both sides of your equations. Here's a link to another essay making related points.
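
A minimal sketch of why the scale factor cancels, using made-up placeholder per-copy values (arbitrary units; only their ratio matters):

```python
from fractions import Fraction

def expected_value(per_copy_value, copies):
    # Total EV scales linearly with the number of copies of situations like
    # ours (e.g. simulations) whose decisions mirror this one.
    return per_copy_value * copies

# Placeholder per-copy values; nothing hinges on these particular numbers.
bednet_per_copy = Fraction(1)
xrisk_per_copy = Fraction(1000)

for minds_per_star in (10**30, 10**40):   # the update discussed above
    copies = minds_per_star               # copies scale with total computation
    ratio = expected_value(xrisk_per_copy, copies) / expected_value(bednet_per_copy, copies)
    print(minds_per_star, ratio)          # ratio is 1000 in both cases: the factor cancels
```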

This sort of estimate is in general off by many orders of magnitude for thinking about the ratio of impact between different interventions when it only considers paths to very large numbers for the intervention under consideration, and not for the reference interventions it is being compared against. For example, the expected number of lives saved by giving a bednet is infinite: connecting to size-of-the-accessible-universe estimates, perhaps there are many simulations of situations like ours at astronomical scale, and so our decisions will be replicated and have effects at astronomical scales.

Any argument purporting to show <20 OOM in cost-effectiveness from astronomical waste considerations is almost always wrong for this kind of reason.

The implicit utility function in Kelly (log of bankroll) amounts to rejecting additive aggregation/utilitarianism. It says that doubling goodness from 100 to 200 has the same decision value as doubling it from 100 billion to 200 billion, even though in the latter case the benefit conferred is a billion times greater.

It also absurdly says that the loss goes to infinity as you go to zero. So it will reject any finite benefit of any kind to prevent even an infinitesimal chance of going to zero. If you say that the world ending has infinite disutility, then of course you won't press a button with any chance of the end of the world, but you'll also sacrifice everything else to increment that probability downward, e.g. taking away almost everything good about the world for the last tiny slice of probability.
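
A small numeric check of both properties of log utility (nothing beyond the log function itself is assumed here):

```python
import math

# Doubling the bankroll from 100 to 200 and from 100 billion to 200 billion
# yields the same log-utility gain, even though the absolute benefit differs
# by a factor of a billion.
gain_small = math.log(200) - math.log(100)
gain_large = math.log(200e9) - math.log(100e9)
print(gain_small, gain_large)   # both ≈ log(2) ≈ 0.693

# And log utility diverges to -infinity as the bankroll approaches zero, so
# any finite gain is rejected to avoid even a tiny chance of hitting zero.
for bankroll in (1e-3, 1e-6, 1e-12):
    print(bankroll, math.log(bankroll))
```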
 
 

This is much more of a problem (and an overwhelming one) for risks/opportunities that are microscopic compared to others. Baseline asteroid/comet risk is more like 1 in a billion. Much less opportunity for that with 1% or 10% risks.
