There's an important distinction here between predicting the next token in a piece of text and predicting the next action in a causal chain. If you have a computation that is represented by a causal graph, and you train a predictor to predict nodes conditional on previous nodes, then it's true that the predictor won't end up being able to do better than the original computational process. But text is not ordered that way! Texts often describe outcomes before describing the details of the events which generated them. If you train on texts like those, you get something more powerful than an imitator. If you train a good enough next-token predictor on chess games where the winner is mentioned before the list of moves, you can get superhuman play by prepending "This is a game which white/black wins:". If you train a good enough next-token predictor on texts that have the outputs of circuits listed before the inputs, you get an NP-oracle. You're almost certainly not going to get an NP-oracle from GPT-9, but that's because of the limitations of the training processes and architectures that this universe can support, not because of a limitation of the loss function.
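To make the outcome-conditioning point concrete, here's a minimal sketch using a toy count-based next-token model. The corpus, the move strings, and the `greedy_continuation` helper are all hypothetical illustrations of the idea, not a real chess predictor: because the winner token precedes the moves in the training data, conditioning generation on that token steers the model toward games with that outcome.

```python
from collections import defaultdict

# Toy corpus (hypothetical): each "game" lists the winner BEFORE the moves.
corpus = [
    ("white wins", ["e4", "e5", "Bc4", "Nc6", "Qh5", "Nf6", "Qxf7"]),
    ("white wins", ["e4", "e5", "Qh5", "Nc6", "Bc4", "Nf6", "Qxf7"]),
    ("black wins", ["f3", "e5", "g4", "Qh4"]),
]

# "Train" a next-token predictor: count next moves given (outcome, context).
model = defaultdict(lambda: defaultdict(int))
for outcome, moves in corpus:
    context = (outcome,)
    for move in moves:
        model[context][move] += 1
        context = context + (move,)

def greedy_continuation(outcome):
    """Generate the most likely game conditioned on a prepended outcome token."""
    context = (outcome,)
    moves = []
    while context in model:
        move = max(model[context], key=model[context].get)
        moves.append(move)
        context = context + (move,)
    return moves

# Conditioning on the outcome token selects a line that achieves that outcome.
print(greedy_continuation("black wins"))  # → ['f3', 'e5', 'g4', 'Qh4']
```

The predictor never "plays better than its training data" in the causal-ordering sense; it just exploits the fact that the outcome appears before the moves, so conditioning on it filters for winning continuations.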
It will affect the trading, and worse, it will affect the trading inconsistently, so we can't even use mathematics to subtract it
Nothing ever affects the trading consistently! It's never the case that in an important market you can just use math to decide what to bet.
The resulting percentage cannot be used as a measure of trustworthiness nor as a measure of the underlying probability of event X.
Sure it can. If you ever see a prediction market which you don't think is measuring the underlying probability of its event, you can make money from it (note that this is about manipulating whether the event will happen; obviously, if the market might be misresolved, all bets are off). It's provable that, no matter what manipulation or other meta-dependencies exist, there's always some calibrated probability the market can settle at. If a manipulator has complete control over whether an event will happen and will manipulate it to maximize its potential profit, the market will settle at or fluctuate slightly around 50%, and the event thus in fact has a 50% chance of happening. If you give me any other manipulation scenario, I can similarly show how the market will settle at a calibrated probability. Manipulation is bad because it creates bad incentives outside the market, and because it usually increases the entropy of events (bringing probabilities closer to 50%; this will always be the case if manipulation in either direction is equally cheap), but I don't think it can threaten the calibration of a market.
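Here's a minimal sketch of the full-control case (the `manipulator_return` function and the price grid are my own illustration, not a formal proof). A manipulator who can force either outcome buys whichever side is cheaper; their per-dollar return is equalized only at 50%, which is why that's the price where they're indifferent between the two outcomes:

```python
def manipulator_return(p):
    """Best per-dollar return for a manipulator with full control of the
    outcome, given a market price p for YES: buy a side, then force it."""
    yes_return = (1 - p) / p        # buy YES at p, force the event to happen
    no_return = p / (1 - p)         # buy NO at 1-p, force it not to happen
    return max(yes_return, no_return)

# The manipulator's guaranteed return is minimized exactly at p = 0.5; away
# from 0.5 one side is strictly cheaper to force, so traders anticipating
# manipulation push the price back, and at 0.5 the manipulator is indifferent
# between outcomes, making the event a coin flip — i.e., the price is calibrated.
prices = [i / 100 for i in range(1, 100)]
best_p = min(prices, key=manipulator_return)
print(best_p)  # → 0.5
```

Note the manipulator still profits at 0.5 (anyone who can force outcomes can always profit against counterparties); the point is only that no price other than 50% is stable, and at 50% the price matches the actual chance of the event.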
But you can manipulate prediction markets in much easier, more mundane, and legal ways.
I think my point generalizes. There's a bunch of ways to manipulate stock prices. I assume they cause some problems, but we use laws and norms to prevent the worst behavior and it ends up working pretty well. Prediction markets may face more of a problem, since I'd expect them to be easier to manipulate, but I don't think there's a qualitative difference.
Suffice to say, if you combine the fact that 1) humans can't instantly know all the new information with 2) the fact we can't know whether the market has updated because of information that we already know, new information, or people updating on the assumption that there is new information, with 3) recursive mindgames and 4) these constant 'ripples' of shifting uncertainties; you'll get so much asynchronous noise that the prediction market becomes an unreliable source.
Sometimes reality is deeply unpredictable, and in those cases prediction markets won't help. But if you think that a prediction market will be unreliable in cases where any other method is reliable, you can use that to get rich.

I think the core of what I'm trying to get across is that (modulo transaction costs) a prediction market is as reliable as any other method, and if it's not, you can correct it and/or get rich. Manipulation is bad because it changes the probability that the event happens, not because it makes prediction markets unreliable. Manipulation can make all methods of prediction work less well, but it cannot make prediction markets work less well than another method.
You don't need perfect trust, everything is probabilistic. If I can trust someone with probability greater than 90% (and there are many people for which this is the case), then my worrying about manipulation won't affect my trading much. Similarly, I'm pretty sure that there are enough people who trust me to not manipulate markets that this isn't an obstacle in getting good predictions.
I agree that if prediction markets become huge, manipulation becomes much more of a problem. Still, the stock market doesn't seem to be creating too much incentive to assassinate CEOs, so I doubt that this will prevent prediction markets from becoming very useful (pretty sure Robin Hanson makes this point in more detail somewhere, but I can't seem to find where).
I'm confused by your last paragraph. You can be calibrated without perfect information or logical omniscience. Calibration just means that markets at 60% resolve YES 60% of the time, and manipulation won't change this. If prediction markets are consistently miscalibrated, then anyone can make consistent money by correcting percentages (if markets at 60% resolve YES 80% of the time, then you can make money by betting up all the 60% markets to 80%, without having to worry about any detail of how the markets resolve).
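To spell out the arithmetic behind that arbitrage, here's a minimal sketch (the function name and the 60%/80% figures are just the hypothetical numbers from the example above):

```python
def expected_profit_per_share(price, true_rate):
    """Expected profit from buying one YES share at `price` when markets in
    this class actually resolve YES at `true_rate`."""
    # Win (1 - price) with probability true_rate; lose price otherwise.
    return true_rate * (1 - price) + (1 - true_rate) * (0 - price)

# Markets priced at 60% that resolve YES 80% of the time pay, on average,
# 20 cents per dollar-share — with no need to know anything about the events.
print(round(expected_profit_per_share(0.60, 0.80), 2))  # → 0.2
```

The expression simplifies to `true_rate - price`, so the strategy's edge vanishes exactly when prices are calibrated, which is why persistent miscalibration can't survive traders exploiting it.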
There's a ton of ways to manipulate a prediction market about yourself if the market isn't expecting you to (e.g. say publicly you won't do X and then bet on and do X; make a small bet against X then a large bet for X after the market shifts; wait until right before market closing to do X), and I don't think this one is particularly bad.
I'm not quite sure what it would mean to "solve" this. Ultimately, I expect markets will stay calibrated as investors account for these possibilities, albeit the market prices will be less informative. For markets I create about myself, I try to combat this by explicitly promising in the market description to not trade on the market, or to not manipulate the market.
I'd find this post much more valuable if it argued that some parts of the EA community were bad, rather than arguing that they're cultish. Cultish is an imperfect proxy for badness. Sure, cults are bad and something being a thing which cults do is weak evidence of its badness (see Reversed Stupidity Is Not Intelligence). Is, say, advertising EA too aggressively bad? Probably! But it is bad for specific reasons, not because it is also a thing cults do.
A particular way cultishness could be bad, which would make it directly bad for EA to be cultish, is if cults are an attractor in the space of organizations. This would mean that organizations with some properties of cults would feel pressure to gain more and more properties of cults. Still, I currently don't think this is the case, and so I think direct criticisms are much more valuable than insinuations of cultishness.
I do think 'catastrophic suffering risk' is an odd one, because it's really not intuitive that a 'catastrophic suffering risk' is less bad than a 'suffering risk'. I guess I just find it weird that something as bad as a genuine s-risk has such a pedestrian name, compared to 'existential risk', which I think is an intuitive and evocative name that gets across the level of badness pretty well.
I think what happens in my head is that 's-risk' denotes a similarity to x-risks while 'catastrophic suffering risk' denotes a similarity to catastrophic risks, making the former feel more severe than the latter, but I agree this is odd.
One quick question - when you say an s-risk creates a future with negative value, does that make it worse than an x-risk? As in, the imagined future is SO awful that the extinction of humanity would be preferable?
Yep, for me that feels like a natural place to put the bar for an s-risk.
Really great post! As a person who subscribes to hedonistic utilitarianism (modulo a decent amount of moral uncertainty), this is the most compelling criticism I've come across.
I do want to assuage a few of your worries, though. Firstly, as Richard brought up, I respect normative uncertainty enough to at least think for a long time before turning the universe into any sort of hedonium. Furthermore, I see myself as being on a joint mission with all other longtermists to bring about a glorious future, and I wouldn't defect against them by bringing about a future they would reject, even if I was completely certain in my morality.
Also, my current best-guess vision of the maximum-hedonium future doesn't look like what you've described. I agree that it will probably not look like a bunch of simulated happy people living in a way anything like people on Earth. But hedonistic utilitarianism (as I interpret it) doesn't say that "pleasure", in the way the word is commonly used, is what matters, but rather that mental states are what matter. The highest utility states are probably not base pleasures, but rich experiences. I expect that the hedonistic utility a mind can experience scales superlinearly in the computational resources used by that mind. As such, the utopia I imagine is not a bunch of isolated minds stuck repeating some base pleasure, but interconnected minds growing and becoming capable of immensely rich experiences. Nick Bostrom's Letter from Utopia is the best description of this vision that I'm aware of.
Possibly this still sounds terrible to you, and that's entirely fair. Hedonistic utilitarianism does in fact entail many weird conclusions in the limit.
It's a perfectly good question! I've done research focused on reducing s-risks, and I still don't have a perfectly clear definition for them.
I generally use the term for suffering that occurs on an astronomical scale and is enough to make the value of the future negative. So for the alien factory farming, I'd probably call it an s-risk once the suffering of the aliens outweighs the positive value from other future beings. If it was significant, but didn't rise to that level, I'd call it something like 'catastrophic suffering risk'. 'Astronomical waste' is also a term that works, though I usually use that for positive things we fail to do, rather than negative things we do.
Overall, I wouldn't worry too much. There isn't standard terminology for 'undefined amount of suffering that deserves consideration', and you should be fine using whatever terms seem best to you as long as you're clear what you mean by them. The demarcation between existential and merely catastrophic risks is important, because there is a sharp discontinuity once a risk becomes so severe that we can never recover from it. There isn't anything like that with s-risks; a risk that falls just under the bar for being an s-risk should be treated the same as a risk that just passes it.
I hope that answered your question! I'd be happy to clarify if any of that was unclear, or if you have further questions.
"no other literal X-risk" seems too strong. There are certainly some potential ways that nuclear war or a bioweapon could cause human extinction. They're not just catastrophic risks.
In addition, catastrophic risks don't just involve massive immediate suffering. They drastically change global circumstances in a way which will have knock-on effects on whether, when, and how we build AGI.
All that said, I directionally agree with you, and I think that probably all longtermists should have a model of the effects their work has on the potentiality of aligned AGI, and that they should seriously consider switching to working more directly on AI, even if their competencies appear to lie elsewhere. I just think that your post takes this point too far.
I think this is a bit too strong of a claim. It is true that the overwhelming majority of value in the future is determined by whether, when, and how we build AGI. I think it is also true that a longtermist trying to maximize impact should, in some sense, be doing something which affects whether, when, or how we build AGI.
However, I think your post is too dismissive of working on other existential risks. Reducing the chance that we all die before building AGI increases the chance that we build AGI. While there probably won't be a nuclear war before AGI, it is quite possible that a person very well-suited to working on nuclear issues could reduce x-risk more by working on nuclear x-risk than by working more directly on AI.