tobycrisford

142 · Joined Oct 2018

Comments (39)

Apologies, I misunderstood a fundamental aspect of what you're doing! For some reason in my head you'd picked a set of conjectures which had just been posited this year, and were seeing how Laplace's rule of succession would perform when using it to extrapolate forward with no historical input.

I don't know where I got this wrong impression from, because you state very clearly what you're doing in the first sentence of your post. I should have read it more carefully before making the bold claims in my last comment. I actually even had a go at stating the terms of the bet I suggested before quickly realising what I'd missed and retracting. But if you want to hold me to it you can (I might be interpreting the forum wrong but I think you can still see the deleted comment?)

I'm not embarrassed by my original concern about the dimensions, but your reply addressed it nicely, and I can see it likely doesn't make a huge difference here whether you take a year or a month, at least as long as the conjecture was posited a good number of years ago (in the limit where "trial period"/"time since posited" goes to zero, you presumably recover the timeless result you referenced).
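To convince myself of this, here is a minimal sketch (in Python rather than your R, and assuming each elapsed year or month since the conjecture was posited counts as one failed Laplace trial, which may not be exactly how your script works) comparing a yearly with a monthly trial period:

```python
# Laplace's rule of succession: after n failed trials and no successes,
# P(success on the next trial) = 1 / (n + 2).

def prob_proof_within(years_open, horizon_years, periods_per_year):
    """P(proof within horizon_years), treating each elapsed period as one failed trial."""
    failures = years_open * periods_per_year          # failed trials so far
    p_unproven = 1.0
    for k in range(horizon_years * periods_per_year):
        p_unproven *= 1.0 - 1.0 / (failures + k + 2)  # rule of succession
    return 1.0 - p_unproven

for years_open in (1, 10, 100):
    p_year = prob_proof_within(years_open, 10, periods_per_year=1)
    p_month = prob_proof_within(years_open, 10, periods_per_year=12)
    print(f"open {years_open:>3} years: P(proof within 10 years) = "
          f"{p_year:.3f} with yearly trials vs {p_month:.3f} with monthly trials")
```

With a long history the two trial periods give nearly the same answer; for a recently posited conjecture they diverge noticeably, which is where my dimensional worry would have applied.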

New EA forum suggestion: you should be able to disagree with your own comments.

Edit: This comment is wrong and I'm now very embarrassed by it. It was based on a misunderstanding of what NunoSempere is doing that would have been resolved by a more careful read of the first sentence of the forum post!

Thank you for the link to the timeless version, that is nice! 

But I don't agree with your argument that this issue is moot in practice. I think you should repeat your R analysis with months instead of years, and see how your predicted percentiles change. I predict they will all be precisely 12 times smaller (willing to bet a small amount on this).

This follows from dimensional analysis. How does the R script know what a year is? Only because you picked a year as your trial. If you repeat your analysis using a month as a trial attempt, your predicted mean proof time will then be X months instead of X years (i.e. 12 times smaller).

The same goes for any other dimensionful quantity you've computed, like the percentiles.
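As a toy illustration of that scaling (and under this comment's mistaken assumption, flagged in the edit above, that no historical record enters the analysis), every percentile of the Laplace proof-time distribution comes out as a fixed number of trials, whatever length a trial is:

```python
# With no history, the very first trial period is the first Laplace trial, so
# P(still unproven after k trials) = (1/2)(2/3)...(k/(k+1)) = 1/(k+1).

def percentile_in_trials(q):
    """Smallest k such that P(proof within k trials) = 1 - 1/(k + 1) >= q."""
    k = 0
    while 1.0 - 1.0 / (k + 1) < q:
        k += 1
    return k

for q in (0.5, 0.9, 0.99):
    k = percentile_in_trials(q)
    print(f"{q:.0%} percentile: {k} trial(s) -> {k} year(s) with yearly trials, "
          f"{k} month(s) with monthly trials")
```

The 90th percentile, for instance, is 9 trials either way: 9 years with yearly trials, but only 9 months with monthly ones.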

You could try to apply the linked timeless version instead, although I think you'd find you run into insurmountable regularization problems around t=0, for exactly the same reasons. You can't get something dimensionful out of something dimensionless. The analysis doesn't know what a second is. The timeless version works when applied retrospectively, but it won't work predicting forward from scratch like you're trying to do here, unless you use some kind of prior to set a time-scale.

I'm confused about the methodology here. Laplace's law of succession seems dimensionless. How do you get something with units of 'years' out of it? Couldn't you just as easily have looked at the probability of the conjecture being proven on a given day, or month, or Martian year, and come up with a different distribution?

I'm also confused about what this experiment will tell us about the utility of Laplace's law outside of the realm of mathematical conjectures. If you used the same logic to estimate human life expectancy, for example, it would clearly be very wrong. If Laplace's rule has a hope of being useful, it seems it would only be after taking some kind of average performance over a variety of different domains. I don't think its usefulness in one particular domain should tell us very much.
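To spell out the life-expectancy example with my own rough numbers (treating each year lived as one failed 'death trial'):

```python
# Laplace's rule applied naively to a 90-year-old's survival.
years_lived = 90
p_die_next_year = 1 / (years_lived + 2)                           # rule of succession
p_alive_in_30_years = (years_lived + 1) / (years_lived + 30 + 1)  # telescoped product
print(p_die_next_year)      # ~0.011: about a 1% chance a 90-year-old dies this year
print(p_alive_in_30_years)  # ~0.752: about a 75% chance they reach 120
```

Both outputs are clearly very wrong, which is why I doubt that performance in one domain tells us much about the rule's usefulness elsewhere.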

Thanks for the comment! I have quite a few thoughts on that:

First, the intention of this post was to criticize strong longtermism by showing that it has some seemingly ridiculous implications. So in that sense, I completely agree that the sentence you picked out has some weird edge cases. That's exactly the claim I wanted to make! I also want to claim that you can't reject these weird edge cases without also rejecting the core logic of strong longtermism that tells us to give enormous priority to longterm considerations.

The second thing to say though is that I wanted to exclude infinite value cases from the discussion, and I think both of your examples probably come under that. The reason for this is not that infinite value cases are not also problematic for strong longtermism (they really are!) but that strong longtermists have already adapted their point of view in light of this. In Nick Beckstead's thesis, he says that in infinite value cases, the usual expected utility maximization framework should not apply. That's fair enough. If I want to criticize strong longtermists, I should criticize what they actually believe, not a strawman, so I stuck to examples containing very large (but finite) value in this post.

The third and final thought I have is a specific comment on your quantum multiverse case. If every possible decision gets made in some branch, does that really mean that none of our decisions have any relevance? This seems like a fundamentally different type of argument to the Pascal's wager-type arguments that this post relates to, in that I think this objection would apply to any decision framework, not just EV maximization. If you're going to make all the decisions anyway, why does any decision matter? But you still might make the right decision on more branches than you make the wrong decision, and so my feeling is that this objection has no more force than the objection that in a deterministic universe, none of our decisions have relevance because the outcome is pre-determined. I don't think determinism should be problematic for decision theory, so I don't think the many-worlds interpretation of quantum mechanics should be either.

Thanks! Very related. Is there somewhere in the comments that describes precisely the same issue? If so I'll link it in the text.

I tried to describe some possible examples in the post. Maybe strong longtermists should have less trust in scientific consensus, since they should act as if the scientific consensus is wrong on some fundamental issues (e.g. the second law of thermodynamics, or the prohibition on faster-than-light travel). Although I think you could make a good argument that this goes too far.

I think the example about humanity's ability to coordinate might be more decision-relevant. If you need to act as if humanity will be able to overcome global challenges and spread through the galaxy, given the chance, then I think that is going to have relevance for the prioritisation of different existential risks. You will overestimate humanity's ability to coordinate, relative to the estimate you would make without that conditioning, and that might lead you to, say, be less worried about climate change.

I agree that it makes this post much less convincing that I can't describe a clear cut example though. Possibly that's a reason to not be as worried about this issue. But to me, the fact that "allows for a strong future" should almost always dominate "probably true" as a principle for choosing between beliefs to adopt, intuitively feels like it must be decision-relevant.

This seems like an odd post to me. Your headline argument is that you think SBF made an honest mistake, rather than wilfully misusing his users' funds, and most commenters seem to be reacting to that claim. The claim seems likely wrong to me, but if you honestly believe it then I'm glad you're sharing it and that it's getting discussed.

But in your third point (and maybe your second?) you seem to be defending the idea that even if SBF wilfully misused funds, that's still ok. It was a bad bet, but we should celebrate people who take risky but positive-EV gambles, even if they strongly violate ethical norms. Is that a fair summary of what you believe, or am I misreading/misunderstanding? If it is, I think this post is very bad, and it seems very worrying that it currently has positive karma.

I am very confident that the arguments do perfectly cancel out in the sky-colour case. There is nothing philosophically confusing about the sky-colour case; it's just an application of conditional probability.

That doesn't mean we can never learn anything. It just means that if X and Y are independent after controlling for a third variable Z, then learning X can give you no additional information about Y if you already know Z. That's true in general. Here X is the colour of the sky, Y is the probability of a catastrophic event occurring, and Z is the number of times the catastrophic event has occurred in the past.
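Here is a toy illustration of that general point, with made-up numbers rather than the actual sky-colour setup from the post: build a joint distribution in which X and Y each depend on Z but not on each other, and check that conditioning on X changes nothing once Z is known.

```python
import itertools

# Z: number of past catastrophes; X: sky colour; Y: catastrophe occurs next century.
p_z = {0: 0.7, 1: 0.3}
p_x_given_z = {0: {"blue": 0.9, "red": 0.1}, 1: {"blue": 0.4, "red": 0.6}}
p_y_given_z = {0: {True: 0.1, False: 0.9}, 1: {True: 0.5, False: 0.5}}

# Joint distribution with X and Y conditionally independent given Z.
joint = {
    (z, x, y): p_z[z] * p_x_given_z[z][x] * p_y_given_z[z][y]
    for z, x, y in itertools.product(p_z, ["blue", "red"], [True, False])
}

def cond_prob_y(y, z, x=None):
    """P(Y = y | Z = z) or, if x is given, P(Y = y | Z = z, X = x)."""
    num = sum(p for (zz, xx, yy), p in joint.items()
              if zz == z and yy == y and (x is None or xx == x))
    den = sum(p for (zz, xx, yy), p in joint.items()
              if zz == z and (x is None or xx == x))
    return num / den

print(cond_prob_y(True, z=1))           # P(Y | Z = 1)          -> 0.5
print(cond_prob_y(True, z=1, x="red"))  # P(Y | Z = 1, X = red) -> 0.5, identical
```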
 

---

In the Russian roulette example, you can only exist if the gun doesn't fire, but you can still use your existence to conclude that it is more likely that the gun won't fire (i.e. that you picked up the safer gun). The same should be true in anthropic shadow, at least in the one world case.
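To make that concrete with made-up numbers (one gun fires with probability 1/6, the other with probability 5/6, chosen by a fair coin flip):

```python
# Bayes: P(gun | survived) is proportional to P(survived | gun) * P(gun).
p_fire = {"safer gun": 1 / 6, "riskier gun": 5 / 6}
prior = {"safer gun": 0.5, "riskier gun": 0.5}

unnormalised = {g: prior[g] * (1 - p_fire[g]) for g in prior}
total = sum(unnormalised.values())
posterior = {g: p / total for g, p in unnormalised.items()}
print(posterior)  # safer gun ~0.833, riskier gun ~0.167
```

Surviving is only possible if the gun didn't fire, yet it still shifts your credence towards having picked the safer gun.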

Fine tuning is helpful to think about here too. Fine tuning can be explained anthropically, but only if a large number of worlds actually exist. If there was only one solar system, with only one planet, then the fine tuning of conditions on that planet for life would be surprising. Saying that we couldn't have existed otherwise does not explain it away (at least in my opinion, for reasons I tried to justify in the 'possible solution #1' section).

In analogy with the anthropic explanation of fine-tuning, anthropic shadow might come back if there are many observer-containing worlds. You learn less from your existence in that case, so there's not necessarily a neat cancellation of the two arguments. But I explored that potential justification for anthropic shadow in the second section, and couldn't make that work either.

I'd like to spend more time digesting this properly, but the statistics in this paragraph seem particularly shocking to me:

"For instance, Hickel et al. (2022) calculate that, each year, the Global North extracts from the South enough money to end extreme poverty 70x over. The monetary value extracted from the Global South from 1990 to 2015 - in terms of embodied labour value and material resources - outstripped aid given to the Global South by a factor of 30. "

They also seem hard to reconcile with each other. If the Global North extracts every year 70 times what it takes to end extreme poverty (for one year, or forever?), and from 1990 to 2015 the extracted value per year was only 30 times bigger than the aid given per year, then doesn't it follow that the Global North is already giving in aid more than double what is needed to end extreme poverty (either at a per-year rate, or each year it gives double what is needed to end poverty for good)? What am I missing?
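Spelling out the arithmetic that confuses me, taking both figures at face value as per-year rates over the same period:

```python
# Ratios as reported: extraction = 70 x (cost of ending extreme poverty)
#                     extraction = 30 x (aid given)
extraction_over_poverty_gap = 70
extraction_over_aid = 30

aid_over_poverty_gap = extraction_over_poverty_gap / extraction_over_aid
print(aid_over_poverty_gap)  # ~2.33: aid would already be over twice the poverty gap
```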

It can't be that the figure is 'what it would take to end extreme poverty with no extraction', because that figure would just be zero under this argument wouldn't it?
