Lukas_Finnveden

On the other hand, the critic updated me towards higher numbers on p(nuke London | any nuke). Though I assume Samotsvety have already read it, so I'm not sure how to take that into account. But given that uncertainty, given that that number only comes into play in confusing worlds where everyone's models are broken, and given Samotsvety's 5x higher unconditional number, I will update at least a bit in that direction.

Thanks for the links! (Fyi, the first two point to the same page.)

The critic's 0.3 assumes that you'll stay until there are nuclear exchanges between Russia and NATO. Zvi was at 75% if you leave as soon as a conventional war between NATO and Russia starts.

I'm not sure how to compare that situation with the current situation, where it seems more likely that the next escalatory step will be a nuke on a non-NATO target than conventional NATO-Russia warfare. But if you're happy to leave as soon as either a nuke is dropped anywhere or conventional NATO/Russia warfare breaks out, I'm inclined to aggregate those numbers to something closer to 75% than 50%.
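To make the kind of aggregation I have in mind explicit, here's a toy calculation (the path weights and the 0.8 are my own made-up illustrative numbers; the 0.75 stands in for Zvi's number):

```python
# Toy aggregation (illustrative numbers only, not anyone's considered forecast).
# Conditional on things eventually escalating towards London, which warning
# sign comes first for someone following a "leave at the first trigger" policy?
p_first_sign_is_non_nato_nuke = 0.6    # assumption: a nuke on a non-NATO target comes first
p_first_sign_is_nato_conflict = 0.4    # assumption: conventional NATO/Russia war comes first

p_escape_if_non_nato_nuke_first = 0.8   # assumption: this path gives more warning time
p_escape_if_nato_conflict_first = 0.75  # standing in for Zvi's 75%

p_escape = (p_first_sign_is_non_nato_nuke * p_escape_if_non_nato_nuke_first
            + p_first_sign_is_nato_conflict * p_escape_if_nato_conflict_first)
print(p_escape)  # 0.78 with these made-up weights, i.e. closer to 75% than to 50%
```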

Thanks for doing this!

In this Squiggle model you use "ableToEscapeBefore = 0.5". Does that assume that you're following the policy "escape if you see any tactical nuclear weapons being used in Ukraine"? (Which someone who's currently on the fence about escaping London would presumably do.)

If yes, I would have expected it to be higher than 50%. Do you think very rapid escalation is likely, or am I missing something else?
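For context on why this parameter matters so much to me: I'm assuming it enters the model multiplicatively, roughly like the sketch below (this is my guess at the structure, written in Python with placeholder numbers; only ableToEscapeBefore is taken from the actual Squiggle).

```python
# My guess at the rough shape of the calculation (placeholder inputs, not the
# actual Samotsvety numbers); only ableToEscapeBefore is taken from the Squiggle.
p_escalation_reaches_london = 0.01    # placeholder: P(escalation ends with London targeted)
ableToEscapeBefore = 0.5              # the parameter I'm asking about
p_die_if_hit_while_present = 0.5      # placeholder: P(death | London hit, still there)

p_die_in_london = (p_escalation_reaches_london
                   * (1 - ableToEscapeBefore)
                   * p_die_if_hit_while_present)
# Raising ableToEscapeBefore from 0.5 to 0.75 (what I'd naively expect for
# someone who leaves at the first tactical nuke) halves the bottom line,
# which is why I'm asking what policy the 0.5 assumes.
```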

I think this particular example requires an assumption of logarithmically diminishing returns, but it's right given that assumption.

(I think the point about roughly quadratic value of information applies more broadly than just for logarithmically diminishing returns. And I hadn't realised it before. Seems important + underappreciated!)
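Here's a minimal way to see the quadratic behaviour, under my stylised version of the setup (not necessarily the original example): a budget is split between S and L, a fraction x goes to S, returns are logarithmic in each, and p is the probability that S is the world to prepare for. Then

\[
U(x) = p \log x + (1 - p) \log(1 - x), \qquad x^* = p,
\]
\[
U(x^*) - U(x) \;\approx\; \frac{(x - p)^2}{2\,p\,(1 - p)} \quad \text{for small } |x - p|,
\]

so the loss from a misallocation, and hence the value of information that corrects it, grows roughly quadratically in the size of the shift.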

One quirk to note: if a funder (who I want to be well-informed) is 50/50 on S vs L, but my all-things-considered belief is 60/40, then I would value the first 1% they shift towards my position much more than they do (maybe 10x more?), and will put comparatively little value on shifting them all the way (i.e. the last percent, from 59% to 60%, is much less important). You can get this from a pretty similar argument to the one in the above example.

(In fact, the funder's own much greater valuation of shifting 10% than of shifting 1% can be seen as a two-step process where (i) they shift to 60/40 beliefs, and then (ii) they first get a lot of value from shifting their allocation from 50 to 51, then slightly less from shifting from 51 to 52, etc.)
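To put rough numbers on that quirk, using the same toy log-returns model as in the sketch above (my illustration, not the funder's actual utility function):

```python
import math

def U(x, p):
    # Toy expected utility of allocating fraction x to S when my probability of S is p.
    return p * math.log(x) + (1 - p) * math.log(1 - x)

p = 0.6  # my all-things-considered credence in S
first_percent = U(0.51, p) - U(0.50, p)  # funder shifts their allocation 50% -> 51%
last_percent = U(0.60, p) - U(0.59, p)   # funder shifts their allocation 59% -> 60%
print(first_percent, last_percent, first_percent / last_percent)
# ~0.0038 vs ~0.0002: the first 1% is worth roughly 18x the last 1% by my lights
```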

Based on this LessWrong GitHub file linked from the LW FAQ, I think that's right, except that weak upvotes never become worth 3 points anymore (although this doesn't matter on the EA Forum, given that no one has 25,000 karma).

Nitpicking:

A property of making directional claims like this is that MacAskill always has 50% confidence in the claim I’m making, since I’m claiming that his best-guess estimate is too high/low.

This isn't quite right. Conservation of expected evidence means that MacAskill's current probabilities should match his expectation of the ideal reasoning process. But for probabilities close to 0, this would typically imply that he assigns higher probability to being too high than to being too low. For example: a 3% probability is compatible with 90% probability that the ideal reasoning process would assign probability ~0% and a 10% probability that it would assign 30%. (Related.)
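Spelling out the arithmetic in that example (treating ~0% as 0%), the expectation does come out right:

\[
0.9 \times 0\% + 0.1 \times 30\% = 3\%.
\]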

This is especially relevant when the ideal reasoning process is something as competent as 100 people for 1000 years. Those people could make a lot of progress on the important questions (including e.g. themselves working on the relevant research agendas just to predict whether they'll succeed), so it would be unsurprising for them to end up much closer to 0% or 100% than is justifiable today.

The term "most important century" pretty directly suggests that this century is unique, and I assume that includes its unusually large amount of x-risk (given that Holden seems to think that the development of TAI is both the biggest source of x-risk this century and the reason for why this might be the most important century).

Holden also talks specifically about lock-in, which is one way the time of perils could end.

See e.g. here:

It's possible, for reasons outlined here, that whatever the main force in world events is (perhaps digital people, misaligned AI, or something else) will create highly stable civilizations with "locked in" values, which populate our entire galaxy for billions of years to come.

If enough of that "locking in" happens this century, that could make it the most important century of all time for all intelligent life in our galaxy.

I want to roughly say that if something like PASTA is developed this century, it has at least a 25% chance of being the "most important century" in the above sense.

The page for the Century Fellowship outlines some things that fellows could do, which are much broader than just university group organizing:

When assessing applications, we will primarily be evaluating the candidate rather than their planned activities, but we imagine a hypothetical Century Fellow may want to:

  • Lead or support student groups relevant to improving the long-term future at top universities
  • Develop a research agenda aimed at solving difficult technical problems in advanced deep learning models
  • Start an organization that teaches critical thinking skills to talented young people
  • Run an international contest for tools that let us trace where synthetic biological agents were first engineered
  • Conduct research on questions that could help us understand how to make the future go better
  • Establish a publishing company that makes it easier for authors to print and distribute books on important topics

Partly this comment exists just to give readers a better impression of the range of things that the Century Fellowship could be used for. For example, as far as I can tell, the fellowship is currently one of very few options for people who want to pursue fairly independent longtermist research and who want help with getting work authorization in the UK or US.

But I'm also curious if you have any comments on the extent to which you expect the Century Fellowship to take on community organizers vs researchers vs ~entrepreneurs. (Is the focus on community organizing in this post indicative, or just a consequence of the Century Fellowship being mentioned in a post that's otherwise about community organizing?)

I'm not saying it's infinite, just that (even assuming it's finite) I assign non-zero probability to different possible finite numbers in such a way that the expected value is infinite. (Just like the expected value of an infinite St. Petersburg gamble is infinite, although every outcome has finite size.)
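For concreteness, in the standard St. Petersburg setup the payoff is 2^n with probability 2^{-n}, so every individual outcome is finite while

\[
\mathbb{E}[\text{payoff}] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^n = \sum_{n=1}^{\infty} 1 = \infty.
\]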

The topic under discussion is whether Pascalian scenarios are a problem for utilitarianism, so in this discussion we do need to take them seriously.
