Harrison Durland

1814 karma · Joined Sep 2020




    I have increasingly become open to incorporating alternative decision theories, as I recognize that I cannot be entirely certain of expected-value approaches, which means that (per expected value!) I probably should not rely solely on one approach. At the same time, I am still not convinced that there is a clear, good alternative, and I repeatedly find that the arguments against using EV are not compelling (e.g., because they ignore more sophisticated ways of applying EV).

    Having grappled with the problem of EV fanaticism for a long time, in part due to the wild norms of competitive policy debate (e.g., here, here, and here), I've thought a lot about this and written many comments on the forum about it. My expectation is that this comment won't gain enough attention/interest to warrant collecting all of those instances, but my short summary is something like:

    • Fight EV fire with EV fire: Countervailing outcomes (e.g., the risk that doing X has a negative 999999999... effect) are extremely important when dealing with highly speculative estimates. Sure, someone could argue that if you don't give $20 to the random guy wearing a tinfoil hat and holding a remote which he will use to destroy 3^3^3 galaxies, there's at least a 0.000000...00001% chance he's telling the truth, but there's also a decent chance that paying him could have the opposite effect via some (perhaps hard-to-identify) alternative mechanism.
    • One should probably distinguish between extremely low (e.g., 0.00001%) estimates that result from well-understood or "objective"[1] analyses which you expect cannot be improved by further analysis or information collection (e.g., a probability you can read directly from a computer program, or a series of flips of a fair coin) vs. such estimates that result from very subjective probability judgments which you expect you will likely adjust downwards with further analysis, but where you just can't immediately rule out some sliver of uncertainty.[2]
      • Often you should recognize that when you get into small probability spaces for "subjective" questions, you are at very high risk of being swayed by random noise or by deliberate bias in argument/information selection. For example, if you've never thought about how nanotech could cause extinction and you listen to someone who gives you a sample of arguments/information in favor of the risks, you likely will not immediately know the counterarguments, and you should update downwards based on the expectation that the sample you were exposed to probably exaggerates the underlying evidence.
      • The cognitive/time costs of doing "subjective" analyses likely impose high opportunity costs (going back to the first point);
      • When your analysis is not legible to other people, you risk high reputational costs (which, again, goes back to the first point).
    • Based on the above, I agree that in some cases it may be far more efficient, as a heuristic for decision-making under analytical constraints, to simply trim off highly "subjective" risk estimates. However, I make this claim on EV grounds, with the recognition that EV is still the better general-purpose decision-making algorithm; it may just not be optimized for application under realistic constraints (e.g., other people not being familiar with your method of thinking, a short amount of time for discussion or research, error-prone brains which do not reliably handle lots of considerations and small numbers).[3]
    1. ^

      I dislike using "objective" and "subjective" to make these distinctions, but for simplicity's sake / for lack of a better alternative at the moment, I will use them.

    2. ^
    3. ^

      I advocate for something like this in competitive policy debate, since "fighting EV fire with EV fire" risks "burning the discussion," including its educational value, the reputation of participants, etc. But most deliberations do not have to be made within the artificial constraints of competitive policy debate.
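    The "fight EV fire with EV fire" point above can be sketched numerically. This is a minimal illustration with made-up probabilities and payoffs (not a claim about any real estimate): a tiny probability of an astronomical payoff can be offset by a comparable tiny probability of an astronomical harm, leaving the ordinary cost to dominate the expected value.

```python
# Illustrative numbers only: a Pascal's-mugging-style calculation,
# first one-sided, then with a countervailing negative tail added.

HUGE = 10.0**18  # stand-in for an astronomical (dis)value

def expected_value(outcomes):
    """Sum of probability * value over (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# One-sided calculation: only the speculative positive tail is counted,
# so the tiny chance of a huge payoff swamps the $20 cost.
naive = expected_value([
    (1e-12, HUGE),          # mugger is telling the truth
    (1.0 - 1e-12, -20.0),   # ordinary case: you just lose $20
])

# Two-sided calculation: a countervailing negative tail of similar
# magnitude roughly cancels the speculative term, and the ordinary
# cost dominates.
balanced = expected_value([
    (1e-12, HUGE),          # mugger is telling the truth
    (1e-12, -HUGE),         # paying has the opposite, hard-to-identify effect
    (1.0 - 2e-12, -20.0),   # ordinary case: you just lose $20
])

print(naive > 0)     # one-sided EV says "pay up"
print(balanced < 0)  # two-sided EV does not
```

    The point is not that the tails always cancel exactly, only that a one-sided speculative estimate is not automatically action-guiding once equally speculative countervailing outcomes are included.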

    Since I think substantial AI regulation will likely occur by default, I urge effective altruists to focus more on ensuring that the regulation is thoughtful and well-targeted rather than ensuring that regulation happens at all.

    I think it would be fairly valuable to see a list of case studies or otherwise create base rates for arguments like “We’re seeing lots of political gesturing and talking, so this suggests real action will happen soon.” I am still worried that the action will get delayed, watered down, and/or diverted to less-existential risks, only for the government to move on to the next crisis. But I agree that the past few weeks should be an update for many of the “government won’t do anything (useful)” pessimists (e.g., Nate Soares).

    I definitely would have preferred a TLDR or summary at the top, not the bottom. However, I appreciate your investigation into this, as I have loathed Eliezer's use of the term ever since I realized he just made it up.

    Strange; unless the original comment from Gerald has been edited since I responded, I think I must have misread most of the comment, as I thought it was making a different point (i.e., "could someone explain how misalignment could happen"). I was tired and distracted when I read it, so that wouldn't be surprising. However, the final paragraph of the comment (which I originally thought was reflected in the rest of it) still seems out of place and arrogant.

    This is a test regarding comment edit history. This comment has been edited post-publication.

    This really isn't the right post for most of those issues/questions, and most of what you mentioned are things you should be able to find via searches on the forum, searches via Google, or maybe even just asking ChatGPT to explain them to you (maybe!). TBH, your comment also comes across as quite abrasive and arrogant (especially the last paragraph), without actually appearing to be that insightful/thoughtful. But I'm not going to get into an argument on these issues.

    [This comment is no longer endorsed by its author]

    I wish! I’ve been recommending this for a while but nobody bites, and usually (always?) without explanation. I often don’t take seriously many of these attempts at “debate series” if they’re not going to address some of the basic failure modes that competitive debate addresses, e.g., recording notes in a legible/explorable way to avoid the problem of arguments getting lost under layers of argument branches.

    Hi Oisín, no worries, and thanks for clarifying! I appreciate your coverage of this topic, I just wanted to make sure there aren't misinterpretations.

    In policy spaces, this is known as the Brussels Effect; that is, when a regulation adopted in one jurisdiction ends up setting a standard followed by many others.

    I am not clear on how the Brussels Effect applies here, especially since we're not talking about manufacturing a product with high costs of running different production lines. I recognize there may be some argument/step that I'm missing, but I can't dismiss the possibility that the author doesn't actually understand what the Brussels Effect really is / normally does and is throwing it around like a buzzword. Could you please elaborate a bit more?

    I’m curious whether people (e.g., David, MIRI folk) think that LLMs now or in the near future would be able to substantially speed up this kind of theoretical safety work?
