The former. I think it should be fairly intuitive if you think about the shape of the distribution you're drawing from. Here's the code, courtesy of Claude 3.5. [edit: deleted the quote block with the code because of aesthetics, link should still work].

I think Toby's use of "evenly split" is a bit of a stretch in 2024 with the information available, but lab leak is definitely still plausible. To quote Scott in the review:

Fourth, for the first time it made me see the coronavirus as one of God’s biggest and funniest jokes. Think about it. Either a zoonotic virus crossed over to humans fifteen miles from the biggest coronavirus laboratory in the Eastern Hemisphere. Or a lab leak virus first rose to public attention right near a raccoon-dog stall in a wet market. Either way is one of the century’s biggest coincidences, designed by some cosmic joker who wanted to keep the debate [...] acrimonious for years to come.

I think lab leak is now a minority position among people who looked into it, but it's not exactly a fringe view. I would guess at least some US intelligence agencies still think lab leak is more likely than not, for example.

And, like, idk, man. 130 is pretty smart but not "famous for their public intellectual output" level smart.

Yeah "2 sds just isn't that big a deal" seems like an obvious hypothesis here ("People might overestimate how smart they are" is, of course, another likely hypothesis).

Also, of course, OP was being overly generous by assuming that it's a normal distribution centered around 128. If you take a bunch of random samples from a normal distribution and only look at subsamples whose median is 2 sds out, in approximately zero of those subsamples will it be equally likely to see +0 sds and +4 sds.
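To make that concrete, here's a minimal simulation sketch (my own illustration, not the deleted code from the comment above; the IQ-style parameters of mean 100, sd 15, the groups of 5, and the ±4-point windows are all assumptions for the example):

```python
import numpy as np

# Draw many small groups from a general population with IQ-style
# parameters N(100, 15), keep only the groups whose median lands at
# least 2 sds out (>= 130), and count how often members fall near
# +0 sds (IQ ~100) vs near +4 sds (IQ ~160).
rng = np.random.default_rng(0)
groups = rng.normal(100, 15, size=(2_000_000, 5))
selected = groups[np.median(groups, axis=1) >= 130]

members = selected.ravel()
near_plus0 = np.sum(np.abs(members - 100) < 4)  # within ~0.27 sds of +0 sds
near_plus4 = np.sum(np.abs(members - 160) < 4)  # within ~0.27 sds of +4 sds
print(f"{len(selected)} qualifying groups: "
      f"{near_plus0} members near +0 sds, {near_plus4} near +4 sds")
```

On a typical run, members near +0 sds outnumber members near +4 sds by well over an order of magnitude: the parent distribution has vastly more mass at +0 than at +4, and conditioning on a high group median doesn't erase that asymmetry.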

Thank you for the article! I've been skeptical of the general arguments for progress (from a LT perspective) for several years but never managed to articulate a model quite as simple/clear as yours. 

For instance, they might be able to lengthen our future by bringing forward the moment we develop technologies to protect us from natural threats to our survival. This is an intriguing idea, but there are challenges to getting it to work: especially since there isn’t actually much natural extinction risk for progress to reduce, whereas progress appears to be introducing larger anthropogenic risks
...
There might be other claims about the value of progress that are unaffected. For example, these considerations don’t directly undermine the argument that progress is better than stasis, and thus that if progress is fragile, we need to protect it. That may be true even if humanity does eventually bring about its own end, and even if our progress brings it about sooner.

When I've had these debates with people before, the most compelling argument for me (other than progress being good under a wide range of commonsensical assumptions + moral/epistemic uncertainty) is a combination of these arguments plus a few subtleties. It goes something like this:

  1. Progress isn't guaranteed
  2. It's not actually very plausible/viable to maintain a society that's ~flat; in practice you're either going forwards or backwards.
  3. Alternatively, people paint a Red Queen story where work towards advancement is necessary to fight the natural forces of decline. If you (speaking broadly) don't meaningfully advance, civilizational decline will set in, likely in the span of mere decades or centuries.
    1. Sometimes people point to specific issues, eg institutional rot in specific societies, or global demographic decline, which needs to be countered by either greater per-capita productivity or some other way of creating more minds (most saliently via AGI, though maybe you can also do artificial wombs or something).
  4. Regression makes humanity more vulnerable to both exogenous risks (supervolcanoes etc) and endogenous risks (superweapon wars, mass ideological capture by suicidal memes, fertility crisis)
    1. (relatedly, subarguments here about why rebuilding is not guaranteed)
  5. Therefore we need to advance society (technologically, economically, maybe in other ways) to survive, at least until the point where we have a stable society that doesn't need to keep advancing to stay safe.

This set of arguments only establishes that undifferentiated progress is better than no progress; it does not by itself directly argue against differential technological progress. However, people who ~roughly believe the above set of arguments will probably say that differential technological progress (or differential progress in general) is a good idea in theory but not realistic except for a handful of exceptions (like banning certain forms of gain-of-function research in virology). For example, they might point to Hayekian difficulties with central planning and argue that in many ways differential technological progress is even more difficult than the traditional issues that attempted central planners have faced.

On balance, I did not find their arguments convincing, but it's pretty subtle, and I no longer think the case against "undifferentiated progress is exceptionally good" is a slam dunk[1], as I did a few years ago.

  1. ^

    In absolute terms. I think there's a stronger case that marginal work on preventing x-risk is much more valuable in expectation. Though that case is not extremely robust because of similar sign-error issues that plague attempted x-risk prevention work. 

This sounds awesome at first blush, would love to see it battle-tested.

I edited my comment for clarity.

The recently released 2024 Republican platform said they'll repeal the recent White House Executive Order on AI, which many in this community thought was a necessary first step to make future AI progress more safe/secure. This seems bad.

Artificial Intelligence (AI)

We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.

From https://s3.documentcloud.org/documents/24795758/read-the-2024-republican-party-platform.pdf, see bottom of pg 9.

Hmm I guess I wouldn't be that surprised if we observed similar levels of what you call "dysfunction" in the US. Earlier you asked:

Framed differently, what would it take for you to accept the same thing for yourself -- that an organization is hundreds of times better at helping you than you are at helping yourself?

I guess the intuitive plausibility of this is rather low (or perhaps I have an overly high opinion of myself) if the problem is framed as one of my own rationality, but I can much more easily buy that there are collective action problems whose solutions would benefit "people like me" at >100x the costs.

Hi Jason, great question! You and/or potential donors and/or potential grantees can look at the marginal grants writeup[1] that Kieran and Neil put together last December. I don't think the bar has changed significantly in the last 6 months, though any AWF fund manager is free to correct me.

To answer your exact question, I don't have a quantitative sense of how much better the highlighted grants are, compared to the marginal grants. I don't think it's strictly necessary here to have a cardinal ranking, because (as you've identified) exactly where the marginal grants are matters noticeably more than the number of times the best grants are better than the marginal ones. 

  1. ^

    (Click through to the link; the headline is for a fundraising post, but the section I linked details the marginal grants.)

On LW, I thought the comments were very poor, with a few half-exceptions. It wasn't even a controversial topic!

On EAF, I pragmatically am not that interested in either starting new fights, or relitigating past ones. I will say that making my comment here solely about kindness, rather than kindness and epistemics, was a tactical decision. 
