Muireall

34 · Joined Jun 2022

Comments (7)

My thoughts on nanotechnology strategy research as an EA cause area

Note: I've edited this comment after dashing it off this morning, mainly for clarity.

Sure, that all makes sense. I'll think about spending some more time on this. In the meantime I'll just give my quick reactions:

  • On reluctance to be extremely confident—I start to worry when considerations like this dictate that one give a series of increasingly specific/conjunctive scenarios roughly the same probability. I don't expect a forum comment or blog post to get someone to such high confidence, but I don't think it's beyond reach.
  • We also have different expectations for AI, which may in the end make the difference.
  • I don't expect machine learning to help much, since the kinds of structures in question are very far out of domain, and physical simulation has some intrinsic hardness problems.
  • I don't think it's correct to say that we haven't tried yet.
  • Some of the threads I would pull on if I wanted to talk about feasibility, after a relatively recent re-skim:
    • We've done many simulations and measurements of nanoscale mechanical systems since 1992. How does Nanosystems hold up against those?
    • For example, some of the best-case bearings (e.g. multi-walled carbon nanotubes) seem to have friction worse than Drexler's numbers by orders of magnitude. Why is that?
    • Edges also seem to be really important in nanoscale friction, but this is a hard thing to quantify ab initio.
    • I think there's an argument using the Akhiezer limit on f·Q products that puts tighter upper bounds on dissipation for stiff components, at least at "moderate" operating speeds. This is still a pretty high bound if it can be reached, but dissipation (and cooling) are generally weak points in Nanosystems.
    • I don't recall discussion of torsional rigidity of components. I think you can get a couple orders of magnitude over flagellar motors with CNTs, but you run into trouble beyond that.
    • Nanosystems mainly considers mechanical properties of isolated components and their interfaces. If you look at collective motion of the whole, everything looks much worse. For example, stiff 6-axis positional control doesn't help much if the workpiece has levered fluctuations relative to the assembler arm.
    • Similarly, in collective motion, non-bonded interfaces should be large contributors to phonon radiation and dissipation.
    • Due to surface effects, just about anything at the nanoscale can be piezoelectric/flexoelectric with a strength comparable to industrial workhorse bulk piezoelectrics. This can dramatically alter mechanical properties relative to the continuum approximation. (Sometimes in a favorable direction! But it's not clear how accurate simulations are, and it's hard to set up experiments.)
    • Current ab initio simulation methods are accurate only to within a few percent on "easy" properties like electric dipole moments (last I checked). Time-domain simulations are difficult to extend beyond picoseconds. What tolerances do you need to make reliable mechanisms?
  • In general I wouldn't be surprised if a couple orders of magnitude in productivity over biological systems were physically feasible for typically biological products (that's closer to my 1% by 2040 scenario). Broad-spectrum utility is much harder, as is each further step in energy efficiency or speed.
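The Akhiezer-type bound mentioned above can be turned into rough numbers. Everything below is an assumed illustration — the f·Q product, operating frequency, and stored energy are placeholders I chose, not values from Nanosystems or from any measured device:

```python
import math

# Illustrative assumptions -- not values from Nanosystems:
FQ_PRODUCT = 1e13      # Hz, assumed Akhiezer-limited f*Q product of the material
FREQ = 1e9             # Hz, a "moderate" operating frequency
STORED_ENERGY = 1e-18  # J, assumed elastic energy stored in the moving part

def akhiezer_dissipation(freq_hz, stored_energy_j, fq_product_hz=FQ_PRODUCT):
    """Dissipated power if damping sits at the Akhiezer limit.

    At that limit Q = (f*Q)/f, and a resonator with quality factor Q
    loses a fraction 2*pi/Q of its stored energy each cycle.
    """
    q = fq_product_hz / freq_hz
    power_w = 2 * math.pi * freq_hz * stored_energy_j / q
    return q, power_w

q, power = akhiezer_dissipation(FREQ, STORED_ENERGY)
print(f"Q ~ {q:.0f}, dissipated power ~ {power:.1e} W per component")
```

Since Q falls as 1/f at fixed f·Q, the dissipated power grows like f², which is why the bound bites hardest at high operating speeds.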
My thoughts on nanotechnology strategy research as an EA cause area

I found this post helpful, since lately I've been trying to understand the role of molecular nanotechnology in EA and x-risk discussions. I appreciate your laying out your thinking, but I think full-time effort here is premature.

> Overall, then, adding the above probabilities implies that my guess is that there’s a 4-5% chance that advanced nanotechnology arrives by 2040. Again, this number is very made up and not stable.

This sounds astonishingly high to me (as does 1-2% without TAI). My read is that no research program active today leads to advanced nanotechnology by 2040. Absent an Apollo program, you'd need several serial breakthroughs from a small number of researchers. Echoing Peter McCluskey's comment, there's no profit motive or arms race to spur such an investment. I'd give even a megaproject slim odds—all these synthesis methods, novel molecules, assemblies, information and power management—in the span of three graduate student generations? Simulations are too computationally expensive and not accurate enough to parallelize much of this path. I'd put the chance below 1e-4, and that feels very conservative.
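To make the serial-breakthroughs point concrete: if each of k breakthroughs independently takes an exponentially distributed time, the total completion time is Erlang-distributed, and the chance of finishing within a fixed horizon drops quickly with k. The stage count, mean duration, and horizon below are my assumptions for illustration, not estimates from the post:

```python
import math

def prob_serial_done(k, mean_years_each, horizon_years):
    """P(k serial exponential stages all finish within the horizon):
    the CDF of an Erlang(k, rate) distribution evaluated at the horizon."""
    x = horizon_years / mean_years_each
    return 1.0 - sum(math.exp(-x) * x**n / math.factorial(n) for n in range(k))

# Assumed: 5 serial breakthroughs, 10-year mean each, ~18-year horizon
# (roughly three graduate-student generations).
print(prob_serial_done(5, 10.0, 18.0))
```

With these numbers the chance is a few percent, and that is before conditioning on anyone funding the stages at all, or on each stage's own chance of outright failure.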

> Here’s a quick attempt to brainstorm considerations that seem to be feeding into my views here: "Drexler has sketched a reasonable-looking pathway and endpoint", "no-one has shown X isn't feasible even though presumably some people tried"

Scientists convince themselves that Drexler's sketch is infeasible more often than one might think. But to someone at that point there's little reason to pursue the subject further, let alone publish on it. It's of little intrinsic scientific interest to argue an at-best marginal, at-worst pseudoscientific question. It has nothing to offer their own research program or their career. Smalley's participation in the debate certainly didn't redound to his reputation.

So there's not much publication-quality work contesting Nanosystems or establishing tighter upper bounds on maximum capabilities. But that's at least in part because such work is self-disincentivizing. Presumably some arguments people find sufficient for themselves wouldn't go through in generality or can't be formalized enough to satisfy a demand for a physical impossibility proof, but I wouldn't put much weight on the apparent lack of rebuttals.

AGI Ruin: A List of Lethalities

Interesting, thanks. I read Nanosystems as establishing a high upper bound. I don't see any of its specific proposals as plausibly workable enough to use as a lower bound in the sense that, say, a ribosome is a lower bound, but perhaps that's not what Eliezer means.

AGI Ruin: A List of Lethalities

> The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point.

It sounds like this is well-traveled ground here, but I'd appreciate a pointer to this analysis.

Open Thread: Spring 2022

I added a more mathematical note at the end of my post showing what I mean by (2). I think in general it's more coherent to treat trajectory problems with dynamic programming methods rather than try to integrate expected value over time.

Open Thread: Spring 2022

I'll answer my own question a bit:

  • Scattered critiques of longtermism exist, but are generally informal, tentative, and limited in scope. This recent comment and its replies were the best directory I could find.
  • A longtermist critique of "The expected value of extinction risk reduction is positive", in particular, seems to be the best expression of my worry (1). My points about near-threshold lives and procrastination are another plausible story by which extinction risk reduction could be negative in expectation.
  • There's writing about Pascalian reasoning (a couple that came up repeatedly were A Paradox for Tiny Probabilities and Enormous Values, In defence of fanaticism).
  • I vaguely recall a named paradox, maybe involving "procrastination" or "patience", about how an immortal investor never cashes in—and possibly that this was a standard answer to Pascal's wager/mugging together with some larger (but still tiny) probability of, say, getting hit by a meteor while you're making the bet. Maybe I just imagined it.
Open Thread: Spring 2022

Hi, everyone, I'm Muireall. I recently put down some thoughts on weighing the longterm future (https://muireall.space/repugnant/). I suspect something like this has been brought up before, but I haven't been keeping up with writing on the topic for years. It occurred to me that this forum might be able to help with references or relevant keywords that come to mind. I'd appreciate any thoughts you have.

The idea is that, broadly, if you accept the repugnant conclusion with a "high" threshold (some people consensually alive today don't meet the "barely worth living" line), I think your expected utility for the longterm future has to take a big hit from negative scenarios. From that perspective, not only is it likely that a future civilization will mistake negative for positive welfare, as we apparently do, but it should also put welfare on hold (since apparently near-threshold lives can be productive) while it too invests in favor of the distant intergalactic future (until existential catastrophe comes for it).

In other words, I worry (1) expected-total-utility motivations for longtermism underrate very bad outcomes, and (2) these motivations can put you in the position of continually making Pascalian bets long enough to all but guarantee gambler's ruin before realizing your astronomical potential value.
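Worry (2) can be put in one line of assumed arithmetic: repeat a bet that multiplies value by G with probability p and loses everything otherwise. Whenever pG > 1, expected value diverges even as the probability of avoiding ruin vanishes (p, G, and the number of periods below are purely illustrative):

```python
p, G, periods = 0.5, 10.0, 100  # assumed bet odds, payoff, and repetitions

expected_value = (p * G) ** periods  # grows without bound: 5**100
survival_prob = p ** periods         # vanishes: 0.5**100 ~ 8e-31

print(f"expected value ~ {expected_value:.2e}, "
      f"P(no ruin yet) ~ {survival_prob:.2e}")
```

So a strategy that maximizes expected value at every step can all but guarantee the gambler's-ruin outcome, which is the sense in which I think these motivations underrate very bad trajectories.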