I see, thanks! Section 8.2, "Gray Dust":
The requirement for elements that are relatively rare in the atmosphere greatly constrains the potential nanomass and growth rate of airborne replicators. However, note that at least one of the classical designs exceeds 91% CHON by weight. Although it would be very difficult, it is at least theoretically possible that replicators could be constructed almost solely of CHON, in which case such devices could replicate relatively rapidly using only atmospheric resources, powered by sunlight.
I do think "diamondoid bacteria, that replicate with solar power and atmospheric CHON" from List of Lethalities is original to Eliezer. He's previously cited Nanomedicine in this context, but the parts published online so far don't describe self-replicating systems.
Edit: This is wrong—see Lumpyproletariat below.
With all materials available, I think it very likely (above 95%) that something self-replicating and more impressive than bacteria and viruses is possible, but I have no idea how impressive the limits of possibility are.
Much of the (purported) advantage of diamondoid mechanisms is that they're (meant to be) stiff enough to operate deterministically with atomic precision. Without that, you're likely to end up much closer to biological systems—transport is more diffusive, the success of any step is probabilistic, and you need a whole ecosystem of mechanisms for repair and recycling (meaning the design problem isn't necessarily easier). For anything that doesn't specifically need self-replication for some reason, it'll be hard to beat (e.g.) flow reactors.
For example, would organising after-work drinks count as "promoting drugs among coworkers, including alcohol"?
Employer-organized happy hours and other social events are often careful to have non-alcoholic options for this reason (among others like inclusivity).
I think it's great that you're asking for support rather than facing existential anxiety alone, and I'm sorry that you don't seem to have people in your life who will take your worries seriously and talk through them with you. And I'm sure everyone responding here means well and wants the best for you, but joining the Forum has filtered us—whether for our worldviews, our interests, or our susceptibility to certain arguments. If we're here for reasons other than AI, then we probably don't mind talk of doom or are at least too conflict-averse to continually barge into others' AI discussions.
So I would caution you that asking this question here is at least a bit like walking into a Bible study and asking for help from people more righteous than you in clarifying your thinking about God because you're in doubt and perseverating on thoughts of Hell. You don't have to listen to any of us, and you wouldn't have to even if we really were all smarter than you.
You point out the XPT forecasts. I think that's a great place to start. It's hard to argue that a non-expert ought to defer more to AI-safety researchers than to either the superforecasters or the expert group. Having heard from XPT participants, I don't think the difference between them and people more pessimistic about AI risk comes down to facility with technical or philosophical details. This matches my experience reading deep into existential-risk debates over the years: the more pessimistic don't know anything I don't. They mostly find different lines of argument more or less persuasive.
I don't have first-hand advice to give on living with existential anxiety. I think the most important thing is to take care of yourself, even if you do end up settling on AI safety as your top priority. A good therapist might have helpful ideas regarding rumination and feelings of helplessness, which aren't required responses to any beliefs about existential risk.
I'm glad to respond to comments here, but please feel free to reach out privately as well. (That goes for anyone with similar thoughts who wants to talk to someone familiar with AI discussions but unpersuaded about risk.)
Seeing some conversations about lack of social graces as a virtue reminded me that I wanted to say a few things in praise of professionalism.
By "professionalism" I mean a certain kind of forthrightness and constancy. The professional has an aura of assurance that judgment falls on the work, not the person. They compartmentalize. They project clearly defined boundaries. They do the sorts of things professionals do.
Professionalism is not a subculture. The professional has no favorites or secret handshakes. Or, at its best, professionalism is the universal secret handshake, even if it's only a handshake offered, as it were.
You will be treated with respect and consideration, but professionalism is not a virtue of compassion, nor even generosity. It might have been a virtue of justice. It is a virtue of curiosity.
I also think it overlaps surprisingly well with professionalism as generally conceived.
Very clear piece!
In this framework, the value of our future is equal to the area under this curve and the value of altering our trajectory is equal to the area between the original curve and the altered curve.
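In symbols (my paraphrase of the quoted framework): if $v(t)$ is the value of the world at time $t$ on the default trajectory and $\tilde{v}(t)$ on the altered one, that's

\[
V \;=\; \int_0^\infty v(t)\,\mathrm{d}t, \qquad \Delta V \;=\; \int_0^\infty \big(\tilde{v}(t) - v(t)\big)\,\mathrm{d}t.
\]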
You mentioned optimal planning in economics, and I've wondered whether an optimal control framework might be useful for this sort of analysis. I think the difference between optimal control and the trajectory-altering framework you describe is a bit deeper than the different typical domains. There's not just one decision to be made, but a nearly-continuous series of decisions extending through the future (a "policy"). Under uncertainty, the present expected value is the presently realized ("instantaneous") value plus the expectation taken over different futures of the expected value of those futures. Choosing a policy to maximize expected realized value is the control problem.
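As a rough sketch in symbols (my notation, not the post's): with state $x$, policy $u$, and instantaneous value flow $v$, the recursion is

\[
V(x_t) \;=\; \max_{u}\,\Big\{\, v(x_t, u)\,\mathrm{d}t \;+\; \mathbb{E}\big[\, V(x_{t+\mathrm{d}t}) \mid x_t, u \,\big] \Big\},
\]

which is just the dynamic-programming version of "instantaneous value plus expected future value."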
For the most part, you get the same results with a slightly different interpretation. For example, rather than imposing some τ, you get an effective lifetime that's mainly determined by "background" risk but is also allowed to vary with policy.
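Concretely (again my notation): writing $r(t)$ for the hazard rate, which policy can influence,

\[
V \;=\; \int_0^\infty v(t)\, \exp\!\Big(-\int_0^t r(s)\,\mathrm{d}s\Big)\, \mathrm{d}t,
\]

so for roughly constant $r$ the integrand decays over an effective lifetime of about $1/r$ rather than stopping at a hard cutoff $\tau$.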
One thing that jumped out at me in a toy model is that while the value of reducing existential risk is mathematically the same as an "enhancement" (multiplicative), the time at which we expect to realize that extra value can be very different. In particular, an expected value maximizer may heavily backload the realization of value (even beyond the present expected survival time) if they can neglect present value to expand while reducing existential risk.
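Here's a minimal numerical sketch of the kind of toy model I mean (my own setup: constant value flow v and constant hazard rate r, so value at time t survives with probability exp(-r·t)):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 3000.0, dt)

def ev_and_mean_time(v, r):
    """Expected realized value, and the mean time at which it's realized."""
    w = v * np.exp(-r * t)                # survival-weighted value flow
    ev = w.sum() * dt                     # ~ integral of v * exp(-r t) dt = v / r
    mean_time = (t * w).sum() * dt / ev   # ~ 1 / r
    return ev, mean_time

print(ev_and_mean_time(v=1.0, r=0.01))    # ~ (100, 100): baseline
print(ev_and_mean_time(v=2.0, r=0.01))    # ~ (200, 100): "enhancement" doubles EV now
print(ev_and_mean_time(v=1.0, r=0.005))   # ~ (200, 200): risk reduction doubles EV later
```

Both interventions double expected value, but the risk reduction also doubles the expected time at which that value is realized; that's the backloading I mean.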
I suspect one could learn or make clearer a few other interesting things by following those lines.
That's right. (But lower is better for some other common scoring rules, including the Brier score.)
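For concreteness, with the standard definitions (nothing specific to this thread):

```python
import math

def brier(p, outcome):
    # Brier score for a binary forecast: squared error, lower is better.
    return (p - outcome) ** 2

def log_score(p, outcome):
    # Log score: log-probability assigned to the realized outcome, higher is better.
    return math.log(p if outcome == 1 else 1.0 - p)

# A sharper correct forecast does better under both conventions:
print(brier(0.9, 1), brier(0.6, 1))          # 0.01 vs 0.16   (lower is better)
print(log_score(0.9, 1), log_score(0.6, 1))  # -0.11 vs -0.51 (higher is better)
```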
Social dynamics seem important, but I also think Scott Alexander put his finger on something in "Epistemic Learned Helplessness" when he objected to the rationalist mission of creating people who would believe something once it had been proven to them. Together with "taking ideas seriously"/decompartmentalization, attempting to follow the rules of rationality itself can be very destabilizing.
Congrats to the winners! It's interesting to see how surprised people are. Of these six, I think only David Wheaton on deceptive alignment was really on my radar. Some other highlights that didn't get much discussion: