I think it's great that you're asking for support rather than facing existential anxiety alone, and I'm sorry that you don't seem to have people in your life who will take your worries seriously and talk through them with you. And I'm sure everyone responding here means well and wants the best for you, but joining the Forum has filtered us—whether for our worldviews, our interests, or our susceptibility to certain arguments. If we're here for reasons other than AI, then we probably don't mind talk of doom or are at least too conflict-averse to continually barge into others' AI discussions.
So I would caution you that asking this question here is at least a bit like walking into a Bible study and asking for help from people more righteous than you in clarifying your thinking about God because you're in doubt and perseverating on thoughts of Hell. You don't have to listen to any of us, and you wouldn't have to even if we really were all smarter than you.
You point to the XPT forecasts. I think that's a great place to start. It's hard to argue that a non-expert ought to defer more to AI-safety researchers than to either the superforecasters or the expert group. Having heard from XPT participants, I don't think the difference between them and people more pessimistic about AI risk comes down to facility with technical or philosophical details. That matches my experience from years of reading deeply into existential-risk debates: the people most worried about AI risk don't know anything I don't. They mostly find different lines of argument more or less persuasive.
I don't have first-hand advice to give on living with existential anxiety. I think the most important thing is to take care of yourself, even if you do end up settling on AI safety as your top priority. A good therapist might have helpful ideas regarding rumination and feelings of helplessness, which aren't required responses to any beliefs about existential risk.
I'm glad to respond to comments here, but please feel free to reach out privately as well. (That goes for anyone with similar thoughts who wants to talk to someone familiar with AI discussions but unpersuaded about risk.)
Seeing some conversations about lack of social graces as a virtue reminded me that I wanted to say a few things in praise of professionalism.
By "professionalism" I mean a certain kind of forthrightness and constancy. The professional has an aura of assurance that judgment falls on the work, not the person. They compartmentalize. They project clearly defined boundaries. They do the sorts of things professionals do.
Professionalism is not a subculture. The professional has no favorites or secret handshakes. Or, at its best, professionalism is the universal secret handshake, even if it's only a handshake offered, as it were.
You will be treated with respect and consideration, but professionalism is not a virtue of compassion, nor even of generosity. It might have been a virtue of justice. It is a virtue of curiosity.
I also think professionalism in this sense overlaps surprisingly well with professionalism as generally conceived.
Very clear piece!
"In this framework, the value of our future is equal to the area under this curve and the value of altering our trajectory is equal to the area between the original curve and the altered curve."
You mentioned optimal planning in economics, and I've wondered whether an optimal control framework might be useful for this sort of analysis. I think the difference between optimal control and the trajectory-altering framework you describe is a bit deeper than the different typical domains. There isn't just one decision to be made, but a nearly continuous series of decisions extending into the future (a "policy"). Under uncertainty, the expected value at the present moment is the value realized right now (the "instantaneous" value) plus the expectation, taken over possible futures, of the expected value of those futures. Choosing a policy to maximize expected realized value is the control problem.
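Written out (my notation, not anything from the post), that recursion is just the Bellman equation, with $x_t$ the state of the world, $a_t$ the action chosen in the current period, and $v$ the instantaneous value:

$$V(x_t) = \max_{a_t} \Big\{ v(x_t, a_t) + \mathbb{E}\big[\, V(x_{t+1}) \mid x_t, a_t \,\big] \Big\}$$

A discount factor or survival probability multiplying the continuation term is where existential risk enters this kind of setup.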
For the most part, you get the same results with slightly different interpretation. For example, rather than impose some τ, you get an effective lifetime that's mainly determined by "background" risk but also is allowed to vary based on policy.
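To make the effective-lifetime point concrete (my notation again): with a policy-dependent hazard rate $r(t)$, the survival probability is $S(t) = \exp\!\big(-\int_0^t r(s)\,ds\big)$ and the effective lifetime is $\tau_{\mathrm{eff}} = \int_0^\infty S(t)\,dt$, which reduces to $1/r$ for a constant background hazard $r$.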
One thing that jumped out at me in a toy model is that while the value of reducing existential risk is mathematically the same as an "enhancement" (multiplicative), the time at which we expect to realize that extra value can be very different. In particular, an expected value maximizer may heavily backload the realization of value (even beyond the present expected survival time) if they can neglect present value to expand while reducing existential risk.
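To illustrate with a rough numerical sketch (throwaway numbers, not a calibrated model): compare a baseline policy that realizes full value every period under constant background risk against a policy that sacrifices most present value for 500 periods to buy a lower hazard rate afterwards.

```python
import numpy as np

def ev_and_mean_time(v, r):
    """Expected total value and value-weighted mean realization time,
    given per-period values v and per-period hazard rates r."""
    t = np.arange(len(v))
    survival = np.cumprod(1.0 - r)   # P(still around through period t)
    realized = v * survival          # expected value realized in period t
    ev = realized.sum()
    return ev, (t * realized).sum() / ev

T = 10_000

# Baseline: full present value, constant background hazard of 1e-3 per period
# (expected survival time on the order of 1,000 periods).
v_base = np.full(T, 1.0)
r_base = np.full(T, 1e-3)

# "Backloaded" policy: realize only 10% of potential value for the first 500
# periods (the rest goes into risk reduction), then enjoy a 5x lower hazard.
v_back = np.concatenate([np.full(500, 0.1), np.full(T - 500, 1.0)])
r_back = np.concatenate([np.full(500, 1e-3), np.full(T - 500, 2e-4)])

print(ev_and_mean_time(v_base, r_base))  # roughly (1000, 1000)
print(ev_and_mean_time(v_back, r_back))  # higher EV; mean realization time ~3-4x later
```

The second policy ends up realizing most of its value after the baseline's expected lifetime has already elapsed, which is the backloading effect I mean.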
I suspect one could learn or make clearer a few other interesting things by following those lines.
That's right. (But lower is better for some other common scoring rules, including the Brier score.)
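(For reference: the binary Brier score is the mean squared error between forecast probabilities and outcomes, $\mathrm{BS} = \tfrac{1}{N}\sum_i (f_i - o_i)^2$ with $o_i \in \{0,1\}$, so a perfect forecaster scores 0 and lower is better.)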
Social dynamics seem important, but I also think Scott Alexander, in "Epistemic Learned Helplessness", put his finger on something real when he objected to the rationalist mission of creating people who would believe something once it had been proven to them. Together with "taking ideas seriously"/decompartmentalization, attempting to follow the rules of rationality itself can be very destabilizing.
Is there a canonical discussion of what you call "race dynamics" somewhere? I can see how proliferating firms and decentralized communities would "mak[e] potential moratoriums on capabilities research much harder (if not impossible) to enforce", but it's less clear to me what that means for how quickly capabilities advance. Is there evidence that, say, the existence of Anthropic has led to increased funding for OpenAI?
In particular, one could make the opposite argument—competition, at least intra-nationally, slows the feedback cycle for advancing capabilities. For example, a lot of progress in information technology seems to have been driven by concentration of R&D into Bell Labs. If the Bell monopoly had been broken up sooner, would that have accelerated progress? If some publicly-funded entity had provided email and internet search services, would Google have reached the same scale?
Meanwhile, training leading-edge models is capital intensive, and competing firms dilute available funding across many projects. Alternative commercial and open-source models drive potential margins down. Diminished prospects for monopoly limit the size and term of bets that investors are willing to make.
I don't know which way the evidence actually falls, but there seems to be a background assumption that competition, race dynamics, and acceleration of progress on capabilities always go hand in hand. I'd be very interested to read more detailed justifications for that assumption.
(Here's my submission—I make some similar points but don't do as much to back them up. The direction is more like "someone should try taking this sort of thing into account"—so I'm glad you did!)
I'd have to think more carefully about the probabilities you came up with and the model for the headline number, but everything else you discuss is pretty consistent with my view. (I also did a PhD in post-silicon computing technology, but unlike Ted I went right into industry R&D afterwards, so I imagine I have a less synoptic view of things like supply chains. I'm a bit more optimistic, apparently—you assign <1% probability to novel computing technologies running global-scale AI by 2043, but I put down a full percent!)
The table "Examples transistor improvements from history (not cherry-picked)" is interesting. I agree that the examples aren't cherry picked, since I had nearly the same list (I decided to leave out lithography and included STI and the CFET on imec's roadmap), but you could choose different prototype dates depending on what you're interested in.
I think you've chosen a fairly relaxed definition for "prototype", which is good for making the point that it's almost certain that the transistors of 2043 will use a technology we already have a good handle on, as far as theoretical performance is concerned.
Another idea would be to follow something like this IRDS table that splits out "early invention" and "focused research". They use what looks like a stricter interpretation of invention—they don't explain further or give references, but I suspect they just have in mind more similarity to the eventual implementation in production. (There are still questions about what counts, e.g., 1987 for tri-gate or 1998 for FinFET?) That gives about 10–12 years from focused research to volume production.
So even if some unforeseeable breakthrough is more performant or more easily scalable than what we're currently thinking about, it still looks pretty tough to get it out by 2043.
I think FQxI usually gets around 200 submissions for its essay contests, where the entire pot is less than the first prize here. I wouldn't be surprised if Open Phil got over 100 submissions.
Employer-organized happy hours and other social events are often careful to have non-alcoholic options for this reason (among others like inclusivity).