I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website (https://mdickens.me/). Much of the content on my website gets cross-posted to the EA Forum, but I also write about some non-EA stuff on the website itself.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
A 4–7% real investment return assumes no TAI. TAI would speed up R&D on meat alternatives, but it would also speed up R&D on everything else. The cost-effectiveness of animal activism would go up, but in an environment where the cost-effectiveness of everything else is also going up, and the market rate of return is going up with it.
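To spell out the implicit give-now-vs-give-later comparison (a stylized model in my own notation, not something from the original post): let $r$ be the real rate of return and $CE_t$ the cost-effectiveness of animal advocacy at time $t$. The value of investing a donation and giving at time $t$, relative to giving now, is roughly

$$(1+r)^t \cdot \frac{CE_t}{CE_0}.$$

If TAI accelerates R&D across the board, it moves $CE_t$ and $r$ together, so you can't plug a TAI-accelerated cost-effectiveness trajectory into this comparison while keeping the no-TAI assumption of $r \approx 4\text{–}7\%$.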
When clean meat arrives (if it does), the movement will need skilled campaigners, policy expertise, organisational infrastructure, relationships with policymakers, experienced leadership, and research to understand this whole TAI situation.
I don't think this line of reasoning gives proper consideration to what TAI actually is. It's an intelligence that surpasses almost all humans and can replace almost all human labor. All the jobs listed in that quote can be done cheaper and better by TAI than by humans. The possible exceptions are organizational infrastructure and relationships with policymakers, where connections matter much more than raw ability. Humans have a head start on connections, so it will take longer before TAI can replace humans there. (Even then, how much longer? Maybe 2–5 years? We're not talking about decades.) But also consider that your human relationships with policymakers don't matter if policymakers themselves are replaced by TAI. Even if they're not de jure replaced, it's very likely that human policymakers will become figureheads with TAI making all the decisions. And all of that assumes TAI doesn't cause human extinction, which is the more likely outcome.
What is the 2026 "Summit on Existential Security"? When I search for that phrase (in quotes), this article is the only result. I did find some stuff about a 2023 Summit on Existential Security, but nothing from any other year. Is this something that happened at EA Global?
I don't put much credence in anonymous AI safety "experts", for reasons I elaborated on here.
I just don't see how it could be remotely possible that we (Earth-evolved humans and animals) are efficient utility producers (by a wide range of definitions of "utility").
The standard argument is that an aligned/ethical ASI ought to leave humans on earth and use the rest of the universe to make efficient utility producers; giving up ~0.00000000000000000000000000000000000001% of the lightcone is easily worth it on moral uncertainty grounds. (I think that's approximately the right number of zeroes, based on Bostrom's numbers from Astronomical Waste.)
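As a rough check on the zeroes (my own back-of-the-envelope using Bostrom's figures, so treat the exponents as assumptions): Astronomical Waste puts the potential of the Virgo Supercluster alone at roughly $10^{38}$ (mostly emulated) human lives per century, versus roughly $10^{10}$ humans on Earth, which gives

$$\frac{10^{10}}{10^{38}} = 10^{-28} \approx 10^{-26}\%.$$

The exact number of zeroes depends heavily on what you count (biological vs. emulated lives, one supercluster vs. the full lightcone), but on any accounting the fraction is astronomically small.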
But also, if the CEV of human values involves killing all humans, then doesn't that kinda mean killing all humans is the correct thing to do? (Which seems like a weird conclusion, but it's also a weird premise.)
(for the record, I don't want to kill all humans; in fact, I quite want to live. That said, I also don't think a lightcone filled with humans is the best possible future)
Throughout this post, we use "AGI" to refer to AI systems with broad generality and high capability—roughly levels 2 to 4 on Google DeepMind's Levels of AGI framework (p. 5).
This post seems to rely on the premise that there will be a large time gap between AGI and ASI, i.e., between DeepMind's capability levels 4 and 5 ("at least 99th percentile of skilled adults" vs. "outperforms 100% of humans"). Unless society deliberately decides to stop AI development, it seems unlikely that there would be a large gap between the two. ASI would render most or all of the identified bottlenecks irrelevant; e.g., "regulators" and "political opposition" become meaningless in the face of superintelligence.
Even if AI is "merely" as smart as the 99th-percentile human, once it can do 99th-percentile work very cheaply with arbitrarily many copies running in parallel, it seems likely that the political and governmental system as we know it would cease to exist. At a minimum, we'd see close to a 100% unemployment rate. It seems very hard to make claims like "political opposition would slow down cultivated meat" when you're talking about a world with 100% unemployment.
This report is not alone in taking this perspective. A big problem I see with a lot of these kinds of analyses (especially in the animal welfare space) is that they try to analyze a world where AI is better than the majority of humans at everything, and yet the political/social/economic environment is basically unchanged. I don't see how that would happen.
The "measurable outcomes" thing is distinct from Moloch IMO, but I do think it's an important factor in why AI safety work has been less impactful than it could've been: people are spending too much effort on Streetlight Effect research (this is especially true of AI-company-funded safety research).
Please do not cite this article as evidence. It's pro-Anthropic propaganda, not a serious argument.