
Specifically, I'm interested in examples of technologies where a) the people who actively worked on it genuinely believed that if they could do it once, it would (eventually) become very cheap and scalable, b) we eventually did develop the technology, c) it ended up not being a big deal anyway because it wasn't very scalable, and d) in hindsight, we still believe that had the technology been cheap and scalable, it would have been a pretty big deal.

I'd also love to see statistics on how frequently this has happened in history, though it might be pretty hard to define the reference class well.

I think a lot of EAs have the implicit assumption that if something can be done once, you can probably (eventually) do it cheaply and at scale. Among very EA-relevant technologies, I think many people believe this applies to human-level AI, anti-aging treatments, and cultured meat.

I'm curious how frequently counterexamples happen. The only counterexample I'm aware of is alchemy. Transmuting base metals into gold was something multiple civilizations literally wanted to do for millennia; we eventually figured out how to do it last century, but the discovery turned out to be at best an academic curiosity, since the energy requirements were too massive.

I'm interested in this question because having base rates on the scalability question would be really useful for forming some very weak priors on P(technology radically changes civilization | radical-seeming technology was invented). For example, if, after an extensive search, alchemy turned out to be the only interesting example, we could conclude that our initial implicit assumption that "once you can do something, it can be scaled (EDIT: though political/cultural resistance can still mean it won't be scaled)" seems like a very safe one.
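To make the "weak prior from base rates" idea concrete, here is a minimal sketch of one way the update could work. Everything here is my own illustrative assumption, not anything established in the discussion: a uniform Beta(1, 1) prior, treating each historical radical-seeming technology as an independent trial, and the example counts are made up.

```python
# Hypothetical sketch: turning a counterexample count into a weak prior on
# P(tech scales | it was invented). Assumes a Beta(1, 1) (uniform) prior and
# independent trials -- both strong simplifications.

def scaling_base_rate(n_techs: int, n_failed_to_scale: int,
                      prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean of P(tech scales | invented) under a Beta prior."""
    n_scaled = n_techs - n_failed_to_scale
    return (prior_a + n_scaled) / (prior_a + prior_b + n_techs)

# Made-up numbers: suppose an extensive search finds 50 radical-seeming
# technologies that were successfully invented, and alchemy is the only one
# that never scaled.
print(scaling_base_rate(50, 1))  # ~0.96
```

With no data at all the function just returns the prior mean of 0.5, and each additional counterexample like alchemy pulls the estimate down only slightly, which matches the intuition that a single counterexample after an extensive search still leaves "once done, eventually scalable" looking like a fairly safe assumption.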

For historical examples, consider:

One general theme = for any standard X that we use, there was probably a better standard Y that never became widely used enough.

Another general theme = when it comes to failed stuff, government archives are a great resource :D 

Scalability, or cost?

When I think of failure to scale, I don't just think of something with high cost (e.g. transmutation of lead to gold), but something that resists economies of scale.

Level 1 resistance is cost-disease-prone activities that haven't increased in efficiency in step with most of our economy, education being a great example. Individual tutors would greatly improve results for students, but we can't provide them because they're too expensive, and they're too expensive because there's no economy of scale for tutors: they're not like solar panels, where increasing production volume lets you make them more cheaply.

Level 2 resistance is adverse network effects - the thing actually becomes harder as you try to add more people. Direct democracy, perhaps? Or maintaining a large computer program? It's not totally clear what the world would have to be like for these things to be solvable, but it would be pretty wild; imagine if the difficulty of maintaining code scaled sublinearly with size!

Level 3 resistance is when something depends on a limited resource and if you haven't got it, you're out of luck. Stradivarius violins, perhaps. Or the element europium used in red-emitting phosphor for CRT tubes. Solutions to these, when possible, probably just look like better technology allowing a workaround.

Thanks, this was particularly useful for me!

(+2, I really like the breakdown of different effects. I haven't really tried critically analyzing it for issues, but I definitely feel like it helped carve out/prop up some initial ideas)

Going to the moon.

Fusion power?

Nuclear power more generally?

...I guess the problem with these examples is that they totally are scalable, they just didn't scale for political/cultural reasons.

I feel like your qualifying statement is only true of the last one?

I'm pretty confident that if loads more money and talent had been thrown at space exploration, going to the moon would be substantially cheaper and more common today. SpaceX is good evidence of this, for example. As for fusion power, I guess I've got a lot less evidence for that. Perhaps I am wrong. But it seems similar to me.  We could also talk about fusion power on the metric of "actually producing more energy than it takes in, sustainably" in which case my understanding is that we haven't got there at all yet.

I've been trying to think of good examples in military technology, but haven't thought of any great ones yet. However, one thing I thought about was the supposed "rods from god" idea of using what are (basically) oversized, high-density (tungsten) lawn darts dropped from space. These weapons could potentially have tactical-nuclear-level kinetic energy without any of the nuclear fallout or stigma (albeit an entirely different set of stigma/international condemnation). But IIRC it's not being scaled for a variety of reasons including "it's really dang expensive to put a lot of large tungsten rods into space."

However, that doesn't necessarily mean it couldn't eventually be scaled up if we, e.g., developed an effective space elevator. And that leads to a follow-up question: how do we distinguish between "can't scale up (period)" and "can't scale up (yet)"? I definitely think there are some instances where the difference would be clear, but I would similarly be interested to see cases where we thought "X technology doesn't have a future (due to competitor technology Y and/or physical limitation Z)" only to later discover/invent something that makes an altered form of X viable.

I would similarly be interested to see cases where we thought "X technology doesn't have a future (due to competitor technology Y and/or physical limitation Z)" only to later discover/invent something that makes an altered form of X viable.

I too would be interested in this, as a reference class. I think it would be a strategically important update for us if we were to conclude that there's a decent chance that human-level AI or anti-aging treatments or cultured meat (or, for that matter, transmutation) is scientifically but not economically viable in its current form, but that an entirely different route of getting there eventually becomes economically viable decades later.

I think most technologies don't end up scaling. This says 2 to 10% of patents make enough money to maintain protection. A prototype is not required for a patent, but there would also be lots of demonstrated ideas in the lab that are never patented. There is also the concept of the "Valley of Death" in commercialization, where most technologies die. This is not necessarily the same as technologies that would be a "big deal," but I think it is a useful reference class.

This question is surprisingly hard... I can barely start thinking about very ordinary stuff like "automated mailbox management..." Your "gold example" made me think about artificial diamonds, which are still regarded as less valuable than natural ones in jewelry; but that's because jewelry is a luxury/status good. It helps a bit to think about tech that sort of existed for a very long time and was only widely deployed in the last hundred years, like bicycles. I mean, we could have had them since at least the 18th century, but they only appeared around the 1840s, and somehow only became a real option after the 1890s, when we already had trains and cars.

I think there's an ambiguity in "it'd eventually be very cheap and scalable."

Consider alchemy. It's cheaper to do now than it was when we first did it, in part because the price of energy has dropped. It's also possible to do it on much bigger scales. However, nobody bothers because people have better things to do. So for something to count as cheaper and scalable, does it need to actually be scaled up, or is it enough that we could do it if we wanted to? If the latter, then alchemy isn't even an example of the sort of thing you want. If the former, then there are tons of examples, examples all over the place!

Also technically Alchemy will in fact be cheaply scaled in the future, probably. When we are disassembling entire stars to fund galaxy-wide megaprojects, presumably some amount of alchemy will be done as well, and that amount will be many orders of magnitude bigger than the original alchemists imagined, and it will be done many orders of magnitude more cheaply (in 2021 dollars, after adjusting for inflation) as well. EDIT: Nevermind I no longer endorse this comment, I think I was assuming alignment success for some reason.

For social technology, I think we have been consistently disappointed by various attempts to reform education. Specifically, think about interventions like Direct Instruction, investigated under the Follow Through project, and, maybe, the interventions tested by the Gates Foundation.

In a somewhat similar vein, it would be great to have a centralized database for medical records, at least within each country. And we know how to do this technically. But it "somehow doesn't happen" (at least not anywhere I know of).

A general pattern would be "things where somebody believes a problem is of a technical nature, works hard at it, and solves it, only to realize that the problem was of a social/political nature". (Relatedly, the solution might not catch on because the institution you are trying to improve serves a somewhat different purpose from what you believed, Elephant in the Brain style. E.g., education being not just for improving thinking and knowledge but also for domestication and signalling.)


Really interesting question.

From my perspective, Google Wave technically qualifies under the words you've written, but I don't think it's in the spirit of what you've written. ("Cheap" makes me think you're looking for physical-world inventions, which is probably worth being more explicit about.)

If I'm wrong and it does qualify, there are a number of web app examples.
