Yet another post about solar! This time about land use.
— TL;DR
Suppose that you handle low solar generation in winter by just building 3-6x more panels than you need in summer and wasting all the extra power.
1. The price of the required land is about 0.1 cents per kWh (2% of current electricity prices).
2. Despite the cost being low, the absolute amounts of land used are quite large. Replacing all US energy requires 8% of our land, for Japan 30%. This seems reasonably likely to be a political obstacle.
I’m not too confident in any of these numbers, corrections welcome.
— Background
I’ve been wondering about the price of an all-solar grid without any novel storage or firm generation. In my first post I proposed having enough batteries for 1-2 days, and said that buying that many batteries seemed affordable (https://www.facebook.com/paulfchristiano/posts/10226561810329293). In the second I argued that emergency natural gas you never actually use looked like it was totally affordable (https://www.facebook.com/paulfchristiano/posts/10226568532377340).
A potential drawback of the all solar plan is that you *massively* overbuild panels so that you have enough generation in the winter months. This isn’t too expensive because most of your capital cost was storage anyway. But it does mean you use a boatload of land. I wanted to understand that better. See the TL;DR above for my conclusions.
After this post, I think the biggest unresolved question for me is how variable cloud cover is during the winter—I know that large solar installations are pretty consistent at the scale of months (and can fall back to emergency natural gas in the rare cases where they aren’t). But is it the case that e.g. there is frequently a bad 4-day stretch in January where the average solar generation across Japan is significantly reduced?
My second biggest question is about the feasibility and cost of large-scale transmission, both to smooth out that kind of short-term variability and to supply power further north.
— A note on location
The feasibility of this depends a ton on where you are. I’m going to start by talking about the largest US solar farms in the southwest. I believe the situation gets about 2x worse if you move to the US northeast or northern Europe.
If you go further north it gets even more miserable—wintertime solar is much more sensitive to latitude than summer solar. I'd guess that people in the US northeast should already be importing power from sunnier places, to say nothing of Canada. I don’t know how politically realistic that is. If you didn’t have backup natural gas it sounds insane, but if everyone is just building backup natural gas anyway I think it might be OK.
— Efficiency of solar
I looked up the Topaz solar farm (info taken from wikipedia: https://en.wikipedia.org/wiki/Topaz_Solar_Farm).
Setting aside its first year, while panels were still being installed, its worst month was December 2016, when it generated about 57 million kWh.
The “overbuild panels” plan requires us to build enough panels that we’d be OK even in the deepest winter. If we pessimistically assume that all of the excess power is completely wasted, that means every month effectively delivers only the worst month's output: 12 × 57 million ≈ 684 million kWh per year.
The site area is 7.3 square miles. So in total we are getting about 94 million kWh per square mile per year. (Or 145 thousand kWh per acre).
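As a sanity check, here's that arithmetic as a minimal Python sketch (the 57 million kWh and 7.3 square mile figures are just the Wikipedia numbers above):

```python
# Land yield implied by Topaz if we size everything to the worst month.
worst_month_kwh = 57e6                       # December 2016 generation
usable_kwh_per_year = 12 * worst_month_kwh   # pretend every month is midwinter
site_sq_miles = 7.3
acres_per_sq_mile = 640

kwh_per_sq_mile_year = usable_kwh_per_year / site_sq_miles
print(f"{usable_kwh_per_year:.3g} kWh/year")           # ~6.84e8
print(f"{kwh_per_sq_mile_year:.3g} kWh/sq-mile/year")  # ~9.4e7
print(f"{kwh_per_sq_mile_year / acres_per_sq_mile:,.0f} kWh/acre/year")
# ~146,000, rounded to 145,000 in the text
```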
I got almost identical numbers for the McCoy solar installation.
I think you could push the numbers somewhat higher, perhaps a factor of 2, by economizing more on land (check out that picture of Topaz solar farm from space; tons of room to improve density), improving panel efficiency (once panel costs are no longer a major expense you can focus on efficiency rather than price), and focusing on winter generation. When I did this calculation on paper I got numbers 2-4x higher than the practical ones.
I’m going to just round the number up to 100 million kWh per square mile per year to make things simple. In reality you’d probably increase density above this but may also be pushed to use worse sites, so this seems fine for the headline figures.
— How much land is needed in the US?
In 2020 the US used about 100 quadrillion BTU of energy (mostly oil and natural gas), a bit less than 3e13 kWh: https://www.eia.gov/energyexplained/us-energy-facts.
If we pretend it was always midwinter, this would require 300,000 square miles. This is about 8% of all the land in the US.
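The same arithmetic in a short sketch (the ~3.8 million square miles of total US land is my number, not one from the post):

```python
# US land required at the rounded yield of 1e8 kWh per square mile per year.
us_kwh_per_year = 3e13           # ~100 quadrillion BTU (EIA link above)
yield_kwh_sq_mile = 1e8          # rounded Topaz-derived figure
us_land_sq_miles = 3.8e6         # approximate total US land area (assumption)

needed_sq_miles = us_kwh_per_year / yield_kwh_sq_mile
print(f"{needed_sq_miles:,.0f} square miles")                       # 300,000
print(f"{needed_sq_miles / us_land_sq_miles:.0%} of US land area")  # ~8%
```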
To help understand what this means, this site gives us the total breakdown of US land. I don’t trust it totally but I think it’s roughly right. https://www.visualcapitalist.com/america-land-use/
* 842,000 square miles of forest
* 750,000 square miles of shrub
* 530,000 square miles of farmland
* 530,000 square miles of grassland (I assume this breakdown was just made up?)
* 400,000 square miles of other nature
* 63,000 square miles of cities
— How expensive is that land?
Suppose that we put solar farms on cropland. The cost of 1 acre of farmland in the US is about $3000. Renting an acre of unirrigated land is about $140/year. (https://www.nass.usda.gov/.../land-values-cash-rents.pdf)
Pasture is quite a lot cheaper than that, and you’d only have to use ~50% of the US pasture to put in all this solar. So I think $140/acre/year is pretty conservative.
Above we estimated that an acre generated 145,000 kWh per year.
So even if you are renting farmland, and *throwing away all power above the amount generated in midwinter*, the price is only a tenth of a cent per kWh. That’s about 50x lower than the current price of power. So it won’t be a large part of the price until you are dropping electricity costs by 10x or more.
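A quick sketch of that cost arithmetic (the ~5 cents/kWh reference price is an assumption on my part, implied by the "50x lower" claim rather than stated explicitly):

```python
# Land rent per kWh if renting cropland at ~$140/acre/year.
rent_per_acre_year = 140.0
kwh_per_acre_year = 145_000      # Topaz-derived estimate above

cents_per_kwh = 100 * rent_per_acre_year / kwh_per_acre_year
print(f"{cents_per_kwh:.2f} cents/kWh")          # ~0.10

reference_price = 5.0            # assumed cents/kWh for power
print(f"{reference_price / cents_per_kwh:.0f}x below the reference price")  # ~50x
```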
— What about Japan?
Japan uses about 386 million tons of oil equivalent per year, or 4.5e12 kWh. By the same calculation that would require about 45,000 square miles. (I think Japan has fewer good solar sites than the southwest US, so they’ll be leaning more on the hope that you can squeeze more density out of installations).
The area of Japan is about 145,000 square miles. So this is about 30% of the total area. Right now in Japan I believe essentially all of this would have to come from clearing forest. The cost of clearing that land isn’t significant (and it’s not any more expensive than cropland), but I expect people would be unhappy about losing 1/3 of their forest.
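And the same back-of-envelope for Japan:

```python
# Japan's land requirement at the same rounded yield.
japan_kwh_per_year = 4.5e12      # ~386 million tonnes of oil equivalent
yield_kwh_sq_mile = 1e8
japan_area_sq_miles = 145_000

needed_sq_miles = japan_kwh_per_year / yield_kwh_sq_mile
print(f"{needed_sq_miles:,.0f} square miles")                          # 45,000
print(f"{needed_sq_miles / japan_area_sq_miles:.0%} of Japan's area")  # ~31%
```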
— Other thoughts
These proposals involve wasting 65-85% of all the generation. If you are able to use more electricity on summer days, that helps a lot, as discussed in previous posts. The most obvious way this happens is if you can synthesize fuel, and the energy costs of synthesis are dominant rather than capital costs. That would be a game-changer for the all-solar grid (as well as removing the need to electrify all your cars and planes).
I’ve ignored increasing energy usage. That seems kind of reasonable because I’ve discussed the US and Japan, two countries with relatively high energy use that has been declining in recent years. But big increases in energy use would change the picture.
In the long run it does seem like floating solar over the ocean could be quite important. But I have no idea how to think about the costs for that, and especially energy transport.
Depending on the design of your panels, putting down this many could significantly heat the earth just by absorbing sunlight. This is on the same order of magnitude as the heat generated by running appliances (e.g. the heat generated by the engine of your car and the friction of your wheels against pavement), but if your panel is 20% efficient then I think it probably ends up about 2-3x bigger. I don’t normally think about e.g. space heaters contributing to global warming by literally heating up the house. It does seem like a consideration but I’d like to better understand how it compares.
If clearing forests or pasture, it seems important not to release all that carbon into the atmosphere. My guess would have been that most of this land would be at rough equilibrium and so this isn’t going to have a CO2 effect (assuming you don’t burn the biomass or let it rot), but I’d be interested to know, and am not sure if that’s feasible.
This does require prices going down. I think prices in many domains have gone up (a lot) over the last few years, so it doesn't seem like a lot of evidence about technological progress for solar panels. (Though some people might take it as a warning shot for long-running decay that would interfere with a wide variety of optimistic projections from the past.)
I think it's not clear whether non-technological factors get cheaper or more expensive at larger scales. Seems to me like "expected cost is below current electricity costs" is a reasonable guess, but ">75% chance of being economically feasible" is not.
My current understanding is that there are plenty of the relevant minerals (and in many cases there is a lot of flexibility about exactly what to use), and so this seems unlikely to be a major driver of cost over the very long term even if short-term supply is relatively inelastic. (Wasn't this the conclusion last time we had a thread on this?)
I wrote a series of posts on the feasibility of an all-solar grid last year, here (it links to two prior posts).
Overall my tentative conclusion was:
It was interesting to me that "political feasibility" and "economic feasibility" seemed to come apart so strongly in this case.
Not sure if all of that is right, but overall it significantly changed my sense of the economics and real obstacles to renewable power.
When we eventually told the cash-arm participants that we had given other households assets of the same value, most said they would have preferred the assets: "We don't have good products to buy here." We had also originally planned to work in 2 countries but ended up working in just 1, freeing up enough budget to pay for cash.
I'm intuitively drawn to cash transfer arms, but "just ask the participants what they would want" also sounds very compelling for basically the same reasons. Ideally you could do that both before and after ("would you recommend other families take the cash or the asset?").
Have you done or seen systematic analysis along these lines? How do you feel about that idea?
Asking about the comparison to cash also seems like a reasonable way to do the comparison even if you were running both arms (i.e. you could ask both groups whether they'd prefer $X or asset Y, and get some correction for biases to prefer/disprefer the option they actually received).
Maybe direct comparison surveys also give you a bit more hope of handling timing issues, depending on how biased you think participants are by recency effects. If you give someone an asset that pays off over multiple years, I do expect their "cash vs asset" answers to change over time. But still people can easily imagine getting the cash now and so if nothing else it seems like a strong sanity check if you ask asset-recipients in 2 years and confirm they prefer the asset.
At a very basic intuitive level, hearing "participants indicated strong preference for receiving our assets to receiving twice as much cash" feels more persuasive than comparing some measured outcome between the two groups (at least for this kind of asset transfer program where it seems reasonable to defer to participants about what they need/want).
Compared to MIRI: We are trying to align AI systems trained using techniques like modern machine learning. We're looking for solutions that are (i) competitive, i.e. don't make the resulting AI systems much weaker, (ii) work no matter how far we scale up ML, (iii) work for any plausible situation we can think of, i.e. don't require empirical assumptions about what kind of thing ML systems end up learning. This forces us to confront many of the same issues as MIRI, though we are doing so in a very different style that you might describe as "algorithm-first" rather than "understanding-first." You can read a bit about our methodology in "My research methodology" or this section of our ELK writeup.
I think that most researchers at MIRI don't think that this goal is achievable, at least not without some kind of philosophical breakthrough. We don't have the same intuition (perhaps we're 50-50). Some of the reasons: it looks to us like there are a bunch of possible approaches for making progress, there aren't really any clear articulations of fundamental obstacles that will cause those approaches to fail, and there is extremely little existing work pursuing plausible worst-case algorithms. Right now it mostly seems like people just have varying intuitions, but searching for a worst-case approach seems like it's a good deal as long as there's a reasonable chance it's possible. (And if we fail we expect to learn something about why.)
Compared to everyone else: We think of a lot of possible algorithms, but we can virtually always rule them out without doing any experiments. That means we are almost always doing theoretical research with pen and paper. It's not obvious whether a given algorithm works in practice, but it usually is obvious that there exist plausible situations where it wouldn't work, and we are searching (optimistically) for something that works in every plausible situation.
So I'd much rather people focus on the claim that "AI will be really, really big" than "AI will be bigger than anything else which comes afterwards".
I think AI is much more likely to make this the most important century than to be "bigger than anything else which comes afterwards." Analogously, the 1000 years after the IR are likely to be the most important millennium even though it seems basically arbitrary whether you say the IR is more or less important than AI or the agricultural revolution. In all those cases, the relevant thing is that a significant fraction of all remaining growth and technological change is likely to occur in the period, and many important events are driven by growth or tech change.
The answer to this question could change our estimate of P(this is the most important century) by an order of magnitude
I think it's more likely than not that there will be future revolutions as important as TAI, but there's a good probability that AI leads to enough acceleration that a large fraction of future revolutions occur in the same century. There's room for debate over the exact probability and timeline for such acceleration, but I think no real way to argue for anything as low as 10%.
We were previously comparing two hypotheses:
Now we're comparing three:
"Wild time" is almost as unlikely as HoH. Holden is trying to suggest it's comparably intuitively wild, and it has pretty similar anthropic / "base rate" force.
So if your arguments look solid, "All futures are wild" makes hypothesis 2 look kind of lame/improbable—it has to posit a flaw in an argument, and also that you are living at a wildly improbable time. Meanwhile, hypothesis 1 merely has to posit a flaw in an argument, and hypothesis 3 merely has to posit HoH (which is only somewhat more to swallow than a wild time).
So now if you are looking for errors, you probably want to look for errors in the argument that we are living at a "wild time." Realistically, I think you probably need to reject the possibility that the stars are real and that it is possible for humanity to spread to them. In particular, it's not too helpful to e.g. be skeptical of some claim about AI timelines or about our ability to influence society's trajectory.
This is kind of philosophically muddled because (I think) most participants in this discussion already accept a simulation-like argument that "Most observers like us are mistaken about whether it will be possible for them to colonize the stars." If you set aside the simulation-style arguments, then I think the "all futures are wild" correction is more intuitively compelling.
(I think if you tell people "Yes, our good skeptical epistemology allows us to be pretty confident that the stars don't exist" they will have a very different reaction than if you tell them "Our good skeptical epistemology tells us that we aren't the most influential people ever.")
I do think my main impression of insect <-> simulated robot parity comes from very fuzzy evaluations of insect motor control vs simulated robot motor control (rather than from any careful analysis, of which I'm a bit more skeptical, though I do think it's a relevant indicator that we are at least trying to actually figure out the answer here in a way that wasn't true historically). And I do have only a passing knowledge of insect behavior, from watching youtube videos and reading some book chapters about insect learning. So I don't think it's unfair to put it in the same reference class as Rodney Brooks' evaluations, to the extent that his was intended as a serious evaluation.
And here's the initial post (which seems a bit less reasonable, since I'd spent less time learning about what was going on):