No, sorry. Here's a copy-paste though.
Yet another post about solar! This time about land use.
TL;DR:
Suppose that you handle low winter solar generation by just building 3-6x more panels than you need in summer and wasting all the extra power.
1. The price of the required land is about 0.1 cents per kWh (2% of current electricity prices).
2. Despite the cost being low, the absolute amounts of land used are quite large: replacing all US energy requires about 8% of US land, and for Japan about 30%. This seems reasonably likely to be a political obstacle.
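The ~0.1 cents/kWh figure is easy to sanity-check. A minimal back-of-the-envelope sketch, where the land price, power density, capacity factor, and amortization period are all my own illustrative assumptions rather than numbers from the post:

```python
# Back-of-the-envelope land cost for overbuilt solar.
# All inputs are illustrative assumptions, not figures from the post.
land_price = 5_000 * 100       # $/km^2, assuming ~$5,000/ha for cheap farmland
power_density = 70             # MW of panels per km^2 of solar farm
capacity_factor = 0.20         # average output as a fraction of peak
overbuild = 4.5                # midpoint of the 3-6x winter overbuild
amortization_years = 30        # rough panel/installation lifetime

kwh_per_km2_year = power_density * 1e3 * capacity_factor * 8760
annual_land_cost = land_price / amortization_years          # $/km^2/year
cents_per_kwh = 100 * overbuild * annual_land_cost / kwh_per_km2_year
print(f"{cents_per_kwh:.3f} cents/kWh")  # prints "0.061 cents/kWh"
```

With these assumptions the land works out to roughly 0.06 cents per delivered (non-wasted) kWh, the same order of magnitude as the 0.1 cents claimed.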
I’m not too confident
This does require prices going down. I think prices in many domains have gone up (a lot) over the last few years, so it doesn't seem like a lot of evidence about technological progress for solar panels. (Though some people might take it as a warning shot for long-running decay that would interfere with a wide variety of optimistic projections from the past.)
I think it's not clear whether non-technological factors get cheaper or more expensive at larger scales. Seems to me like "expected cost is below current electricity costs" is a reasonable guess, but …
I wrote a series of posts on the feasibility of an all-solar grid last year, here (it links to two prior posts).
Overall my tentative conclusion was:
Regarding susceptibility to s-risk:
When we eventually told the cash arm participants that we had given other households assets of the same value, most said they would have preferred the assets: “We don’t have good products to buy here.” We had also originally planned to work in 2 countries but ended up working in just 1, freeing up enough budget to pay for cash.
I'm intuitively drawn to cash transfer arms, but "just ask the participants what they would want" also sounds very compelling for basically the same reasons. Ideally you could do that both before and after ("would you recommend…
Compared to MIRI: We are trying to align AI systems trained using techniques like modern machine learning. We're looking for solutions that are (i) competitive, i.e. don't make the resulting AI systems much weaker, (ii) work no matter how far we scale up ML, (iii) work for any plausible situation we can think of, i.e. don't require empirical assumptions about what kind of thing ML systems end up learning. This forces us to confront many of the same issues as MIRI, though we are doing so in a very different style that you might describe as "algorithm-first"…
So I'd much rather people focus on the claim that "AI will be really, really big" than "AI will be bigger than anything else which comes afterwards".
I think AI is much more likely to make this the most important century than to be "bigger than anything else which comes afterwards." Analogously, the 1000 years after the IR are likely to be the most important millennium even though it seems basically arbitrary whether you say the IR is more or less important than AI or the agricultural revolution. In all those cases, the relevant thing is that a significant …
We were previously comparing two hypotheses:
Now we're comparing three:
"Wild time" is almost as unlikely as HoH. Holden is trying to suggest it's comparably intuitively wild, and it has pretty similar anthropic / "base rate" force.
So if your arguments look solid, "All futures are wild" makes hypothesis 2 look kind of lame/improbable---it has to posit a flaw in an argument, and also that you are living at a wildly improb…
I do think my main impression of insect <-> simulated robot parity comes from very fuzzy evaluations of insect motor control vs simulated robot motor control (rather than from any careful analysis, of which I'm a bit more skeptical though I do think it's a relevant indicator that we are at least trying to actually figure out the answer here in a way that wasn't true historically). And I do have only a passing knowledge of insect behavior, from watching youtube videos and reading some book chapters about insect learning. So I don't think it's unfair to put it in the same reference class as Rodney Brooks' evaluations to the extent that his was intended as a serious evaluation.
The Nick Bostrom quote (from here) is:
In retrospect we know that the AI project couldn't possibly have succeeded at that stage. The hardware was simply not powerful enough. It seems that at least about 100 Tops is required for human-like performance, and possibly as much as 10^17 ops is needed. The computers in the seventies had a computing power comparable to that of insects. They also achieved approximately insect-level intelligence.
I would have guessed this is just a funny quip, in the sense that (i) it sure sounds like it's just a throw-away quip, no e…
Ironically, although cost-benefit analysts generally ignore the diminishing marginal benefit of money when they are aggregating value across people at a single date, their main case for discounting future commodities is founded on this diminishing marginal benefit.
I think the "main" (i.e. econ 101) case for time discounting (for all policy decisions other than determining savings rates) is roughly the one given by Robin here.
I don't think there is a big incongruity here. Questions about diminishing returns to wealth become relevant when trying …
For governments who have the option to tax, WTP has obvious relevance as a way of comparing a policy to a benchmark of taxation+redistribution. I tentatively think that an idealized state (representing any kind of combination of its constituents' interests) ought to use a WTP analysis for almost all of its policy decisions. I wrote some opinionated thoughts here.
It's less clear if this is relevant for a realistic state, and the discussion becomes more complex. I think it depends on a question like "what is the role of cost-effectiveness analysis in context…
A 5% probability of disaster isn't any more or less confident/extreme/radical than a 95% probability of disaster; in both cases you're sticking your neck out to make a very confident prediction.
"X happens" and "X doesn't happen" are not symmetrical once I know that X is a specific event. Most things at the level of specificity of "humans build an AI that outmaneuvers humans to permanently disempower them" just don't happen.
The reason we are even entertaining this scenario is because of a special argument that makes it seem very plausible. If that's all you've g…
Is your impression that if customers were willing to pay for it, then that wouldn't be sufficient cause to say that it benefited customers? (Does that mean that e.g. a standard ensuring that children's food doesn't cause discomfort also can't be protected, since it benefits customers' kids rather than customers themselves?)
These cases are also relevant to alignment agreements between AI labs, and it's interesting to see this playing out in practice. Cullen wrote about this here much better than I will.
Roughly speaking, if individual consumers would prefer to use a riskier AI (because costs are externalized), then it seems like an agreement to make AI safer-but-more-expensive would run afoul of the same principles as this chicken-welfare agreement.
On paper, there are some reasons that the AI alignment case should be easier than the chicken-welfare case: (i) using unsafe AI hurt…
I guess I wouldn't recommend the donor lottery to people who wouldn't be happy entering a regular lottery for their charitable giving.
Strong +1.
If I won a donor lottery, I would consider myself to have no obligation whatsoever towards the other lottery participants, and I think many other lottery participants feel the same way. So it's potentially quite bad if some participants are thinking of me as an "allocator" of their money. To the extent there is ambiguity in the current setup, it seems important to try to eliminate that.
The relevant section is VII. Summarizing the six empirical tests:
I think that if I were going on outside-view economic arguments I'd probably be <50% singularity by 2100.
To what extent is this a repudiation of Roodman's outside-view projection? My guess is you'd say something like "This new paper is more detailed and trustworthy than Roodman's simple model, so I'm assigning it more weight, but still putting a decent amount of weight on Roodman's being roughly correct and that's why I said <50% instead of <10%."
Thanks for outlining the tests.
I'm not really sure what he thinks the probability of the singularity before 2100 is. My reading was that, given his tests, he probably doesn't think the singularity is (eg) >10% likely before 2100. 2 of the 7 tests suggest a singularity after 100 years and 5 of them fail. It might be worth someone asking him for his view on that.
If the market can't price 30-year cashflows, it can't price anything, since for any infinitely-lived asset (eg stocks!), most of the present-discounted value of future cash flows is far in the future.
If an asset pays me far in the future, then long-term interest rates are one factor affecting its price. But it seems to me that in most cases that factor still explains a minority of variation in prices (and because it's a slowly-varying factor it's quite hard to make money by predicting it).
For example, there is a ton of uncertainty about how muc…
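The claim that most of an equity's present value sits far in the future can be made concrete in a Gordon growth model. A small sketch, where the discount rate and dividend growth rate are assumed for illustration:

```python
# Share of a stock's present value coming from cashflows after year T,
# in a Gordon growth model: each successive year's PV shrinks by a factor
# (1+g)/(1+r), so the tail share is just that ratio to the power T.
# r and g are illustrative assumptions, not numbers from the comment.
r = 0.05   # discount rate
g = 0.02   # dividend growth rate
T = 30

share_beyond_T = ((1 + g) / (1 + r)) ** T
print(f"{share_beyond_T:.0%}")  # prints "42%": PV share beyond year 30
```

So whether 30-year-plus cashflows dominate equity value depends a lot on the assumed r − g gap; with a smaller gap the tail share rises quickly.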
I think the market just doesn't put much probability on a crazy AI boom anytime soon. If you expect such a boom then there are plenty of bets you probably want to make. (I am personally short US 30-year debt, though it's a very small part of my AI-boom portfolio.)
I think it's very hard for the market to get 30-year debt prices right because the time horizons are so long and they depend on super hard empirical questions with ~0 feedback. Prices are also determined by supply and demand across a truly huge number of traders, and making this trade locks up y…
Some scattered thoughts (sorry for such a long comment!). Organized in order rather than by importance---I think the most important argument for me is the analogy to computers.
Scaling down all the amounts of time, here's how that situation sounds to me: US output doubles in 15 years (basically the fastest it ever has), then doubles again in 7 years. The end of the 7 year doubling is the first time that your hypothetical observer would say "OK yeah maybe we are transitioning to a new faster growth mode," and stuff started getting clearly crazy during the 7 year doubling. That scenario wouldn't be surprising to me. If that scenario sounds typical to you then it's not clear there's anything we really disagree about.
Some thoughts on the historical analogy:
If you look at the graph at the 1700 mark, GWP is seemingly on the same trend it had been on since antiquity. The industrial revolution is said to have started in 1760, and GWP growth really started to pick up steam around 1850. But by 1700 most of the Americas, the Philippines and the East Indies were directly ruled by European powers.
I think European GDP was already pretty crazy by 1700. There's been a lot of recent arguing about the particular numbers and I am definitely open to just being wrong about this, but so …
I'm not sure what difference in prioritization this would imply or if we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse and so erosion of liberal norms is worthwhile for some people to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).
It doesn't currently seem like thinking or working on this issue should be a priority for me (eve…
I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn’t intend for the grandparent to be pushing against that.
I think this is the crux of the issue, where we have this pattern where I interpret your comments (here, and with various AI safety problems) as downplaying some problem that I think is important, or is likely to have that effect in other people's minds and thereby make them less likely to work on the problem, so I push back on that, but maybe you were just try…
My process was to check the "About the forum" link on the left hand side, see that there was a section on "What we discourage" that made no mention of hiring, then search for a few job ads posted on the forum and check that no disapproval was expressed in the comments of those posts.
I think that a scaled up version of GPT-3 can be directly applied to problems like "Here's a situation. Here's the desired result. What action will achieve that result?" (E.g. you can already use it to get answers like "What copy will get the user to subscribe to our newsletter?" and we can improve performance by fine-tuning on data about actual customer behavior or by combining GPT-3 with very simple search algorithms.)
I think that if GPT-3 was more powerful then many people would apply it to problems like that. I'm conc…
Hires would need to be able to move to the US.
No, I'm talking somewhat narrowly about intent alignment, i.e. ensuring that our AI system is "trying" to do what we want. We are a relatively focused technical team, and a minority of the organization's investment in safety and preparedness.
The policy team works on identifying misuses and developing countermeasures, and the applied team thinks about those issues as they arise today.
The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and pub…
To follow up on this: Paul and I had an offline conversation about it, but it kind of petered out before reaching a conclusion. I don't recall all that was said, but I think a large part of my argument was that "jumping ship" or being forced off for ideological reasons was not "fine" when it happened historically, for example to communists in Hollywood and conservatives in academia, but represented disasters (i.e., very large losses of influence and resources) for those causes. I'm not sure if this changed Paul's mind.
Thanks, super helpful.
(I don't really buy an overall take like "It seems unlikely" but it doesn't feel that mysterious to me where the difference in take comes from. From the super zoomed out perspective 1200 AD is just yesterday from 1700 AD, it seems like random fluctuations over 500 years are super normal and so my money would still be on "in 500 years there's a good chance that China would have again been innovating and growing rapidly, and if not then in another 500 years it's reasonably likely..." It makes sense to describe that situation as "nowhere close to IR" though. And it does sound like the super fast growth is a blip.)
I took numbers from Wikipedia but have seen different numbers that seem to tell the same story although their quantitative estimates disagree a ton.
The first two numbers are both higher than growth rates could have plausibly been in a sustained way during any previous part of history (and the 0-1000AD one probably is as well), and they se…
If one believed the numbers on wikipedia, it seems like Chinese growth was also accelerating a ton and it was not really far behind on the IR, such that I wouldn't expect to be able to easily eyeball the differences.
If you are trying to model things at the level that Roodman or I are, the difference between 1400 and 1600 just isn't a big deal, the noise terms are on the order of 500 years at that point.
So maybe the interesting question is if and why scholars think that China wouldn't have had an IR shortly after Europe (i.e. within a few cen…
If one believed the numbers on wikipedia, it seems like Chinese growth was also accelerating a ton and it was not really far behind on the IR, such that I wouldn't expect to be able to easily eyeball the differences.
I believe the population surge is closely related to the European population surge: it's largely attributed to the Columbian exchange + expanded markets/trade. One of the biggest things is that there's an expansion in the land under cultivation, since potatoes and maize can be grown on marginal land that wouldn't otherwise work well for rice…
My model is that most industries start with fast s-curve like growth, then plateau, then often decline.
I don't know exactly what this means, but it seems like most industries in the modern world are characterized by relatively continuous productivity improvements over periods of decades or centuries. The obvious examples to me are semiconductors and AI since I deal most with those. But it also seems true of e.g. manufacturing, agricultural productivity, batteries, construction costs. It seems like industries where the productivity vs time curve is a …
it seems like most industries in the modern world are characterized by relatively continuous productivity improvements over periods of decades or centuries
This agrees with my impression. Just in case someone is looking for references for this, see e.g.:
It feels like you are drawing some distinction between "contingent and complicated" and "noise." Here are some possible distinctions that seem relevant to me but don't actually seem like disagreements between us:
I think Roodman's model implies a standard deviation of around 500-1000 years for IR timing starting from 1000AD, but I haven't checked. In general for models of this type it seems like the expected time to singularity is a small multiple of the current doubling time, with noise also being on the order of the doubling time.
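The claim that expected time to singularity is a small multiple of the current doubling time is easy to see in the simplest deterministic hyperbolic model (my illustration, far simpler than Roodman's stochastic model):

```python
# dY/dt = a*Y^2 has solution Y(t) = Y0 / (1 - a*Y0*t), which diverges at
# t = 1/(a*Y0). Setting Y(t) = 2*Y0 gives the current doubling time.
# a and y0 are arbitrary illustrative constants.
a, y0 = 0.5, 1.0

t_singularity = 1 / (a * y0)     # finite-time blowup
t_double = 1 / (2 * a * y0)      # time for output to double from Y0
print(t_singularity / t_double)  # prints 2.0: singularity after 2 doubling times
```

In this toy model the singularity always arrives exactly two current doubling times out, which is why noise terms on the order of the doubling time translate into such large uncertainty about IR timing.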
The model clearly underestimates correlations and hence the variance here---regardless of whether we go in for "2 revolutions" or "randomly spread out" we can all agree that a stagnant doubling is more likel…
I think that Hanson's "series of 3 exponentials" is the neatest alternative, although I also think it's possible that pre-modern growth looked pretty different from clean exponentials (even on average / beneath the noise). There's also a semi-common narrative in which the two previous periods exhibited (on average) declining growth rates, until there was some 'breakthrough' that allowed the growth rate to surge: I suppose this would be a "three s-curve" model. Then there's the possibility that the growth pa…
because I have a bunch of very concrete, reasonably compelling sounding stories of specific things that caused the relevant shifts
Be careful that you don't have too many stories, or it starts to get continuous again.
More seriously, I don't know what the small # of factors are for the industrial revolution, and my current sense is that the story can only seem simple for the agricultural revolution because we are so far away and ignoring almost all the details.
It seems like the only factor that looks a priori like it should cause a discontinuity is …
I mean something much more basic. If you have more parameters then you need to have uncertainty about every parameter. So you can't just look at how well the best "3 exponentials" hypothesis fits the data, you need to adjust for the fact that this particular "3 exponentials" model has lower prior probability. That is, even if you thought "3 exponentials" was a priori equally likely to a model with fewer parameters, every particular instance of 3 exponentials needs to be less probable than every particular model with fewer parameters.
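A toy way to see the point about parameters (my framing, with an arbitrary discretization): if each free parameter is discretized into k values and prior mass is spread uniformly over all the resulting specific hypotheses, every particular many-parameter hypothesis starts with far less mass than every particular few-parameter one.

```python
# Uniform prior over all hypotheses reachable by discretizing each free
# parameter into k values: each specific hypothesis gets mass 1/k**n.
# k = 10 is an assumed granularity, chosen only for illustration.
k = 10

def prior_per_hypothesis(n_params: int) -> float:
    return 1 / k**n_params

one_exponential = prior_per_hypothesis(2)     # e.g. initial level + growth rate
three_exponentials = prior_per_hypothesis(6)  # three levels + three rates
print(one_exponential / three_exponentials)   # 10,000x prior penalty
```

The exact penalty depends on the discretization, but the direction doesn't: a particular "3 exponentials" story has to fit the data enough better to pay for its extra parameters.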
Thanks, this was a usef…
This would be an important update for me, so I'm excited to see people looking into it and to spend more time thinking about it myself.
High-level summary of my current take on your document:
I feel really confused what the actual right priors here are supposed to be. I find the "but X has fewer parameters" argument only mildly compelling, because I feel like other evidence about similar systems that we've observed should easily give us enough evidence to overcome the difference in complexity.
This does mean that a lot of my overall judgement on this question relies on the empirical evidence we have about similar systems, and the concrete gears-level models I have for what has caused growth. AI Impacts' work on discontinuous vs. continuous…
This is one of my favorite comments on the Forum. Thanks for the thorough response.
This is only 2.4 standard deviations assuming returns follow a normal distribution, which they don't.
No, 2.4 standard deviations is 2.4 standard deviations.
It's possible to have distributions for which this is more or less surprising.
For a normal distribution, this happens about once every 200 periods. I totally agree that this isn't a factor of 200 evidence against your view. So maybe saying "falsifies" was too strong.
But no distribution is 2.35 standard deviations below its mean with probability more than 18%. That's lite…
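The 18% figure here is the two-sided Chebyshev bound, which holds for any distribution with finite variance; a normal distribution puts far less mass that far below its mean. A quick check:

```python
import math

k = 2.35  # number of standard deviations below the mean

# Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2 for any finite-variance X,
# so the one-sided probability is at most 1/k^2 as well.
chebyshev_bound = 1 / k**2

# For a normal distribution, P(X <= mu - k*sigma) via the error function.
normal_tail = 0.5 * (1 + math.erf(-k / math.sqrt(2)))

print(f"{chebyshev_bound:.1%} vs {normal_tail:.2%}")  # prints "18.1% vs 0.94%"
```

The one-sided (Cantelli) version tightens the distribution-free bound to 1/(1 + k²) ≈ 15%, so 18% is a safe upper bound either way.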
I haven't done a deep dive on this but I think futures are better than this analysis makes them look.
Suppose that I'm in the top bracket and pay 23% taxes on futures, and that my ideal position is 2x SPY.
In a tax-free account I could buy SPY and 1x SPY futures, to get (2x SPY - 1x interest).
In a taxable account I can buy 1x SPY and 1.3x SPY futures. Then my after-tax expected return is again (2x SPY - 1x interest).
The catch is that if I lose money, some of my wealth will take the form of taxable losses that I can use to offset gains in future yea…
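The 1x SPY + 1.3x futures arithmetic checks out: the futures notional is grossed up by 1/(1 − tax rate) so that the after-tax exposure matches the target. A minimal sketch using the 23% rate from the comment (and assuming, as there, that gains on the directly held SPY go unrealized):

```python
tax_rate = 0.23          # tax on futures gains, as in the comment
target_exposure = 2.0    # want the equivalent of 2x SPY

spy_position = 1.0       # held directly; gains unrealized, so untaxed here
# Gross up the futures leg so its after-tax exposure fills the remaining gap.
futures_notional = (target_exposure - spy_position) / (1 - tax_rate)
after_tax_exposure = spy_position + futures_notional * (1 - tax_rate)

print(round(futures_notional, 2), round(after_tax_exposure, 6))  # 1.3 2.0
```

So ~1.3x of futures on top of 1x SPY reproduces a 2x after-tax exposure, leaving the asymmetric treatment of losses as the remaining wrinkle.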
I'm surprised by (and suspicious of) the claim about so many more international shares being non-tradeable, but it would change my view.
I would guess the savings rate thing is relatively small compared to the fact that a much larger fraction of US GDP is investable in the stock market---the US is 20-25% of world GDP, but the US is 40% of total stock market capitalization and I think US corporate profits are also ballpark 40% of all publicly traded corporate profits. So if everyone saved the same amount and invested in their home country, US equities would …
I also like GMP, and find the paper kind of surprising. I checked the endpoints stuff a bit and it seems like it can explain a small effect but not a huge one. My best guess is that going from equities to GMP is worth like +1-2% risk-free returns.
I like the basic point about leverage and think it's quite robust.
But I think the projected returns for VMOT+MF are insane. And as a result the 8x leverage recommendation is insane, someone who does that is definitely just going to go broke. (This is similar to Carl's complaint.)
My biggest problem with this estimate is that it kind of sounds crazy and I don't know very good evidence in favor. But it seems like these claimed returns are so high that you can also basically falsify them by looking at the data between when VMOT was founded and w…
We could account for this by treating mean return and standard deviation as distributions rather than point estimates, and calculating utility-maximizing leverage across the distribution instead of at a single point. This raises a further concern that we don’t even know what distribution the mean and standard deviation have, but at least this gets us closer to an accurate model.
Why not just take the actual mean and standard deviation, averaging across the whole distribution of models?
What exactly is the "mean" you are quoting, if it's not your subjective expectation of returns?
(Also, I think the costs of choosing leverage wrong are pretty symmetric.)
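One concrete way to read "take the actual mean and standard deviation across the whole distribution of models": fold parameter uncertainty into a single predictive distribution and size leverage off that. A sketch with made-up numbers, using the standard Merton fraction (mu − r)/(gamma · sigma²) for optimal leverage:

```python
# All inputs are illustrative assumptions, not anyone's actual estimates.
mu = 0.05          # point estimate of excess return
sigma = 0.16       # point estimate of return volatility
sigma_mu = 0.03    # subjective std dev of the *true* mean return
gamma = 2.0        # relative risk aversion

# Predictive variance of next period's return = model variance plus the
# extra variance coming from not knowing the true mean.
predictive_var = sigma**2 + sigma_mu**2

merton_point = mu / (gamma * sigma**2)        # ignores parameter uncertainty
merton_full = mu / (gamma * predictive_var)   # uses the full distribution

print(round(merton_point, 2), round(merton_full, 2))  # prints "0.98 0.94"
```

Uncertainty about the mean just widens the predictive distribution and pulls the utility-maximizing leverage down; there's no need for a separate "distribution over optimal leverages."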
My understanding is that the sharpe ratio of the global portfolio is quite similar to the equity portfolio (e.g. see here for data on the period from 1960-2017, finding 0.36 for the global market and 0.37 for equities).
I still do expect the broad market to outperform equities alone, but I don't know where the super-high estimates for the benefits of diversification are coming from, and I expect the effect to be much more modest than the one described in the linked post by Ben Todd. Do you know what's up with the discrepancy? It could be about cho…
To use leverage, you will probably end up having to pay about 1% on top of short-term interest rates
Not a huge deal, but it seems like the typical overhead is about 0.3%:
I think it's pretty dangerous to reason "asset X has outperformed recently, so I expect it to outperform in the future." An asset can outperform because it's becoming more expensive, which I think is partly the case here.
This is most obvious in the case of bonds---if 30-year bonds from A are yielding 2%/year and then fall to 1.5%/year over a decade, while 30-year bonds from B are yielding 2%/year and stay at 2%/year, then it will look like the bonds from A are performing about twice as well over the decade. But this is a very bad reason…
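The bond point is easy to make concrete. A sketch with assumed mechanics (annual coupons, coupons not reinvested, both bonds bought at par and held for ten years):

```python
# Price of a bond with annual coupons, given yield and years remaining.
def bond_price(face, coupon_rate, yield_rate, years):
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + yield_rate) ** years

# Both start as 30-year bonds at par yielding 2%. After a decade, A's yield
# has fallen to 1.5% while B's is unchanged at 2%.
p_a = bond_price(100, 0.02, 0.015, 20)  # 20 years remain
p_b = bond_price(100, 0.02, 0.020, 20)  # still priced at par

coupons = 10 * 2  # ten years of 2% coupons, reinvestment ignored
return_a = (p_a - 100 + coupons) / 100
return_b = (p_b - 100 + coupons) / 100
print(f"{return_a:.1%} vs {return_b:.1%}")  # prints "28.6% vs 20.0%"
```

All of A's extra return comes from its yield falling, i.e. from the bond getting more expensive, which is exactly the component that shouldn't be extrapolated forward.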
My main point was that in any case what matters are the degree of alignment of the AI systems, and not their consciousness. But I agree with what you are saying.
If our plan for building AI depends on having clarity about our values, then it's important to achieve such clarity before we build AI---whether that's clarity about consciousness, population ethics, what kinds of experience are actually good, how to handle infinities, weird simulation stuff, or whatever else.
I agree consciousness is a big ? in our axiology, though it's not clear if …
I don't think it matters that much (for the long-term) if the AI systems we build in the next century are conscious. What matters is how they think about what possible futures they can bring about.
If AI systems are aligned with us, but turned out not to be conscious or not very conscious, then they would continue this project of figuring out what is morally valuable and so bring about a world we'd regard as good (even though it likely contains very few minds that resemble either us or them).
If AI systems are conscious but not at all aligned with …
And here's the initial post (which seems a bit less reasonable, since I'd spent less time learning about what was going on):