All of Paul_Christiano's Comments + Replies

Updating on Nuclear Power

And here's the initial post (which seems a bit less reasonable, since I'd spent less time learning about what was going on):

Given current trends in technology and policy, solar panels seem like the easiest way to make clean electricity (and soon the easiest way to make energy at all). I’m interested in thinking/learning about what a 100% solar grid would look like.

Here are my own guesses.

(I could easily imagine this being totally wrong because I’m a layperson who has only spent a little while looking into this. I’m not going to have “I think caveats” in fr

... (read more)
Updating on Nuclear Power

No, sorry. Here's a copy-paste though.

Yet another post about solar! This time about land use.

TL;DR

Suppose that you handle low solar generation winter by just building 3-6x more panels than you need in summer and wasting all the extra power.

  1. The price of the required land is about 0.1 cents per kWh (2% of current electricity prices; see the sketch below).

2. Despite the cost being low, the absolute amounts of land used are quite large. Replacing all US energy requires 8% of our land, for Japan 30%. This seems reasonably likely to be a political obstacle.

I’m not too confident

... (read more)
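A rough sanity check of the ~0.1 cents/kWh land-cost figure above. All input numbers here are my own stand-in assumptions, not from the post:

```python
# Sketch: land cost per generated kWh for utility-scale solar.
# Assumed inputs (mine, not the post's): ~70 W of panel capacity per m^2 of
# land after row spacing, ~20% capacity factor, land at $10,000/acre
# amortized at 5%/year.
ACRE_M2 = 4047                    # square meters per acre

watts_per_m2 = 70                 # panel capacity per m^2 of land
capacity_factor = 0.20
kwh_per_m2_year = watts_per_m2 * 8760 * capacity_factor / 1000  # ~123 kWh

land_cost_per_m2_year = 10_000 / ACRE_M2 * 0.05                 # ~$0.12/m^2/yr

cents_per_kwh = land_cost_per_m2_year / kwh_per_m2_year * 100
print(f"{cents_per_kwh:.2f} cents/kWh")  # ~0.10
```

Note this is per kWh generated; the 3-6x winter overbuild in the TL;DR scales the land requirement (and its cost) proportionally for kWh actually consumed.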
Updating on Nuclear Power

This does require prices going down. I think prices in many domains have gone up (a lot) over the last few years, so it doesn't seem like a lot of evidence about technological progress for solar panels. (Though some people might take it as a warning shot for long-running decay that would interfere with a wide variety of optimistic projections from the past.)

I think it's not clear whether non-technological factors get cheaper or more expensive at larger scales. Seems to me like "expected cost is below current electricity costs" is a reasonable guess, but... (read more)

Updating on Nuclear Power

I wrote a series of posts on the feasibility of an all-solar grid last year, here (it links to two prior posts).

Overall my tentative conclusion was:

  • It's economically feasible to go all solar without firm generation, at least in places at the latitude of the US (further north it becomes impossible, you'd need to import power).
  • The price of the land required for all-solar production seems very small.
  • However, the absolute amount of land required is nonetheless quite large. In the US building enough solar to supply all energy needs through a cloudy winter would
... (read more)
rileyharris · 1 point · 2mo
Thanks, it looks like you've put a lot of effort into summarising this information (it actually looks better and higher effort than my original post, oop).
Wei_Dai · 5 points · 2mo
How much does this depend on the costs of solar+storage continuing to fall? (In one of your FB posts you wrote "Given 10-20 years and moderate progress on solar+storage I think it probably makes sense to use solar power for everything other than space heating".) Because I believe since you wrote the FB posts, these prices have been going up instead. See this [https://www.theedgemarkets.com/article/most-2022s-solar-pv-projects-risk-delay-or-cancellation-due-soaring-costs-says-rystad] or this [https://www.utilitydive.com/news/solar-growth-new-installations-slowing-in-the-face-of-18-price-increases/620297/].

Covering 8% of the US or 30% of Japan (eventually 8-30% of all land on Earth?) with solar panels would take a huge amount of raw materials, and mining has obvious diseconomies at this kind of scale (costs increase as the lowest-cost mineral deposits are used up), so it seems premature to conclude "economically feasible" without some investigation into this aspect of the problem.
dumont · 1 point · 2mo
Is there a non-FB version of these posts?
Is AI safety still neglected?

Regarding susceptibility to s-risk:

  • If you keep humans around, they can decide on how to respond to threats and gradually improve their policies as they figure out more (or their AIs figure out more).
  • If you build incorrigible AIs who will override human preferences (so that a threatened human has no ability to change the behavior of their AI), while themselves being resistant to threats, then you may indeed reduce the likelihood of threats being carried out.
  • But in practice all the value is coming from you solving "how do we deal with threats?" at the same t
... (read more)
Against cash benchmarking for global development RCTs

When we eventually told the cash arm participants that we had given other households assets of the same value, most said they would have preferred the assets: “We don’t have good products to buy here.” We had also originally planned to work in 2 countries but ended up working in just 1, freeing up enough budget to pay for cash.

I'm intuitively drawn to cash transfer arms, but "just ask the participants what they would want" also sounds very compelling for basically the same reasons. Ideally you could do that both before and after ("would you recommend... (read more)

Rory Fenton · 9 points · 3mo
I really like the idea of asking people what assets they would like. We did do a version of this to determine what products to offer, using qualitative interviews where people ranked ~30 products in order of preference. This caused us to add more chickens and only offer maize inputs to people who already grew maize. But participants had to choose from a narrow list of products (those with RCT evidence that we could procure); I'd love to have given them freedom to suggest anything. We did also consider letting households determine which products they received within a fixed budget (rather than every household getting the same thing) but the logistics got too difficult.

Interestingly, people had zero interest in deworming pills, oral rehydration salts or Vitamin A supplements as they were not aware of needing them -- I could see tensions arising between households not valuing these kinds of products and donors wanting to give them based on cost-effectiveness models. This "what do you want" approach might work best with products that recipients already have reasonably accurate mental models of, or that can be easily and accurately explained.

Very interesting suggestion: we did try something like this but didn't consider it as an outcome measure and so didn't put proper thought/resources into it. We asked people, "How much would you be willing to pay for product X?", with the goal of saying something like "Participants valued our $120 bundle at $200", but unfortunately the question generally caused confusion: participants would think we were asking them to pay for the product they'd received for free and either understandably got upset or just tried lowballing us with their answer, expecting it to be a negotiation. If we had thought of it in advance, perhaps this would have worked as a way to generate real value estimates:

  • We randomise participants into groups
  • The first group is offered either our bundle (worth $120) or $120 cash
  • If >50% take the bundle, we then a
ARC is hiring alignment theory researchers

Compared to MIRI: We are trying to align AI systems trained using techniques like modern machine learning. We're looking for solutions that are (i) competitive, i.e. don't make the resulting AI systems much weaker, (ii) work no matter how far we scale up ML, (iii) work for any plausible situation we can think of, i.e. don't require empirical assumptions about what kind of thing ML systems end up learning. This forces us to confront many of the same issues as MIRI, though we are doing so in a very different style that you might describe as "algorithm-first"... (read more)

Forecasting transformative AI: what's the burden of proof?

So I'd much rather people focus on the claim that "AI will be really, really big" than "AI will be bigger than anything else which comes afterwards".

I think AI is much more likely to make this the most important century than to be "bigger than anything else which comes afterwards." Analogously, the 1000 years after the IR are likely to be the most important millennium even though it seems basically arbitrary whether you say the IR is more or less important than AI or the agricultural revolution. In all those cases, the relevant thing is that a significant ... (read more)

All Possible Views About Humanity's Future Are Wild

We were previously comparing two hypotheses:

  1. HoH-argument is mistaken
  2. Living at HoH

Now we're comparing three:

  1. "Wild times"-argument is mistaken
  2. Living at a wild time, but HoH-argument is mistaken
  3. Living at HoH

"Wild time" is almost as unlikely as HoH. Holden is trying to suggest it's comparably intuitively wild, and it has pretty similar anthropic / "base rate" force.

So if your arguments look solid,  "All futures are wild" makes hypothesis 2 look kind of lame/improbable---it has to posit a flaw in an argument, and also that you are living at a wildly improb... (read more)

Ardenlk · 7 points · 1y
Am I right in thinking, Paul, that your argument here is very similar to Buck's in this post? https://forum.effectivealtruism.org/posts/j8afBEAa7Xb2R9AZN/thoughts-on-whether-we-re-living-at-the-most-influential. Basically you're saying that if we already know things are pretty wild (in Buck's version: that we're early humans) it's a much less fishy step from there to very wild ('we're at HoH') than it would be if we didn't know things were pretty wild already.
Ben Garfinkel · 5 points · 1y
Thanks for the clarification! I still feel a bit fuzzy on this line of thought, but hopefully understand a bit better now.

At least on my read, the post seems to discuss a couple different forms of wildness: let's call them "temporal wildness" (we currently live at an unusually notable time) and "structural wildness" (the world is intuitively wild; the human trajectory is intuitively wild).[1]

I think I still don't see the relevance of "structural wildness" for evaluating fishiness arguments. As a silly example: Quantum mechanics is pretty intuitively wild, but the fact that we live in a world where QM is true doesn't seem to substantially undermine fishiness arguments. I think I do see, though, how claims about temporal wildness might be relevant.

I wonder if this kind of argument feels approximately right to you (or to Holden): Fishiness arguments can obviously still be applied to the hypothesis presented in Step 1, in the usual way. But maybe the difference, here, is that the standard arguments/evidence that lend credibility to the more conservative hypothesis "The HoH will happen within the next 10,000 years" are just pretty obviously robust -- which makes it easier to overcome a low prior. Then, once we've established the plausibility of the more conservative hypothesis, we can sort of back-chain and use it to bump up our prior in the Strong HoH Hypothesis.

[1] I suppose it also evokes an epistemic notion of wildness, when it describes certain confidence levels as "wild," but I take it that "wild" here is mostly just a way of saying "irrational"?
Taboo "Outside View"

I do think my main impression of insect <-> simulated robot parity comes from very fuzzy evaluations of insect motor control vs simulated robot motor control (rather than from any careful analysis, of which I'm a bit more skeptical though I do think it's a relevant indicator that we are at least trying to actually figure out the answer here in a way that wasn't true historically). And I do have only a passing knowledge of insect behavior, from watching youtube videos and reading some book chapters about insect learning. So I don't think it's unfair to put it in the same reference class as Rodney Brooks' evaluations to the extent that his was intended as a serious evaluation.

abergal · 3 points · 1y
Yeah, FWIW I haven't found any recent claims about insect comparisons particularly rigorous.
Taboo "Outside View"

The Nick Bostrom quote (from here) is:

In retrospect we know that the AI project couldn't possibly have succeeded at that stage. The hardware was simply not powerful enough. It seems that at least about 100 Tops is required for human-like performance, and possibly as much as 10^17 ops is needed. The computers in the seventies had a computing power comparable to that of insects. They also achieved approximately insect-level intelligence.

I would have guessed this is just a funny quip, in the sense that (i) it sure sounds like it's just a throw-away quip, no e... (read more)

Ben Garfinkel · 7 points · 1y
I think that's good push-back and a fair suggestion: I'm not sure how seriously the statement in Nick's paper was meant to be taken. I hadn't considered that it might be almost entirely a quip. (I may ask him about this.)

Moravec's discussion in Mind Children is similarly brief: He presents a graph [https://drive.google.com/file/d/1JdZTDwmIkwsG0hHM6-DKVdkbRchjg6kq/view?usp=sharing] of the computing power of different animals' brains and states that "lab computers are roughly equal in power to the nervous systems of insects." He also characterizes current AI behaviors as "insectlike" and writes: "I believe that robots with human intelligence will be common within fifty years. By comparison, the best of today's machines have minds more like those of insects than humans. Yet this performance itself represents a giant leap forward in just a few decades." I don't think he's just being quippy, but there's also no suggestion that he means anything very rigorous/specific by his suggestion.

Rodney Brooks, I think, did mean for his comparisons to insect intelligence to be taken very seriously. The idea of his "nouvelle AI program [https://en.wikipedia.org/wiki/Nouvelle_AI]" was to create AI systems that match insect intelligence, then use that as a jumping-off point for trying to produce human-like intelligence. I think walking and obstacle navigation, with several legs, was used as the main dimension of comparison. The Brooks case is a little different, though, since (IIRC) he only claimed that his robots exhibited important aspects of insect intelligence or fell just short of insect intelligence, rather than directly claiming that they actually matched insect intelligence. On the other hand, he apparently felt he had gotten close enough to transition to the stage of the project [https://www.wired.com/1994/12/cog/] that was meant to go from insect-level stuff to human-level stuff.

A plausible reaction to these cases, then, might be: I think there's something to this reactio
Issues with Using Willingness-to-Pay as a Primary Tool for Welfare Analysis

Ironically, although cost-benefit analysts generally ignore the diminishing marginal benefit of money when they are aggregating value across people at a single date, their main case for discounting future commodities is founded on this diminishing marginal benefit. 

I think the "main" (i.e. econ 101) case for time discounting (for all policy decisions other than determining savings rates) is roughly the one given by Robin here

I don't think there is a big incongruity here. Questions about diminishing returns to wealth become relevant when trying ... (read more)

Issues with Using Willingness-to-Pay as a Primary Tool for Welfare Analysis

For governments who have the option to tax, WTP has obvious relevance as a way of comparing a policy to a benchmark of taxation+redistribution. I tentatively think that an idealized state (representing any kind of combination of its constituents' interests) ought to use a WTP analysis for almost all of its policy decisions. I wrote some opinionated thoughts here.

It's less clear if this is relevant for a realistic state, and the discussion becomes more complex. I think it depends on a question like "what is the role of cost-effectiveness analysis in context... (read more)

Draft report on existential risk from power-seeking AI

A 5% probability of disaster isn't any more or less confident/extreme/radical than a 95% probability of disaster; in both cases you're sticking your neck out to make a very confident prediction.

"X happens" and "X doesn't happen" are not symmetrical once I know that X is a specific event. Most things at the level of specificity of "humans build an AI that outmaneuvers humans to permanently disempower them" just don't happen.

The reason we are even entertaining this scenario is because of a special argument that it seems very plausible. If that's all you've g... (read more)

Dutch anti-trust regulator bans pro-animal welfare chicken cartel

Is your impression that if customers were willing to pay for it, then that wouldn't be sufficient cause to say that it benefited customers? (Does that mean that e.g. a standard ensuring that children's food doesn't cause discomfort also can't be protected, since it benefits customers' kids rather than customers themselves?)

Tsunayoshi · 3 points · 1y
No, my impression is that willingness to pay is a sufficient but not necessary condition to conclude that an industry standard benefits customers. A different sufficient condition would be an assessment of the effects of the standard by the regulators in terms of welfare. I assume that is the reason why the regulators in this case carried out an analysis of the welfare benefits, because why even do so if willingness-to-pay is the only factor? More speculatively, I would guess that Dutch regulators also take into account welfare improvements to other humans, and would not strike down an industry standard for safe food (if the standard actually contributed to safety).
Dutch anti-trust regulator bans pro-animal welfare chicken cartel

These cases are also interesting for alignment agreements between AI labs, and it's interesting to see it playing out in practice. Cullen wrote about this here much better than I will.

Roughly speaking, if individual consumers would prefer to use a riskier AI (because costs are externalized) then it seems like an agreement to make AI safer-but-more-expensive would run afoul of the same principles as this chicken-welfare agreement.

On paper, there are some reasons that the AI alignment case should be easier than the chicken-welfare case: (i) using unsafe AI hurt... (read more)

Tsunayoshi · 9 points · 1y
I think you might have an incorrect impression of the ruling. The agreement was not just struck down because consumers seemed to not be willing to pay for it, but also because the ACM (on top (!) of the missing willingness to pay) decided that the agreement did not benefit consumers by the nature of the improvements (clearly, most of the benefit goes to the chickens). From the link: "In order to qualify for an exemption from the prohibition on cartels under the Dutch competition regime it is necessary that the benefits passed on to the consumers exceed the harm inflicted upon them under agreements."
Alternatives to donor lotteries

I guess I wouldn't recommend the donor lottery to people who wouldn't be happy entering a regular lottery for their charitable giving

Strong +1.

If I won a donor lottery, I would consider myself to have no obligation whatsoever towards the other lottery participants, and I think many other lottery participants feel the same way. So it's potentially quite bad if some participants are thinking of me as an "allocator" of their money. To the extent there is ambiguity in the current setup, it seems important to try to eliminate that.

HaydnBelfield · 7 points · 1y
Interesting! I would feel I had been quasirandomly selected to allocate our shared pool of donations - and would definitely feel some obligation/responsibility. As evidence that other people feel the same way, I would point to the extensive research and write-ups that previously selected allocators have done. A key explanation for why they've done that is a sense of obligation/responsibility for the group.
[Link post] Are we approaching the singularity?
  1. I think that acceleration is autocorrelated---if things are accelerating rapidly at time T they are also more likely to be accelerating rapidly at time T+1. That's intuitively pretty likely, and it seems to show up pretty strongly in the data. Roodman makes no attempt to model it, in the interest of simplicity and analytical tractability. We are currently in a stagnant period, and so I think you should expect continuing stagnation. I'm not sure exactly how large the effect is (and obviously it depends on the model), but I think it's at least a 20-40 year de
... (read more)
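A toy simulation of the autocorrelation point (my illustrative model with invented parameters, not Roodman's): when growth shocks are persistent, starting from a stagnant period pushes takeoff decades later; with i.i.d. shocks the stagnation is forgotten immediately.

```python
import numpy as np

rng = np.random.default_rng(0)

def median_years_to_takeoff(rho, z0=-1.0, n_paths=1000, max_years=5000):
    """Toy hyperbolic-ish growth: output y grows at rate 0.02 * y**0.1 * exp(z),
    where z is an AR(1) shock with persistence rho. z0 < 0 means we start in a
    stagnant (below-trend) period. Returns median years until 10 doublings."""
    times = []
    for _ in range(n_paths):
        y, z, t = 1.0, z0, 0
        while y < 2**10 and t < max_years:
            z = rho * z + np.sqrt(1 - rho**2) * rng.normal(0.0, 0.5)
            y *= 1 + 0.02 * y**0.1 * np.exp(z)
            t += 1
        times.append(t)
    return np.median(times)

print(median_years_to_takeoff(rho=0.0))   # i.i.d. shocks: initial stagnation barely matters
print(median_years_to_takeoff(rho=0.95))  # persistent shocks: takeoff typically comes later
```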
[Link post] Are we approaching the singularity?

The relevant section is VII. Summarizing the six empirical tests:

  1. You'd expect productivity growth to accelerate as you approach the singularity, but it is slowing.
  2. The capital share should approach 100% as you approach the singularity. The share is growing, but at the slow rate of ~0.5%/year. At that rate it would take roughly 100 years to approach 100% (see the arithmetic below).
  3. Capital should get very cheap as you approach the singularity. But capital costs (outside of computers) are falling relatively slowly.
  4. The total stock of capital should get large as you approach the singulari
... (read more)
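For test 2, the implied arithmetic is simple if the ~0.5%/year is read as percentage points per year; the current-share figure below is my assumption, not a number from the paper:

$$\frac{100\% - 50\%}{0.5\ \text{pp/yr}} \approx 100\ \text{years}.$$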

I think that if I were going on outside-view economic arguments I'd probably be <50% singularity by 2100.

To what extent is this a repudiation of Roodman's outside-view projection? My guess is you'd say something like "This new paper is more detailed and trustworthy than Roodman's simple model, so I'm assigning it more weight, but still putting a decent amount of weight on Roodman's being roughly correct and that's why I said <50% instead of <10%."

Thanks for outlining the tests.

I'm not really sure what he thinks the probability of the singularity before 2100 is. My reading was that he probably doesn't think that, given his tests, the singularity is (e.g.) >10% likely before 2100. 2 of the 7 tests suggest the singularity after 100 years and 5 of them fail. It might be worth someone asking him for his view on that.

Three Impacts of Machine Intelligence

If the market can't price 30-year cashflows, it can't price anything, since for any infinitely-lived asset (eg stocks!), most of the present-discounted value of future cash flows is far in the future. 

If an asset pays me far in the future,  then long-term interest rates are one factor affecting its price. But it seems to me that in most cases that factor still explains a minority of variation in prices (and because it's a slowly-varying factor it's quite hard to make money by predicting it).

For example, there is a ton of uncertainty about how muc... (read more)

Three Impacts of Machine Intelligence

I think the market just doesn't put much probability on a crazy AI boom anytime soon. If you expect such a boom then there are plenty of bets you probably want to make. (I am personally short US 30-year debt, though it's a very small part of my AI-boom portfolio.)

I think it's very hard for the market to get 30-year debt prices right because the time horizons are so long and they depend on super hard empirical questions with ~0 feedback. Prices are also determined by supply and demand across a truly huge number of traders, and making this trade locks up y... (read more)

basil.halperin · 1 point · 1y
Agreed re: "mispricing = restatement that this is a contrarian position" -- but to push back on your "lack of feedback" point: If the market can't price 30-year cashflows, it can't price anything, since for any infinitely-lived asset (eg stocks!), most of the present-discounted value of future cash flows is far in the future. See eg this Ralph Koijen thread and linked paper [https://twitter.com/rkoijen/status/1291478305510694915], "the first 10 years of dividends only make up ~20% of the value of the stock market. 80% is due to value of cash flows beyond 10 years" (I wonder how big EMH proponents like Hanson and Yudkowsky explain the dissonance.)
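The Koijen figure drops out of a simple Gordon-growth sketch; the discount and growth rates below are my stand-in numbers, not from the paper:

```python
# Fraction of a stock's present value that comes from dividends beyond year T,
# in a Gordon growth model: each year's discounted dividend shrinks by
# x = (1+g)/(1+r), so the share of value beyond T is x**T.
r, g, T = 0.08, 0.055, 10      # assumed discount rate, dividend growth, horizon
x = (1 + g) / (1 + r)
print(f"{x**T:.0%} of value from dividends beyond year {T}")  # ~79%
```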
AGB's Shortform

Some scattered thoughts (sorry for such a long comment!). Organized in order rather than by importance---I think the most important argument for me is the analogy to computers.

  • It's possible to write "Humanity survives the next billion years" as a conjunction of a billion events (humanity survives year 1, and year 2, and...). It's also possible to write "humanity goes extinct next year" as a conjunction of a billion events (Alice dies, and Bob dies, and...). Both of those are quite weak prima facie justifications for assigning high confidence. You could say
... (read more)
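A toy numerical version of the first bullet (numbers invented for illustration): once the billion "survive year t" events share a common cause, the conjunction stops forcing astronomical confidence in either direction.

```python
import numpy as np

years = 1_000_000_000
# Mixture prior over a single per-year extinction hazard h shared by all years.
hazards = np.array([1e-2, 1e-6, 0.0])
prior   = np.array([0.5, 0.3, 0.2])

# P(survive all years | h) = (1-h)^years: ~0 for both positive hazards, 1 for h=0.
p_survive = float(prior @ (1 - hazards) ** years)
print(p_survive)  # ~0.2 -- set by the weight on "hazard is essentially zero",
                  # not by multiplying a billion independent near-1 factors.
```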
AGB · 6 points · 1y
Thanks for the long comment, this gives me a much richer picture of how people might be thinking about this.

On the first two bullets: You say you aren't anchoring, but in a world where we defaulted to expressing probability in 1/10^6 units called Ms I'm just left feeling like you would write "you should be hesitant to assign 999,999M+ probabilities without a good argument. The burden of proof gets stronger and stronger as you move closer to 1, and 1,000,000 is getting to be a big number." So if it's not anchoring, what calculation or intuition is leading you to specifically 99% (or at least, something in that ballpark), and would similarly lead you to roughly 990,000M with the alternate language?

My reply to Max and your first bullet both give examples of cases in the natural world where probabilities of real future events would go way outside the 0.01% - 99.99% range. Conjunctions force you to have extreme confidence somewhere; the only question is where. If I try to steelman your claim, I think I end up with an idea that we should apply our extreme confidence to the thing inside the product due to correlated cause, rather than the thing outside; does that sound fair?

The rest I see as an attempt to justify the extreme confidences inside the product, and I'll have to think about more. The following are gut responses:

I'm much more baseline cynical than you seem to be about people's willingness and ability to actually try, and try consistently, over a huge time period. To give some idea, I'd probably have assigned <50% probability to humanity surviving to the year 2150, and <10% for the year 3000, before I came across EA. Whether that's correct or not, I don't think it's wildly unusual among people who take climate change [https://thehill.com/business-a-lobbying/340884-poll-39-percent-think-its-likely-climate-change-will-cause-human] seriously*, and yet we almost certainly aren't doing enough to combat that as a society. This gives me little hope for dealing with
Against GDP as a metric for timelines and takeoff speeds

Scaling down all the amounts of time, here's how that situation sounds to me: US output doubles in 15 years (basically the fastest it ever has), then doubles again in 7 years. The end of the 7 year doubling is the first time that your hypothetical observer would say "OK yeah maybe we are transitioning to a new faster growth mode," and stuff started getting clearly crazy during the 7 year doubling. That scenario wouldn't be surprising to me. If that scenario sounds typical to you then it's not clear there's anything we really disagree about.

Moreover, it see

... (read more)
kokotajlod · 2 points · 2y
OK, thanks. I'm not sure how you calculated that but I'll take your word for it. My hypothetical observer is seeming pretty silly then -- I guess I had been thinking that the growth prior to 1700 was fast but not much faster than it had been at various times in the past, and in fact much slower than it had been in 1350 (I had discounted that, but if we don't, then that supports my point), so a hypothetical observer would be licensed to discount the growth prior to 1700 as maybe just catch-up + noise. But then by the time the data for 1700 comes in, it's clear a fundamental change has happened.

I guess the modern-day parallel would be if a pandemic or economic crisis depresses growth for a bit, and then there's a sustained period of growth afterwards in which the economy doubles in 7 years, and there's all sorts of new technology involved but it's still respectable for economists to say it's just catch-up growth + noise, at least until year 5 or so of the 7-year doubling. Is this fair?

There definitely wasn't 0.14% growth over 5000 years. But according to my data there was 0.12% in 700, 0.23% in 900, 0.11% in 1000 and 1100, 0.47% in 1350, and 0.21% in 1400. So 0.14% fits right in; 0.14% over a 500-year period is indeed more impressive, but not that impressive when there are multiple 100-year periods with higher growth than that worldwide (and thus presumably longer periods with higher growth, in cherry-picked locations around the world).

Anyhow, the important thing is how much we disagree, and maybe it's not much. I certainly think the scenario you sketch is plausible, but I think "faster" scenarios, and scenarios with more of a disconnect between GWP and PONR, are also plausible. Thanks to you I am updating towards thinking the historical case of the IR is less support for that second bit than I thought.
Against GDP as a metric for timelines and takeoff speeds

Some thoughts on the historical analogy:

If you look at the graph at the 1700 mark, GWP is seemingly on the same trend it had been on since antiquity. The industrial revolution is said to have started in 1760, and GWP growth really started to pick up steam around 1850. But by 1700 most of the Americas, the Philippines and the East Indies were directly ruled by European powers

I think European GDP was already pretty crazy by 1700. There's been a lot of recent arguing about the particular numbers and I am definitely open to just being wrong about this, but so ... (read more)

kokotajlod · 2 points · 2y
Thanks for the reply -- Yeah, I totally agree that GDP of the most advanced countries is a better metric than GWP, since presumably GDP will accelerate first in a few countries before it accelerates in the world as a whole. I think most of the points made in my post still work, however, even against the more reasonable metric of GDP-of-the-most-technologically-advanced-country. Moreover, I think even the point you were specifically critiquing still stands: If AI will be like the Industrial Revolution but faster, then crazy stuff will be happening pretty early on in the curve.

Here's the data I got from Wikipedia a while back on world GDP growth rates. Year is the column on the left, annual growth rate (extrapolated) is in the column on the right.

| Year | Years before 2020 | GWP | Annual growth rate (extrapolated) |
|------|-------------------|-------|-----------------------------------|
| 1700 | 320 | 99.8 | 0.40% |
| 1650 | 370 | 81.74 | 0.12% |
| 1600 | 420 | 77.01 | 0.27% |
| 1500 | 520 | 58.67 | 0.27% |
| 1400 | 620 | 44.92 | 0.21% |
| 1350 | 670 | 40.5 | 0.47% |
| 1300 | 720 | 32.09 | -0.21% |
| 1250 | 770 | 35.58 | -0.10% |
| 1200 | 820 | 37.44 | -0.06% |
| 1100 | 920 | 39.6 | 0.11% |
| 1000 | 1020 | 35.31 | 0.11% |
| 900 | 1120 | 31.68 | 0.23% |
| 800 | 1220 | 25.23 | 0.07% |
| 700 | 1320 | 23.44 | 0.12% |
| 600 | 1420 | 20.86 | 0.05% |
| 500 | 1520 | 19.92 | 0.08% |
| 400 | 1620 | 18.44 | 0.06% |
| 350 | 1670 | 17.93 | -0.02% |
| 200 | 1820 | 18.54 | 0.03% |
| 14 | 2006 | 17.5 | -0.43% |
| 1 | 2019 | 18.5 | 0.04% |
| -200 | 2220 | 17 | 0.03% |
| -400 | 2420 | 16.02 | 0.16% |
| -500 | 2520 | 13.72 | 0.12% |
| -800 | 2820 | 9.72 | 0.21% |

On this data at least, 1700 is the first time an observer would say "OK yeah maybe we are transitioning to a new faster growth mode" (assuming you discount 1350 as I do as an artefact of recovering from various disasters). Moreover, it seems to contradict your claim that 0.14% growth was already high by historical standards. (Your data was for population whereas mine is for GWP, maybe that accounts for the discrepancy.)

EDIT: Also, I picked 1700 as precisely the time when "Things seem to be blowing up" first became true. My point was that the point of no return was already past by then. To be fair, maybe my data is shitty.
Some thoughts on the EA Munich // Robin Hanson incident

I'm not sure what difference in prioritization this would imply or if we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse and so erosion of liberal norms is worthwhile for some people to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).

It doesn't currently seem like thinking or working on this issue should be a priority for me (eve... (read more)

I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn’t intend for the grandparent to be pushing against that.

I think this is the crux of the issue, where we have this pattern where I interpret your comments (here, and with various AI safety problems) as downplaying some problem that I think is important, or is likely to have that effect in other people's minds and thereby make them less likely to work on the problem, so I push back on that, but maybe you were just try... (read more)

Hiring engineers and researchers to help align GPT-3

My process was to check the "About the forum" link on the left hand side, see that there was a section on "What we discourage" that made no mention of hiring, then search for a few job ads posted on the forum and check that no disapproval was expressed in the comments of those posts.

Hiring engineers and researchers to help align GPT-3

I think that a scaled up version of GPT-3 can be directly applied to problems like "Here's a situation. Here's the desired result. What action will achieve that result?" (E.g. you can already use it to get answers like "What copy will get the user to subscribe to our newsletter?" and we can improve performance by fine-tuning on data about actual customer behavior or by combining GPT-3 with very simple search algorithms.)

I think that if GPT-3 was more powerful then many people would apply it to problems like that. I'm conc... (read more)
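An illustrative prompt shape for the "situation → desired result → action" pattern described above (my toy example, not OpenAI's API or a method from the post):

```python
# A completion model continues this with a proposed action; fine-tuning on
# actual outcome data, or sampling several candidate actions and scoring them
# with a simple search loop, pushes it toward goal-directed behavior.
prompt = (
    "Situation: Visitors frequently abandon our newsletter signup page.\n"
    "Desired result: More visitors subscribe.\n"
    "Action:"
)
```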

ShayBenMoshe · 8 points · 2y
Thanks for the response. I believe this answers the first part, why GPT-3 poses an x-risk specifically. Did you or anyone else ever write what aligning a system like GPT-3 looks like? I have to admit that it's hard for me to even have a definition of being (intent) aligned for a system like GPT-3, which is not really an agent on its own. How do you define or measure something like this?
Hiring engineers and researchers to help align GPT-3

No, I'm talking somewhat narrowly about intent alignment, i.e. ensuring that our AI system is "trying" to do what we want. We are a relatively focused technical team, and a minority of the organization's investment in safety and preparedness.

The policy team works on identifying misuses and developing countermeasures, and the applied team thinks about those issues as they arise today.

Some thoughts on the EA Munich // Robin Hanson incident
The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and pub
... (read more)

To followup on this, Paul and I had an offline conversation about this, but it kind of petered out before reaching a conclusion. I don't recall all that was said, but I think a large part of my argument was that "jumping ship" or being forced off for ideological reasons was not "fine" when it happened historically, for example communists from Hollywood and conservatives from academia, but represented disasters (i.e., very large losses of influence and resources) for those causes. I'm not sure if this changed Paul's mind.

Does Economic History Point Toward a Singularity?

Thanks, super helpful.

(I don't really buy an overall take like "It seems unlikely" but it doesn't feel that mysterious to me where the difference in take comes from. From the super zoomed out perspective 1200 AD is just yesterday from 1700AD, it seems like random fluctuations over 500 years are super normal and so my money would still be on "in 500 years there's a good chance that China would have again been innovating and growing rapidly, and if not then in another 500 years it's reasonably likely..." It makes sense to describe that situation as "nowhere close to IR" though. And it does sound like the super fast growth is a blip.)

Does Economic History Point Toward a Singularity?

I took numbers from Wikipedia but have seen different numbers that seem to tell the same story although their quantitative estimates disagree a ton.

The first two numbers are both higher than growth rates could have plausibly been in a sustained way during any previous part of history (and the 0-1000AD one probably is as well), and they se... (read more)

Does Economic History Point Toward a Singularity?

If one believed the numbers on wikipedia, it seems like Chinese growth was also accelerating a ton and it was not really far behind on the IR, such that I wouldn't expect to be able to easily eyeball the differences.

If you are trying to model things at the level that Roodman or I are, the difference between 1400 and 1600 just isn't a big deal, the noise terms are on the order of 500 years at that point.

So maybe the interesting question is if and why scholars think that China wouldn't have had an IR shortly after Europe (i.e. within a few cen... (read more)

If one believed the numbers on wikipedia, it seems like Chinese growth was also accelerating a ton and it was not really far behind on the IR, such that I wouldn't expect to be able to easily eyeball the differences.

I believe the population surge is closely related to the European population surge: it's largely attributed to the Columbian exchange + expanded markets/trade. One of the biggest things is that there's an expansion in the land under cultivation, since potatoes and maize can be grown on marginal land that wouldn't otherwise work well for rice... (read more)

Does Economic History Point Toward a Singularity?
My model is that most industries start with fast s-curve like growth, then plateau, then often decline

I don't know exactly what this means, but it seems like most industries in the modern world are characterized by relatively continuous productivity improvements over periods of decades or centuries. The obvious examples to me are semiconductors and AI since I deal most with those. But it also seems true of e.g. manufacturing, agricultural productivity, batteries, construction costs. It seems like industries where the productivity vs time curve is a... (read more)

it seems like most industries in the modern world are characterized by relatively continuous productivity improvements over periods of decades or centuries

This agrees with my impression. Just in case someone is looking for references for this, see e.g.:

  • Nagy et al. (2013) - several of the trends they look at, e.g. prices for certain chemical substances, show exponential growth for more than 30 years
  • Farmer & Lafond (2016) - similar to the previous paper, though fewer trends with data from more than 20 years
  • Bloom et al. (2020) - reviews trends in research
... (read more)
Does Economic History Point Toward a Singularity?

It feels like you are drawing some distinction between "contingent and complicated" and "noise." Here are some possible distinctions that seem relevant to me but don't actually seem like disagreements between us:

  • If something is contingent and complicated, you can expect to learn about it with more reasoning/evidence, whereas if it's noise maybe you should just throw up your hands. Evidently I'm in the "learn about it by reasoning" category since I spend a bunch of time thinking about AI forecasting.
  • If something
... (read more)
Does Economic History Point Toward a Singularity?

I think Roodman's model implies a standard deviation of around 500-1000 years for IR timing starting from 1000AD, but I haven't checked. In general for models of this type it seems like the expected time to singularity is a small multiple of the current doubling time, with noise also being on the order of the doubling time.

The model clearly underestimates correlations and hence the variance here---regardless of whether we go in for "2 revolutions" or "randomly spread out" we can all agree that a stagnant doubling is more likel... (read more)
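A minimal deterministic sketch of why the expected time to singularity is a small multiple of the current doubling time (a stylized special case, not Roodman's actual stochastic model): if the growth rate is proportional to output,

$$\dot{y} = a y^2 \quad\Longrightarrow\quad y(t) = \frac{y_0}{1 - a y_0 t},$$

which blows up at $t^* = 1/(a y_0)$, exactly the instantaneous e-folding time at $t = 0$. Noise in the growth rate then naturally perturbs the blow-up date on that same timescale.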

djbinder · 5 points · 2y
I'm curious what numbers you are using for Europe's growth between 1000-1700; I didn't think European growth over that period was particularly unusual. It is worth remembering that Europe in 1000 (particularly northern Europe) was a backwater and so benefitted from catchup growth relative to (say) China. I also don't know how much of European growth was driven by extensive growth in eastern Europe, which doesn't seem to be relevant to the great divergence.

Arguments against the idea that Europe c1700 was technologically ahead of the rest of Eurasia (or at least, China) are common in the great divergence literature. A good recent discussion is Chapter 16 of A Culture of Growth by Mokyr; he discusses various similarities and differences between the two regions around 1700. For detailed discussion focussed on military questions, see The Gunpowder Age by Andrade and Why did Europe conquer the world? by Hoffman, both of which argue that the gap between European and Chinese military technology was not very large during the 1600s.

For what it's worth I think European development was distinct from previous economic efflorescences in so far as it took place in the context of a fractured political landscape. Most other examples (Rome, Abbasid caliphate, many Chinese dynasties) seem to be driven by political unification allowing the growth and diversification of markets; a discussion focussed on the Roman example can be found in The Roman Market Economy by Temin. This seems different to the situation in Europe c1700.

For what it's worth it seems to me that the most plausible explanations for the great divergence are rooted in European fragmentation. This allowed a number of different economic, political, and cultural arrangements to be explored while competitive pressure encouraged more efficient institutions to be adopted. A recent discussion of this can be found in Escape from Rome by Scheidel, but the argument is made in many other places and underpins a number of oth
Ben Garfinkel · 4 points · 2y
[Caveat to all of the below is that these are vague impressions, based on scattered reading. I invite anyone with proper economic history knowledge to please correct me.]

I'm reasonably sympathetic to the first possibility. I think it’s somewhat contentious whether Europe or China was more ‘developed’ in 1700. In either case, though, my impression is that the state of Europe in 1700 was non-unprecedented along a number of dimensions.

The error bars are still pretty large here, but it’s common to estimate that Europe’s population increased by something like 50% between 1500 and 1700. (There was also probably a surge between something like 1000AD and 1300AD, as Western Europe sort of picked itself back up from a state of collapse, although I think the actual numbers are super unclear. Then the 14th century has famine and the Black Death, which Europe again needs to recover from.) Something like a 50% increase over a couple centuries definitely couldn’t have been normal, but it’s also not clearly unprecedented. It seems like population levels in particular regions tended to evolve through a series of surges and contractions.

We don't really know these numbers -- although, I think, they’re at least inspired by historical records -- but the McEvedy/Jones estimates show a 100% population increase in two centuries during the Song Dynasty (1000AD - 1200AD). We super don't know most of these numbers, but it seems conceivable that other few-century efflorescences were associated with similar overall growth rates: for example, the Abbasid Caliphate, the Roman Republic/Empire during its rise, the Han dynasty, the Mediterranean in the middle of the first century BCE.

These numbers are also presumably sketchy, but England’s estimated GDP-per-capita in 1700AD was also roughly the same as China’s estimated GDP-per-capita in 1000AD (according to a chart in British Economic Growth, 1270-1870); England is also thought to have been richer than other European states, with the exceptio
Does Economic History Point Toward a Singularity?
I think that Hanson's "series of 3 exponentials" is the neatest alternative, although I also think it's possible that pre-modern growth looked pretty different from clean exponentials (even on average / beneath the noise). There's also a semi-common narrative in which the two previous periods exhibited (on average) declining growth rates, until there was some 'breakthrough' that allowed the growth rate to surge: I suppose this would be a "three s-curve" model. Then there's the possibility that the growth pa
... (read more)
Ben Garfinkel · 5 points · 2y
I agree the richer stories, if true, imply a more ignorant perspective. I just think it's plausible that the more ignorant perspective is the correct perspective. My general feeling towards the evolution of the economy over the past ten thousand years, reading historical analysis, is something like: “Oh wow, this seems really complex and heterogeneous. It’d be very surprising if we could model these processes well with a single-variable model, a noise term, and a few parameters with stable values.” It seems to me like we may in fact just be very ignorant.

Fossil fuels wouldn't be the cause of the higher global growth rates in the 1500-1800 period; coal doesn't really matter much until the 19th century. The story with fossil fuels is typically that there was a pre-existing economic efflorescence that supported England's transition out of an 'organic economy.' So it's typically a sort of tipping point story, where other factors play an important role in getting the economy to the tipping point.

I'm actually unsure of this. Something that's not clear to me is to what extent the distinction is being drawn in a post-hoc way (i.e. whether intensive agriculture is being implicitly defined as agriculture that kicks off substantial population growth). I don’t know enough about this.

I don't think I agree, although I’m not sure I understand your objection. Supposing we had accurate data, it seems like the best approach is running a regression that can accommodate either hyperbolic or exponential growth -- plus noise -- and then seeing whether we can reject the exponential hypothesis. Just noting that the growth rate must have been substantially higher than average within one particular millennium doesn’t necessarily tell us enough; there’s still the question of whether this is plausibly noise. Of course, though, we have very bad data here -- so I suppose this point doesn't matter too much either way.

You don’t need a story about why they changed at roughly the same time
Does Economic History Point Toward a Singularity?
because I have a bunch of very concrete, reasonably compelling sounding stories of specific things that caused the relevant shifts

Be careful that you don't have too many stories, or it starts to get continuous again.

More seriously, I don't know what the small # of factors are for the industrial revolution, and my current sense is that the story can only seem simple for the agricultural revolution because we are so far away and ignoring almost all the details.

It seems like the only factor that looks a priori like it should cause a discontinuity is... (read more)

I mean something much more basic. If you have more parameters then you need to have uncertainty about every parameter. So you can't just look at how well the best "3 exponentials" hypothesis fits the data, you need to adjust for the fact that this particular "3 exponentials" model has lower prior probability. That is, even if you thought "3 exponentials" was a priori equally likely to a model with fewer parameters, every particular instance of 3 exponentials needs to be less probable than every particular model with fewer parameters.

Thanks, this was a usef... (read more)

Does Economic History Point Toward a Singularity?

This would be an important update for me, so I'm excited to see people looking into it and to spend more time thinking about it myself.

High-level summary of my current take on your document:

  • I agree that the 1AD-1500AD population data seems super noisy.
  • Removing that data removes one of the datapoints supporting continuous acceleration (the acceleration between 10kBC - 1AD and 1AD-1500AD) and should make us more uncertain in general.
  • It doesn't have much net effect on my attitude towards continuous acceleration vs discontinuous jumps, this mostly pu
... (read more)
abergal · 3 points · 2y
I'm going to try and restate what's going on here, and I want someone to tell me if it sounds right:

  • If your prior is that growth rate increases happen on a timescale determined by the current growth rate, e.g. you're likely to have a substantial increase once every N doublings of output, you care more about later years in history when you have more doublings of output. This is what Paul is advocating for.
  • If your prior is that growth rate increases happen randomly throughout history, e.g. you're likely to have a substantial increase at an average rate of once every T years, all the years in history should have the same weight. This is what Ben has done in his regressions.

The more weight you start with on the former prior, the more strongly you should weight later time periods. In particular: If you start with a lot of weight on the former prior, then T years of non-accelerating data at the beginning of your dataset won't give you much evidence against it, because it won't correspond to many doublings. But T years of non-accelerating data at the end of your dataset would correspond to many doublings, so would be more compelling evidence against.
Ben Garfinkel · 7 points · 2y
Hi Paul,

Thanks for your super detailed comment (and your comments on the previous version)!

I think that Hanson's "series of 3 exponentials" is the neatest alternative, although I also think it's possible that pre-modern growth looked pretty different from clean exponentials (even on average / beneath the noise). There's also a semi-common narrative in which the two previous periods exhibited (on average) declining growth rates, until there was some 'breakthrough' that allowed the growth rate to surge: I suppose this would be a "three s-curve" model. Then there's the possibility that the growth pattern in each previous era was basically a hard-to-characterize mess, but was constrained by a rough upper bound on the maximum achievable growth rate. This last possibility is the one I personally find most likely, of the non-hyperbolic possibilities. (I think the pre-agricultural period is especially likely to be messy, since I would guess that human evolution and climate/environmental change probably explain the majority of the variation in population levels within this period.)

I think this is a good and fair point. I'm starting out sympathetic toward the breakthrough/phase-change perspective, in large part because this perspective fits well with the kinds of narratives that economic historians and world historians tend to tell. It's reasonable to wonder, though, whether I actually should give much weight to these narratives. Although they rely on much more than just world GDP estimates, their evidence base is also far from great, and they disagree on a ton of issues (there are a bunch of competing economic narratives that only partly overlap.)

A lot of my prior comes down to my impression that the dynamics of growth just *seem* very different to me for forager societies, agricultural/organic societies, and industrial/fossil-fuel societies. In the forager era, for example, it's possible that, for the majority of the period, human evolution was the main underlying

I feel really confused what the actual right priors here are supposed to be. I find the "but X has fewer parameters" argument only mildly compelling, because I feel like other evidence about similar systems that we've observed should easily give us enough evidence to overcome the difference in complexity. 

This does mean that a lot of my overall judgement on this question relies on the empirical evidence we have about similar systems, and the concrete gears-level models I have for what has caused growth. AI Impacts' work on discontinuous vs. continuous... (read more)

This is one of my favorite comments on the Forum. Thanks for the thorough response.

How Much Leverage Should Altruists Use?
This is only 2.4 standard deviations assuming returns follow a normal distribution, which they don't.

No, 2.4 standard deviations is 2.4 standard deviations.

It's possible to have distributions for which that's more or less surprising.

For a normal distribution, this happens about once every 200 periods. I totally agree that this isn't a factor of 200 evidence against your view. So maybe saying "falsifies" was too strong.

But no distribution is 2.35 standard deviations below its mean with probability more than 18%. That's lite... (read more)
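The 18% figure is presumably Chebyshev's inequality with $k = 2.35$:

$$\Pr[X \le \mu - k\sigma] \;\le\; \Pr[|X - \mu| \ge k\sigma] \;\le\; \frac{1}{k^2} = \frac{1}{2.35^2} \approx 18\%.$$

The one-sided Cantelli bound tightens this slightly, to $1/(1 + k^2) \approx 15\%$.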

How Much Leverage Should Altruists Use?

I haven't done a deep dive on this but I think futures are better than this analysis makes them look.

Suppose that I'm in the top bracket and pay 23% taxes on futures, and that my ideal position is 2x SPY.

In a tax-free account I could buy SPY and 1x SPY futures, to get (2x SPY - 1x interest).

In a taxable account I can buy 1x SPY and 1.3x SPY futures. Then my after-tax expected return is again (2x SPY - 1x interest).

The catch is that if I lose money, some of my wealth will take the form of taxable losses that I can use to offset gains in future yea... (read more)
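The arithmetic behind the 1.3x figure, as I read it (a sketch assuming a flat 23% tax on futures gains and fully usable taxable losses):

```python
# Each $1 of futures P&L keeps (1 - tax) after tax, so grossing up the
# notional by ~1/(1 - tax) restores the desired pre-tax exposure.
tax = 0.23
futures_notional = 1.3                      # ~1 / (1 - 0.23)
after_tax_exposure = futures_notional * (1 - tax)
total = 1.0 + after_tax_exposure            # 1x SPY held outright + futures
print(round(total, 2))                      # ~2.0x, i.e. (2x SPY - 1x interest)
```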

Wei_Dai · 3 points · 2y
This is a really interesting and counterintuitive idea, that I really like, but after thinking about it a lot, decided probably does not work. Here's my argument. For simplicity let's assume that I know for sure I'm going to die in 30 years[1] and I'm planning to donate my investment to a tax-exempt org at that point, and ignore dividends[2].

First, the reason I'm able to get a better expected return buying stocks instead of a 30-year government bond is that the market is compensating me for the risk that stocks will be worth less than the 30-year government bond at the end of 30 years. If that happens, I'm left with 0.3x more losses by buying 1.3x futures instead of 1x stock, but the tax offset I incurred is worth nothing because they go away when I die so they don't compensate me for the extra losses. (I don't think there's a way to transfer them to another person or entity?) So (compared to leveraged buy-and-hold) the futures strategy gives you equal gains if stocks do better than risk free return, but is 0.3x worse if stocks do worse than risk free return. Therefore leveraged buy-and-hold does seem to represent a significant free lunch (ultimately coming out of government pockets) compared to futures.

ETA: The situation is actually worse than this because there's a significant risk that during the 30 years the market first rises and then falls, so I end up paying taxes on capital gains during the rise, that later become taxable losses that become worthless when I die.

ETA2: To summarize/restate this in a perhaps more intuitive way, comparing 1x stocks with 1x futures, over the whole investment period stocks give you .3x more upside potential and the same or lower downside risk.

[1] Are you perhaps assuming that you'll almost certainly live much longer than that?

[2] Re: dividends, my understanding is that equity futures are a pure bet on stock prices and ignore dividends, but buying ETFs obviously does give you dividends, so (aside from taxes) equity futures
How Much Leverage Should Altruists Use?

I'm surprised by (and suspicious of) the claim about so many more international shares being non-tradeable, but it would change my view.

I would guess the savings rate thing is relatively small compared to the fact that a much larger fraction of US GDP is investable in the stock market---the US is 20-25% of world GDP, but the US is 40% of total stock market capitalization and I think US corporate profits are also ballpark 40% of all publicly traded corporate profits. So if everyone saved the same amount and invested in their home country, US equities would ... (read more)

How Much Leverage Should Altruists Use?

I also like GMP, and find the paper kind of surprising. I checked the endpoints stuff a bit and it seems like it can explain a small effect but not a huge one. My best guess is that going from equities to GMP is worth like +1-2% risk-free returns.

How Much Leverage Should Altruists Use?

I like the basic point about leverage and think it's quite robust.

But I think the projected returns for VMOT+MF are insane. And as a result the 8x leverage recommendation is insane; someone who does that is definitely just going to go broke. (This is similar to Carl's complaint.)

My biggest problem with this estimate is that it kind of sounds crazy and I don't know of very good evidence in favor. But it seems like these claimed returns are so high that you can also basically falsify them by looking at the data between when VMOT was founded and w... (read more)
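One way to run that kind of falsification check (my sketch, not from the comment; the claimed mean, volatility, realized return, and track-record length below are illustrative placeholders, not VMOT's actual figures):

```python
# Sketch: given a strategy's claimed annual mean return and volatility, how
# surprising is its realized performance since inception? Uses a normal
# approximation to annualized returns; all inputs are placeholders.
from math import sqrt
from statistics import NormalDist

claimed_mean = 0.09      # claimed annual return (e.g., the 9% discussed below)
claimed_vol = 0.15       # assumed annual volatility
realized_annual = -0.02  # hypothetical realized annualized return since inception
years = 4.0              # hypothetical live track record length

# The annualized return over `years` has standard error vol / sqrt(years).
z = (realized_annual - claimed_mean) / (claimed_vol / sqrt(years))
p = NormalDist().cdf(z)  # probability of doing this badly if the claim is right
print(f"z = {z:.2f}, P(result this bad | claim true) = {p:.1%}")
```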

2MichaelDickens2y
This is a totally reasonable objection, and I will try my best to respond. Sorry if this reply is a little disjointed/hard to follow.

To be clear, 8x leverage is not a recommendation; it is the result of a particular analysis with many limitations—I tried to cover the important ones in Caveats [https://forum.effectivealtruism.org/posts/g4oGNGwAoDwyMAJSB/how-much-leverage-should-altruists-use#caveats]. In light of these caveats, 8x leverage does not seem reasonable.

That said, I disagree that the projected returns are insane. I agree that they look insane, and when I was writing this, I had some tension between trying to sound sane and representing my true beliefs, and I decided that the latter was more important. I don't overly trust backtests, but I trust the process behind VMOT, which is (part of the) reason to believe the cited backtest is reflective of the strategy's long-term performance.[2]

VMOT projected returns were based on a 20-year backtest, but you can find similar numbers by looking at much longer data series (e.g., Value and Momentum Everywhere [https://www.aqr.com/Insights/Research/Journal-Article/Value-and-Momentum-Everywhere]). VMOT backtest gives higher expected returns than generic value/momentum backtests[1], and I believe this is not due to data-mining, but I don't think there's really an efficient way for me to justify this belief other than to say read the books (Quantitative Momentum [https://www.amazon.com/Quantitative-Value-Practitioners-Intelligent-Eliminating-ebook/dp/B00B1FK0AS] and Quantitative Value [https://www.amazon.com/Quantitative-Value-Practitioners-Intelligent-Eliminating-ebook/dp/B00B1FK0AS]), which explain why the authors believe their particular implementations of momentum and value have (slightly) better expected return.

If you assume VMOT will have returns commensurate with a generic value/momentum strategy, you might get a lower expected return than 9%, but note that Research Affiliates' estimates for generic valu
How Much Leverage Should Altruists Use?
We could account for this by treating mean return and standard deviation as distributions rather than point estimates, and calculating utility-maximizing leverage across the distribution instead of at a single point. This raises a further concern that we don’t even know what distribution the mean and standard deviation have, but at least this gets us closer to an accurate model.

Why not just take the actual mean and standard deviation, averaging across the whole distribution of models?

What exactly is the "mean" you are quoting, if it's not your subjective expectation of returns?

(Also, I think the costs of choosing leverage wrong are pretty symmetric.)
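For concreteness, here's a minimal sketch of the two approaches being contrasted (my construction, with made-up numbers): optimizing leverage against expected utility averaged over a distribution of (mean, volatility) models, versus optimizing against a single point estimate.

```python
# Sketch: utility-maximizing leverage when the market's mean return and
# volatility are themselves uncertain. All inputs are illustrative.
import numpy as np

rng = np.random.default_rng(0)
rf = 0.01                                                # risk-free rate
models_mu = rng.normal(0.05, 0.02, 1000)                 # uncertain mean return
models_sigma = rng.normal(0.16, 0.03, 1000).clip(0.05)   # uncertain volatility

def expected_log_utility(lev, mu, sigma):
    # Approximate E[log wealth] for a portfolio with leverage `lev`,
    # under a given model (mu, sigma): mean minus half the variance.
    port_mu = rf + lev * (mu - rf)
    port_var = (lev * sigma) ** 2
    return port_mu - port_var / 2

levs = np.linspace(0, 8, 161)

# (a) average expected utility across the whole distribution of models
eu_mixture = [np.mean(expected_log_utility(l, models_mu, models_sigma)) for l in levs]
best_mixture = levs[np.argmax(eu_mixture)]

# (b) plug in the point estimates and optimize against them alone
best_point = levs[np.argmax(expected_log_utility(levs, 0.05, 0.16))]

print(f"optimal leverage, averaging over models: {best_mixture:.2f}")
print(f"optimal leverage, point estimate only:   {best_point:.2f}")
```

(For log utility the two answers land close together, which is roughly the point of the question above.)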

How Much Leverage Should Altruists Use?

My understanding is that the Sharpe ratio of the global portfolio is quite similar to that of the equity portfolio (e.g. see here for data on the period from 1960-2017, finding 0.36 for the global market and 0.37 for equities).

I still do expect the broad market to outperform equities alone, but I don't know where the super-high estimates for the benefits of diversification are coming from, and I expect the effect to be much more modest than the one described in the linked post by Ben Todd. Do you know what's up with the discrepancy? It could be about cho... (read more)

2Benjamin_Todd2y
My estimates came from the book Global Asset Allocation by Meb Faber. I expect it's less rigorous than the paper you link to, so I suppose we should trust the paper more. I did find the results of the paper pretty surprising though. It just makes a lot of intuitive sense that bonds will anticorrelate with equities during recessions and real assets will anticorrelate during inflation shocks, which should reduce the risk quite a bit. Also, all the other estimates I've seen show that adding bonds to an all-equity portfolio significantly increases Sharpe (usually from ~0.3 to ~0.4). (And also that adding real assets helps too, though these estimates are less common.)

I'm wondering if the period used in the paper might have been an unusually good time for equities. Meb Faber uses the period 1973-2013. The paper uses 1960 to 2017. 2017 was near a high for the equity market, whereas 2013 was more mid-cycle, which will favour equities. The 60s were a good time for equities, while the 70s were bad, so adding the 60s into the range will boost equities.

Ideally I'd also compare the percentages in each asset. One difference is that Faber's 'GAA' allocation includes 5% gold, which usually seems to improve Sharpe quite a bit, since gold was one of the only assets that did well in the 70s*. Faber also gets similar results with what he calls the 'Arnott Portfolio [https://mebfaber.com/2015/06/01/chapter-7-the-rob-arnott-portfolio/]' which doesn't include gold and is fairly in line with my estimate of the global capital portfolio, except using TIPs+REITs+commodities instead of private real estate. I'd also trust the GMP a lot more out of sample due to the theoretical underpinning.

*My theory for why gold helps Sharpe (even though it's only 1% of total wealth) is that the global capital portfolio includes a lot of private real estate that is not in the global portfolio of listed assets. This means most 'global market portfolios' are light on real assets, and adding some gold/TIPs/comm
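As a concreteness check on the "adding bonds raises Sharpe from ~0.3 to ~0.4" claim, here's the standard two-asset calculation (my sketch; the return, volatility, and correlation inputs are illustrative assumptions, not figures from either source):

```python
# Sketch: Sharpe ratio of an equity/bond mix, given assumed excess returns,
# volatilities, and correlation. All inputs are illustrative placeholders.
from math import sqrt

mu_eq, sig_eq = 0.05, 0.16    # equity excess return and volatility
mu_bd, sig_bd = 0.01, 0.05    # bond excess return and volatility
rho = -0.1                    # assumed equity/bond correlation

def sharpe(w_eq):
    w_bd = 1 - w_eq
    mu = w_eq * mu_eq + w_bd * mu_bd
    var = ((w_eq * sig_eq) ** 2 + (w_bd * sig_bd) ** 2
           + 2 * w_eq * w_bd * rho * sig_eq * sig_bd)
    return mu / sqrt(var)

print(f"100% equities: Sharpe = {sharpe(1.0):.2f}")   # ~0.31
print(f"60/40 mix:     Sharpe = {sharpe(0.6):.2f}")   # ~0.35
```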
How Much Leverage Should Altruists Use?
To use leverage, you will probably end up having to pay about 1% on top of short-term interest rates

Not a huge deal, but it seems like the typical overhead is about 0.3%:

... (read more)
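For scale (my arithmetic, assuming the spread applies only to the borrowed portion): at leverage $L$ with financing spread $s$, the drag on net asset value is

```latex
\[
\text{annual drag} = (L - 1)\, s
\qquad\Rightarrow\qquad
\begin{cases}
L = 2: & 1\% \text{ vs. } 0.3\% \text{ of NAV per year},\\
L = 3: & 2\% \text{ vs. } 0.6\% \text{ of NAV per year}.
\end{cases}
\]
```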
3Wei_Dai2y
Side note on tax considerations of financing methods (for investing in taxable accounts):

* With futures you are forced to realize capital gains or losses at the end of every year even if you hold the futures longer than that.
* With either box spread financing or margin loans, if you buy and hold investments that rise in value, you don't have to realize capital gains and can avoid paying capital gains taxes on them altogether if you donate those investments later.
* With box spread financing, the interest you pay appears in the form of capital losses (upon expiration of the box spread options, in other words the loan), which you can use to offset your capital gains if you have any, but can't reduce your other taxable income such as dividend or interest income (except by a small fixed amount each year).
* With margin loans, your interest expense is tax deductible but you have to itemize deductions (which means you give up your standard deductions).
* With futures, the interest you "pay" is baked into the amount of capital gains/losses you end up with.

I think (assuming the same implicit/explicit interest rates for all 3 financing methods) for altruists investing in taxable accounts, this means almost certainly avoiding futures, and considering going with margin loans over box spread financing if you have significant interest expenses and don't have a lot of realized capital gains each year that you can offset.

(Note that currently, possibly for a limited time, it's possible to lock in a 2.7-year interest rate using box options, around .6%, that is lower than IB's minimum interest rate, .75%, so the stated assumption doesn't hold.)
How Much Leverage Should Altruists Use?

I think it's pretty dangerous to reason "asset X has outperformed recently, so I expect it to outperform in the future." An asset can outperform because it's becoming more expensive, which I think is partly the case here.

This is most obvious in the case of bonds---if 30-year bonds from A are yielding 2%/year and then fall to 1.5%/year over a decade, while 30-year bonds from B are yielding 2%/year and stay at 2%/year, then it will look like the bonds from A are performing about twice as well over the decade. But this is a very bad reason... (read more)
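A rough illustration of that bond effect (my sketch: annual coupons, no reinvestment, and the 2% to 1.5% yield move from the example above):

```python
# Sketch: two 30-year bonds bought at par with 2% coupons. Over a decade,
# A's yield falls to 1.5% while B's stays at 2%. A *looks* like it performed
# much better, but the outperformance is just a one-off repricing.
def bond_price(coupon, years, yield_):
    # Price per 100 face value, annual coupons.
    pv_coupons = sum(coupon / (1 + yield_) ** t for t in range(1, years + 1))
    pv_face = 100 / (1 + yield_) ** years
    return pv_coupons + pv_face

# After 10 years, each bond has 20 years left to maturity.
price_a = bond_price(2, 20, 0.015)   # yield fell to 1.5% -> trades above par
price_b = bond_price(2, 20, 0.020)   # yield unchanged -> still at par

coupons = 10 * 2                     # 10 years of coupons (ignoring reinvestment)
gain_a = price_a + coupons - 100
gain_b = price_b + coupons - 100
print(f"A: total gain {gain_a:.1f} per 100; B: total gain {gain_b:.1f} per 100")
```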

2Wei_Dai2y
Thanks for engaging on this. I've been having trouble making up my mind about international equities, which is delaying my plan to leverage up (while hedging due to current market conditions), and it really helps to have someone argue the other side to make sure I'm not missing something.

Assuming EMH, A's yield would only have fallen if it has become less risky, so buying A isn't actually bad, unless also buying B provides diversification benefits. Applying this to stocks, we can say that under EMH buying only US stocks has no downsides unless international equities provide diversification benefits, and since they have been highly correlated in recent decades (after about 1990) we lose very little by buying only US stocks. Of course in the long run this high correlation between US and international equities can't last forever, but it seems to change slowly enough over time that I can just diversify into international equities when it looks like they've started to decorrelate.

US stock is 35% [https://finance.townhall.com/columnists/politicalcalculations/2019/07/24/foreign-ownership-of-us-stocks-n2550535] owned by non-US investors as of 2018 and has been going up recently. Meanwhile non-US stock is probably >90% owned by non-US investors (not sure how to find the data directly, but US investors only have 10% [https://www.nber.org/digest/may02/w8680.html] international equities in their stock portfolio). My interpretation is that non-US investors are still under-weighting US stocks but have reduced their bias recently and this contributed to US outperformance, and the trend can continue for a while longer before petering out.

A lot of my thinking here comes from observing that people in places like China have much higher savings rates, but it's a big hassle at best for them to invest in US stocks (due to anti-money laundering and tax laws) and many have just never even thought in that direction, so international investment opportunities have been exhausted to a gr
How worried should I be about a childless Disneyland?

My main point was that in any case what matters is the degree of alignment of the AI systems, not their consciousness. But I agree with what you are saying.

If our plan for building AI depends on having clarity about our values, then it's important to achieve such clarity before we build AI---whether that's clarity about consciousness, population ethics, what kinds of experience are actually good, how to handle infinities, weird simulation stuff, or whatever else.

I agree consciousness is a big ? in our axiology, though it's not clear if ... (read more)

1EdoArad3y
Paragraphs 2 and 3 make total sense to me. (Well, actually I guess that's because there are perhaps much more efficient ways of creating meaningful sentient lives than making human copies, which can result in much more value.)

Not sure that I understand you correctly in the last paragraph. Are you claiming that worlds in which AI is only aligned with some parts of our current understanding of ethics won't realize a meaningful amount of value? And should therefore be disregarded in our calculations, as we are betting on improving the chance of alignment with what we would want our ethics to eventually become?
How worried should I be about a childless Disneyland?

I don't think it matters that much (for the long-term) if the AI systems we build in the next century are conscious. What matters is how they think about what possible futures they can bring about.

If AI systems are aligned with us, but turned out not to be conscious or not very conscious, then they would continue this project of figuring out what is morally valuable and so bring about a world we'd regard as good (even though it likely contains very few minds that resemble either us or them).

If AI systems are conscious but not at all aligned with ... (read more)

3EdoArad3y
This argument presupposes that the resulting AI systems are either totally aligned with us (and our extrapolated moral values) or totally misaligned. If there is much room for successful partial alignment (say, maximising on some partial values we have), and we can do actual work to steer that to something which is better, then it may well be the case that we should work on that.

Specifically, if we imagine the AI systems maximising some hard-coded value (or something learned from a single database), then it seems easy to make a case for working on understanding what is morally valuable before working on alignment.

I'm sure that there are existing discussions on this question which I'm not familiar with. I'd be interested in relevant references.