Good to hear, thanks!
I've just edited the intro to say: it's not obvious to me one way or the other whether it's a big deal in the AI risk case. I don't think I know enough about the AI risk case (or any other case) to have much of an opinion, and I certainly don't think anything here is specific enough to come to a conclusion in any case. My hope is just that something here makes it easier for people who do know about particular cases to get started thinking through the problem.
If I had to make a guess about the AI risk case, I'd emphasize my conjecture...
Thanks for noting this. If in some case there is a positive level of capabilities for which P is 1, then we can just say that the level of capabilities denoted by C = 0 is the maximum level at which P is still 1. What does change, sort of, is that the constraint becomes not C ≥ 0 but C ≥ (something negative), but that doesn't really matter, since you'll never want to set C < 0 here anyway. I've added a note to clarify this.
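To spell out the relabeling (with $\bar{C}$ my own shorthand here, not notation from the paper): let $\bar{C} > 0$ be the maximum capability level at which $P$ is still 1 in the original units, and define $\tilde{C} = C - \bar{C}$. Then $P = 1$ for all feasible $\tilde{C} \le 0$, and the original constraint $C \ge 0$ becomes $\tilde{C} \ge -\bar{C}$, i.e. "$\tilde{C} \ge$ (something negative)". Since $P$ is already 1 at $\tilde{C} = 0$, nothing is gained by setting $\tilde{C} < 0$, so the looser constraint never binds.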
Maybe a thought here is that, since there is some stretch of capabilities along which P=1, we should think that P(.) is horizontal around...
Hey David, I've just finished a rewrite of the paper which I'm hoping to submit soon, which I hope does a decent job of both simplifying it and making clearer what the applications and limitations are: https://philiptrammell.com/static/Existential_Risk_and_Growth.pdf
Presumably the referees will constitute experts on the growth front at least (if it's not desk rejected everywhere!), though the new version is general enough that it doesn't really rely on any particular claims about growth theory.
Hold on, just to try wrapping up the first point--if by "flat" you meant "more concave", why do you say "I don't see how [uncertainty] could flatten out the utility function. This should be in 'Justifying a more cautious portfolio'"?
Did you mean in the original comment to say that you don't see how uncertainty could make the utility function more concave, and that it should therefore also be filed under "Justifying a riskier portfolio"?
I can't speak for Michael of course, but as covered throughout the post, I think that the existing EA writing on this topic has internalized the pro-risk-tolerance points (e.g. that some other funding will be coming from uncorrelated sources) quite a bit more than the anti-risk-tolerance points (e.g. that some of the reasons that many investors seem to value safe investments so much, like "habit formation", could apply to philanthropists to some extent as well). If you feel you and some other EAs have already internalized the latter more than the former, t...
I agree! As noted under Richard’s comment, I’m afraid my only excuse is that the points covered are scattered enough that writing a short, accessible summary at the top was a bit of a pain, and I ran out of time before I could make it work. (And I won’t be free again for a while…)
If you or anyone else reading this manages to write one in the meantime, send it over and I’ll stick it at the top.
Hi Peter, thanks again for your comments on the draft! I think they improved it a lot. And sorry for the late reply here—just got back from vacation.
I agree that the cause variety point includes what you might call “sub-cause variety” (indeed, I changed the title of that bit from “cause area variety” to “cause variety” for that reason). I also agree that it’s a really substantial consideration: one of several that can single-handedly swing the conclusion. I hope you/others find the simple model of Appendix C helpful for starting to quantify just how substant...
Hi, sorry for the late reply--just got back from vacation.
As with most long posts, I expect this post has whatever popularity it has not because many people read it all, but because they skimmed parts, thought they made sense, and felt the overall message resonated with their own intuitions. Likewise, I expect your comment has whatever popularity it has because its readers have different intuitions, and because it looks on a skim as though you’ve shown that a careful reading of the post validates those intuitions instead…! But who knows.
Since there are hard...
Thanks!
No actually, we’re not assuming in general that there’s no secret information. If other people think they have the same prior as you, and think you’re as rational as they are, then the mere fact that they see you disagreeing with them should be enough for them to update on. And vice-versa. So even if two people each have some secret information, there’s still something to be explained as to why they would have a persistent public disagreement. This is what makes the agreement theorem kind of surprisingly powerful.
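For reference, the formal result I have in mind is Aumann's agreement theorem (the standard statement, nothing specific to this post): if two agents share a common prior, and at some state it is common knowledge that agent 1's posterior probability for an event $E$ is $q_1$ and agent 2's is $q_2$, then $q_1 = q_2$. So a commonly known, persistent disagreement means one of the assumptions (the common prior, or common knowledge of each other's rationality) must fail, even when each agent holds private information the other lacks.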
The point I’m making here though is ...
Thanks! Glad to hear you found the framing new and useful, and sorry to hear you found it confusingly written.
On the point about "EA tenets": if you mean normative tenets, then yes, how much you want to update on others' views on that front might be different from how much you want to update on others' empirical beliefs. I think the natural dividing line here would be whether you consider normative tenets more like beliefs (in which case you update when you see others disagreeing--along the lines of this post, say) or more like preferences (in which case y...
The probability of success in some project may be correlated with value conditional on success in many domains, not just ones involving deference, and we typically don’t think that gets in the way of using probabilities in the usual way, no? If you’re wondering whether some corner of something sticking out of the ground is a box of treasure or a huge boulder, maybe you think that the probability you can excavate it is higher if it’s the box of treasure, and that there’s only any value to doing so if it is. The expected value of trying to excavate is P(trea...
I'm a bit confused by this. Suppose that EA has a good track record on an issue where its beliefs have been unusual from the get-go.... Then I should update towards deferring to EAs
I'm defining a way of picking sides in disagreements that makes more sense than giving everyone equal weight, even from a maximally epistemically modest perspective. The way in which the policy "give EAs more weight all around, because they've got a good track record on things they've been outside the mainstream on" is criticizable on epistemic modesty grounds is that one could ...
Would you have a moment to come up with a precise example, like the one at the end of my “minimal solution” section, where the argument of the post would justify putting more weight on community opinions than seems warranted?
No worries if not—not every criticism has to come with its own little essay—but I for one would find that helpful!
Hey, I think this sort of work can be really valuable—thanks for doing it, and (Tristan) for reaching out about it the other day!
I wrote up a few pages of comments here (initially just for Tristan but he said he'd be fine with me posting it here). Some of them are about nitpicky typos that probably won't be of interest to anyone but the authors, but I think some will be of general interest.
Despite its length, even this batch of comments just consists of what stood out on a quick skim; there are whole sections (especially of the appendix) that I've barely r...
By the way, someone wrote this Google doc in 2019 on "Stock Market prediction of transformative technology". I haven't taken a look at it in years, and neither has the author, so understandably enough, they're asking to remain nameless to avoid possible embarrassment. But hopefully it's at least somewhat relevant, in case anyone's interested.
Thanks for writing this! I think market data can be a valuable source of information about the probability of various AI scenarios--along with other approaches, like forecasting tournaments, since each has its own strengths and weaknesses. I think it’s a pity that relatively little has yet been written on extracting information about AI timelines from market data, and I’m glad that this post has brought the idea to people’s attention and demonstrated that it’s possible to make at least some progress.
That said, there is one broad limitation to this analysis...
Briefly, to reiterate / expand on a point made by a few other comments: I think the title is somewhat misleading, because it conflates expecting aligned AGI with expecting high growth. People could be expecting aligned AGI but (correctly or incorrectly) not expecting it to dramatically raise the growth rate.
This divergence in expectations isn’t just a technical possibility; a survey of economists attending the NBER conference on the economics of AI last year revealed that most of them do not expect AGI, when it arrives, to dramatically raise the growth rate. The survey should be out in a few weeks, and I’ll try to remember to link to it here when it is.
I’m an econ grad student and I’ve thought a bit about it. Want to pick a time to chat? https://calendly.com/pawtrammell
Thanks for writing this! For all the discussion that population growth/decline has gotten recently in EA(/-adjacent) circles, as a potential top cause area--to the point of PWI being founded and Elon Musk going on about it--there hasn't been much in-depth assessment of the case for it, and I think this goes a fair way toward filling that gap.
One comment: you write that "[f]or a rebound [in population growth] to happen, we would only need a single human group satisfying the following two conditions: long-run above-replacement fertility, and a high enough “r...
Charlotte sort of already addresses this, but just to clarify/emphasize: the fact that prehistoric Australia, with its low population, faced long-term economic and technological (near-)stagnation doesn't imply that adding a person to prehistoric Australia would have increased its growth rate by less than adding a person to an interconnected world of 8 billion.
The historical data on different regions' population sizes and growth rates is entirely compatible with the view that adding a person to prehistoric Australia would have increased its growth rate by more than adding a person to the world today, as implied by a more standard growth model.
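To make this concrete, here is a standard Jones-style semi-endogenous sketch (my notation; neither the post nor this comment commits to this exact model): suppose ideas are produced according to $\dot{A} = \delta L^{\lambda} A^{\phi}$, so the growth rate is $g = \dot{A}/A = \delta L^{\lambda} A^{\phi - 1}$. The marginal effect of an extra person on the growth rate is then $\partial g / \partial L = \lambda \delta L^{\lambda - 1} A^{\phi - 1}$, which is decreasing in $L$ whenever $\lambda < 1$ and decreasing in $A$ whenever $\phi < 1$. On such a model the marginal person raises growth more in a small, technologically primitive population than in a large, advanced one, which is all the historical record requires.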
Cool, thanks for thinking this through!
This is super speculative of course, but if the future involves competition between different civilizations / value systems, do you think having to devote, say, 96% (i.e. 24/25) of a civilization's storage capacity to redundancy would significantly weaken its fitness? I guess it would depend on what fraction of total resources are spent on information storage...?
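Just to put rough numbers on that dependence (with $s$ my own notation): if a civilization devotes fraction $s$ of its total resources to information storage, and $24/25$ of that storage goes to redundancy, then redundancy consumes $0.96s$ of total resources. With $s = 1\%$ that's under $1\%$ of everything, presumably a negligible fitness cost; with $s = 50\%$ it's $48\%$, which seems like it could matter a lot.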
Also, by the same token, even if there is a "singleton" at some relatively early time, mightn't it prefer to take on a non-negligible risk of value drift later ...
Thanks, great post!
You say that "using digital error correction, it would be extremely unlikely that errors would be introduced even across millions or billions of years. (See section 4.2.)" But that's not entirely obvious to me from section 4.2. I understand that error correction is qualitatively very efficient, as you say, in that the probability of an error being introduced per unit time can be made as low as you like at the cost of only making the string of bits a certain small-seeming multiple longer (and my understanding is that multiple shrink...
This is a great question. I think the answer depends on the type of storage you're doing.
If you have a totally static lump of data that you want to encode on a hard drive and not touch for a billion years, I think the challenge is mostly in designing a type of storage unit that won't age. Digital error correction won't help if your whole magnetism-based hard drive loses its magnetism. I'm not sure how hard this is.
But I think more realistically, you want to use a type of hardware that you regularly use, regularly service, and where you can copy the informati...
Glad to hear you find the topics interesting!
First, I should emphasize that it's not designed exclusively for econ grad students. The opening few days try to introduce enough of the relevant background material that mathematically-minded people of any background can follow the rest of it. As you'll have seen, many of the attendees were pre-grad-school, and 18% were undergrads. My impression from the feedback forms and from the in-person experience is that some of the undergrads did struggle, unfortunately, but others got a lot out of it. Check out th...
Thanks for this!
My understanding is that some assets claimed to have a significant illiquidity premium don’t really, including (as you mention) private equity and real estate, but some do, e.g. timber farms: on account of the asymmetric information, no one wants to buy it without prospecting it to see how the trees are coming along. Do you disagree that low-DR investors should disproportionately buy timber farms (at least if they’re rich enough to afford the transaction costs)?
Also, just to clarify my point about 100-year leases from Appendix E: I wasn’t r...
Haha okay, thank you! I agree that it’ll be great if clear examples of impact like this inspire more people to do work along these lines. And I appreciate that aiming for clear impact is valuable for researchers in general, as a way of making sure our claims of impact aren’t just empty stories.
FWIW though, I also think it could be misleading to base our judgment of the impact of some research too much on particular projects with clear and immediate connections to the research—especially in philosophy, since it’s further “upstream”. As this 80k article argues, most ...
I expect that different people at GPI have somewhat different goals for their own research, and that this varies a fair bit between philosophy and economics. But for my part,
On the first point—and apologies if this sounds self-congratulatory or something, but I'm just providing the examples of GPI's impact ...
There are now questions on Metaculus about whether this will pass:
https://www.metaculus.com/questions/8663/us-to-make-patient-philanthropy-harder-soon/
https://www.metaculus.com/questions/8664/patient-philanthropy-harder-in-the-us-by-30/
Cool! I was thinking that this course would be a sort of early-stage / first-pass attempt at a curriculum that could eventually generate a textbook (and/or other materials) if it goes well and is repeated a few times, just as so many other textbooks have begun as lecture notes. But if you'd be willing to make something online / easier-to-update sooner, that could be useful. The slides and so on won't be done for quite a while, but I'll send them to you when they are.
Yup, I'll post the syllabus and slides and so on!
I'll also probably record the lectures, but probably not make them available except to the attendees, so they feel more comfortable asking questions. But if a lecture goes well, I might later use it as a template for a more polished/accessible video that is publicly available. (Some of the topics already have good lectures available online, though; in those cases I'd probably just link to those.)
Whoops, thanks! Issues importing from the Google doc… fixing now.