All of trammell's Comments + Replies

Whoops, thanks! Issues importing from the Google doc… fixing now.

Good to hear, thanks!

I've just edited the intro to say: it's not obvious to me one way or the other whether it's a big deal in the AI risk case. I don't think I know enough about the AI risk case (or any other case) to have much of an opinion, and I certainly don't think anything here is specific enough to come to a conclusion in any case. My hope is just that something here makes it easier for people who do know about particular cases to get started thinking through the problem.

If I had to make a guess about the AI risk case, I'd emphasize my conjecture... (read more)

Thanks for noting this. If in some case there is a positive level of capabilities for which P is 1, then we can just say that the level of capabilities denoted by C = 0 is the maximum level at which P is still 1. The constraint then becomes not C ≥ 0 but C ≥ (something negative), but that doesn't really matter, since you'll never want to set C < 0 anyway. I've added a note to clarify this.
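In symbols, just to spell out the relabeling (the notation $\bar{C}$ is mine): let $\bar{C} = \max\{C : P(C) = 1\}$ and define the relabeled capability level $\tilde{C} = C - \bar{C}$, so that $\tilde{C} = 0$ is the highest level at which $P$ is still 1. The original non-negativity constraint $C \geq 0$ then becomes
$$\tilde{C} \;\geq\; -\bar{C},$$
a negative lower bound that never binds, since you'd never want to set $\tilde{C} < 0$ in the first place.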

Maybe a thought here is that, since there is some stretch of capabilities along which P=1, we should think that P(.) is horizontal around... (read more)

Hey David, I've just finished a rewrite of the paper which I'm hoping to submit soon, which I hope does a decent job of both simplifying it and making clearer what the applications and limitations are: https://philiptrammell.com/static/Existential_Risk_and_Growth.pdf

Presumably the referees will constitute experts on the growth front at least (if it's not desk rejected everywhere!), though the new version is general enough that it doesn't really rely on any particular claims about growth theory.

Hold on, just to try wrapping up the first point--if by "flat" you meant "more concave", why do you say "I don't see how [uncertainty] could flatten out the utility function. This should be in 'Justifying a more cautious portfolio'"?

Did you mean in the original comment to say that you don't see how uncertainty could make the utility function more concave, and that it should therefore also be filed under "Justifying a riskier portfolio"?

I can't speak for Michael of course, but as covered throughout the post, I think that the existing EA writing on this topic has internalized the pro-risk-tolerance points (e.g. that some other funding will be coming from uncorrelated sources) quite a bit more than the anti-risk-tolerance points (e.g. that some of the reasons that many investors seem to value safe investments so much, like "habit formation", could apply to philanthropists to some extent as well). If you feel you and some other EAs have already internalized the latter more than the former, t... (read more)

2
Simon_M
Argh, yes. I meant more concave. No, it doesn't make sense. "We don't know the curvature, ergo it could be anything" is not convincing. What you seem to think is "concrete" seems entirely arbitrary to me.

Thanks! As others have commented, the strength of this consideration (and of many of the other considerations) is quite ambiguous, and I’d love to see more research on it. But at least qualitatively, I think it’s been underappreciated by existing discussion.

Thanks! Hardly the first version of an article like this (or the most clearly written one), but hopefully a bit more thorough…!

I agree! As noted under Richard’s comment, I’m afraid my only excuse is that the points covered are scattered enough that writing a short, accessible summary at the top was a bit of a pain, and I ran out of time to write this before I could make it work. (And I won’t be free again for a while…)

If you or anyone else reading this manages to write one in the meantime, send it over and I’ll stick it at the top.

Thanks! I agree that would be helpful. My only excuse is that the points covered are scattered enough that writing a short, accessible summary at the top was a bit of a pain, and I ran out of time to write this before I could make it work…

Hi Peter, thanks again for your comments on the draft! I think it improved it a lot. And sorry for the late reply here—just got back from vacation.

I agree that the cause variety point includes what you might call “sub-cause variety” (indeed, I changed the title of that bit from “cause area variety” to “cause variety” for that reason). I also agree that it’s a really substantial consideration: one of several that can single-handedly swing the conclusion. I hope you/others find the simple model of Appendix C helpful for starting to quantify just how substant... (read more)

Hi, sorry for the late reply--just got back from vacation.

As with most long posts, I expect this post has whatever popularity it has not because many people read it all, but because they skimmed parts, thought those parts made sense, and felt the overall message resonated with their own intuitions. Likewise, I expect your comment has whatever popularity it has because its readers have different intuitions, and because it looks on a skim as though you’ve shown that a careful reading of the post validates those intuitions instead…! But who knows.

Since there are hard... (read more)

2
Simon_M
I appreciate you think that, and I agree that Michael has said he agrees, but I don't understand why either of you think that. I went point-by-point through your conclusion, and it seems clear to me that the balance is on more risk-taking. I don't see another way to convince me other than putting the arguments you put forward into each bucket, weighting them, and adding them up. Then we can see whether the point of disagreement is in the weights or the arguments. If you think my weightings and comments about your conclusions relied a little too much on intuition, I'll happily spell out those arguments in more detail. Let me know which ones you disagree with and I'll go into more detail.

I think we might be talking at cross purposes here. By flattening here, I meant "less concave" - hence more risk averse. I think we agree on this point?

Ah - this is the problem with editing your posts. It's actually the very last point I make. (And I also made that point at much greater length in an earlier draft.) Essentially, the utility for any philanthropy is less downward sloping than for an individual, because you can always give to a marginal individual. I agree that you can do more funky things in other EA areas, but I don't find any of the arguments convincing. For example, I just thought this was a totally unrealistic model in multiple dimensions, and don't really think it's relevant to anything? I didn't see it as being any different from me just saying "Imagine a philanthropist with an arbitrary utility function which is more curved than an individual's".

Thanks!

No actually, we’re not assuming in general that there’s no secret information. If other people think they have the same prior as you, and think you’re as rational as they are, then the mere fact that they see you disagreeing with them should be enough for them to update on. And vice-versa. So even if two people each have some secret information, there’s still something to be explained as to why they would have a persistent public disagreement. This is what makes the agreement theorem kind of surprisingly powerful.

The point I’m making here though is ... (read more)

2
ChanaMessinger
Right, right, I think on some level this is very unintuitive, and I appreciate you helping me wrap my mind around it - even secret information is not a problem as long as people are not lying about their updates (though if all updates are secret there's obviously much less to update on)

Thanks! Glad to hear you found the framing new and useful, and sorry to hear you found it confusingly written.

On the point about "EA tenets": if you mean normative tenets, then yes, how much you want to update on others' views on that front might be different from how much you want to update on others' empirical beliefs. I think the natural dividing line here would be whether you consider normative tenets more like beliefs (in which case you update when you see others disagreeing--along the lines of this post, say) or more like preferences (in which case y... (read more)

It should, thanks! Fixed.

That said, thanks for sharing the Anthropic Decision Theory paper! I’ll check it out.

The probability of success in some project may be correlated with value conditional on success in many domains, not just ones involving deference, and we typically don’t think that gets in the way of using probabilities in the usual way, no? If you’re wondering whether some corner of something sticking out of the ground is a box of treasure or a huge boulder, maybe you think that the probability you can excavate it is higher if it’s the box of treasure, and that there’s only any value to doing so if it is. The expected value of trying to excavate is P(trea... (read more)
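In symbols, one way the excavation example's calculation might go (just a sketch of the kind of computation being described, not a quote from the full comment):
$$\mathbb{E}[\text{value of trying}] \;=\; P(\text{treasure}) \cdot P(\text{excavation succeeds} \mid \text{treasure}) \cdot V(\text{treasure}),$$
since the boulder branch contributes nothing. The correlation between the probability of success and the value conditional on success just determines which conditional probabilities show up; it doesn't require any departure from ordinary expected-value reasoning.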

4
richard_ngo
From my perspective it's the opposite: epistemic modesty is an incredibly strong skeptical argument (a type of argument that often gets people very confused), extreme forms of which have been popular in EA despite leading to conclusions which conflict strongly with common sense (like "in most cases, one should pay scarcely any attention to what you find the most persuasive view on an issue"). In practice, fortunately, even people who endorse strong epistemic modesty don't actually implement it, and thereby manage to still do useful things. But I haven't yet seen any supporters of epistemic modesty provide a principled way of deciding when to act on their own judgment, in defiance of the conclusions of (a large majority of) the 8 billion other people on earth. By contrast, I think that focusing on policies rather than all-things-considered credences (which is the thing I was gesturing at with my toy example) basically dissolves the problem. I don't expect that you believe me about this, since I haven't yet written this argument up clearly (although I hope to do so soon). But in some sense I'm not claiming anything new here: I think that an individual's all-things-considered deferential credences aren't very useful for almost the exact same reason that it's not very useful to take a group of people and aggregate their beliefs into a single set of "all-people-considered" credences when trying to get them to make a group decision (at least not using naive methods; doing it using prediction markets is more reasonable).

I'm a bit confused by this. Suppose that EA has a good track record on an issue where its beliefs have been unusual from the get-go.... Then I should update towards deferring to EAs

I'm defining a way of picking sides in disagreements that makes more sense than giving everyone equal weight, even from a maximally epistemically modest perspective. The way in which the policy "give EAs more weight all around, because they've got a good track record on things they've been outside the mainstream on" is criticizable on epistemic modesty grounds is that one could ... (read more)

2
richard_ngo
I reject the idea that all-things-considered probabilities are "right" and inside-view probabilities are "wrong", because you should very rarely be using all-things-considered probabilities when making decisions, for reasons of simple arithmetic (as per my example). Tell me what you want to use the probability for and I'll tell you what type of probability you should be using. You might say: look, even if you never actually use all-things-considered probabilities in the real world, at least in theory they're still normatively ideal. But I reject that too—see the Anthropic Decision Theory paper for why.

Would you have a moment to come up with a precise example, like the one at the end of my “minimal solution” section, where the argument of the post would justify putting more weight on community opinions than seems warranted?

No worries if not—not every criticism has to come with its own little essay—but I for one would find that helpful!

1
Chris Leong
Sorry, I’m trying to reduce the amount of time I spend on the forum.

Sorry, I’m afraid I don’t follow on either count. What’s a claim you’re saying would follow from this post but isn’t true?

1
Chris Leong
More weight on community opinions than you suggested.

Hey, I think this sort of work can be really valuable—thanks for doing it, and (Tristan) for reaching out about it the other day!

I wrote up a few pages of comments here (initially just for Tristan but he said he'd be fine with me posting it here). Some of them are about nitpicky typos that probably won't be of interest to anyone but the authors, but I think some will be of general interest.

Despite its length, even this batch of comments just consists of what stood out on a quick skim; there are whole sections (especially of the appendix) that I've barely r... (read more)

1
Tristan Cook
Thanks again Phil for taking the time to read this through and for the in-depth feedback. I hope to take some time to create a follow-up post, working in your suggestions and corrections as external updates to the parameters (e.g. lower total AI risk funding, shorter Metaculus timelines).

This is a fair point. The initial motivator for the project was AI s-risk funding, of which there's pretty much one large funder (and not much work is done on AI s-risk reduction by people and organizations outside the effective altruism community), though the results here are entirely about AI existential risk, which is less well modeled as a single actor. My intuition is that the "one big actor" assumption does work sufficiently well for the AI risk community, given the shared goal (avoid an AI existential catastrophe) and my guess that a lot of the AI risk work done by the community doesn't change the behaviour of AI labs much (i.e. it could be that they choose to put more effort into capabilities over safety because of work done by the AI risk community, but I'm pretty sure this isn't happening).

To comment on this particular error (though not to say that the other errors Phil points to aren't also problematic - I've yet to properly go through them): for what it's worth, the main results of the post suppose zero post-fire-alarm spending,[1] and since in our results we use units of millions of dollars and take the initial capital to be on the order of $1000m, I fortunately don't think we face the problem of a smaller η having the reverse of the desired effect for x < 1. In a future version I expect I'll just take the post-fire-alarm returns to spending to use the same returns exponent η as before the fire alarm, but with some multiplier - i.e. x^η returns to spending before the fire alarm and kx^η afterwards.

[1] Though if one thinks there will be many good opportunities to spend after a fire alarm, our main no-fire-alarm results would likely be an overestimate.
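To spell out the η point with a bit of algebra (assuming, as above, that spending x, measured in millions of dollars, yields returns x^η with 0 < η < 1):
$$\frac{\partial}{\partial \eta}\, x^{\eta} \;=\; x^{\eta} \ln x \;<\; 0 \quad \text{for } 0 < x < 1,$$
so lowering η actually raises modeled returns for spending below one unit (i.e. below $1m), the "reverse of the desired effect" mentioned above; with initial capital on the order of $1000m, that range of spending is negligible, which is why the main results shouldn't be affected.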

By the way, someone wrote this Google doc in 2019 on "Stock Market prediction of transformative technology". I haven't taken a look at it in years, and neither has the author, so understandably enough, they're asking to remain nameless to avoid possible embarrassment. But hopefully it's at least somewhat relevant, in case anyone's interested.

1
basil.halperin
(Nice, thanks for sharing)

Thanks for writing this! I think market data can be a valuable source of information about the probability of various AI scenarios--along with other approaches, like forecasting tournaments, since each has its own strengths and weaknesses. I think it’s a pity that relatively little has yet been written on extracting information about AI timelines from market data, and I’m glad that this post has brought the idea to people’s attention and demonstrated that it’s possible to make at least some progress.

That said, there is one broad limitation to this analysis... (read more)

2
basil.halperin
Thanks for these comments!
  • The short answer here is: yes agreed, the level of real interest rates certainly seems consistent with "market has some probability on TAI and some [possibly smaller] probability on a second dark age".
  • Whether that's a possibility worth putting weight on -- speaking for myself, I'm happy to leave that up to readers.
    • (ie: seems unlikely to me! What would the story there be? Extremely rapid diminishing returns to innovation from the current margin, or faster-than-expected fertility declines?)
  • As you say, perhaps the possibility of the stagnation/degrowth scenario would have other implications for other asset prices, which could be informative for assessing likelihood.

Briefly, to reiterate / expand on a point made by a few other comments: I think the title is somewhat misleading, because it conflates expecting aligned AGI with expecting high growth. People could be expecting aligned AGI but (correctly or incorrectly) not expecting it to dramatically raise the growth rate.

This divergence in expectations isn’t just a technical possibility; a survey of economists attending the NBER conference on the economics of AI last year revealed that most of them do not expect AGI, when it arrives, to dramatically raise the growth rate. The survey should be out in a few weeks, and I’ll try to remember to link to it here when it is.

1
basil.halperin
Yes, to emphasize, the post is meant to define the situation under consideration as: "something close to a 10x increase in growth; or death". We're interested in this scenario only because it's the modal scenario in the particular world of LW/EA/AI safety. The logic of the argument does not apply as forcefully to "smaller" changes (which could potentially still be quite large), and would not apply at all if AI did not increase growth (ie did not decrease marginal utility of consumption)!

Perhaps just a technicality, but: to satisfy the transversality condition, an infinitely lived agent has to have a discount rate of at least r(1-σ). So if σ > 1—i.e. if the utility function is more concave than log—then the time preference rate can be at least a bit negative.
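To spell out the algebra (standard CRRA notation, which may differ slightly from the post's): with interest rate $r$, pure time preference $\rho$, and curvature $\sigma$, consumption on the optimal path grows at rate $(r - \rho)/\sigma$, so discounted utility stays finite along that path only if
$$\rho - (1 - \sigma)\,\frac{r - \rho}{\sigma} \;>\; 0 \;\;\Longleftrightarrow\;\; \rho \;>\; r(1 - \sigma).$$
For $\sigma > 1$ the right-hand side is negative, which is why a mildly negative rate of pure time preference is still admissible.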

Hey, really glad you liked it so much! And thank you for emphasizing that people should consider applying even if they worry they might not fit in--I think this content should be interesting and useful to lots of people outside the small bubbles we're currently drawing from.

Thanks Bruce! Definitely agreed that it was an amazing crowd : )

Thanks James, really glad to hear you feel you got a lot out of it (including after a few months' reflection)!

I’m an econ grad student and I’ve thought a bit about it. Want to pick a time to chat? https://calendly.com/pawtrammell

Thanks for writing this! For all the discussion that population growth/decline has gotten recently in EA(/-adjacent) circles, as a potential top cause area--to the point of PWI being founded and Elon Musk going on about it--there hasn't been much in-depth assessment of the case for it, and I think this goes a fair way toward filling that gap.

One comment: you write that "[f]or a rebound [in population growth] to happen, we would only need a single human group satisfying the following two conditions: long-run above-replacement fertility, and a high enough “r... (read more)

Charlotte sort of already addresses this, but just to clarify/emphasize: the fact that prehistoric Australia, with its low population, faced long-term economic and technological (near-)stagnation doesn't imply that adding a person to prehistoric Australia would have increased its growth rate by less than adding a person to an interconnected world of 8 billion.

The historical data on different regions' population sizes and growth rates is entirely compatible with the view that adding a person to prehistoric Australia would have increased its growth rate by more than adding a person to the world today, as implied by a more standard growth model.

Cool, thanks for thinking this through!

This is super speculative of course, but if the future involves competition between different civilizations / value systems, do you think having to devote say 96% (i.e. 24/25) of a civilization's storage capacity to redundancy would significantly weaken its fitness? I guess it would depend on what fraction of total resources are spent on information storage...?

Also, by the same token, even if there is a "singleton" at some relatively early time, mightn't it prefer to take on a non-negligible risk of value drift later ... (read more)

6
Lukas Finnveden
Depends on how much of their data they'd have to back up like this. If every bit ever produced or operated on instead had to be 25 bits — that seems like a big fitness hit. But if they're only this paranoid about a few crucial files (e.g. the minds of a few decision-makers), then that's cheap.

And there's another question about how much stability contributes to fitness. In humans, cancer tends to not be great for fitness. Analogously, it's possible that most random errors in future civilizations would look less like slowly corrupting values and more like a coordinated whole splintering into squabbling factions that can easily be conquered by a unified enemy. If so, you might think that an institution that cared about stopping value drift and an institution that didn't would both have a similarly large interest in preventing random errors.

The counter-argument is that it will be super rich regardless, so it seems like satiable value systems would be happy to spend a lot on preventing really bad events from happening with small probability, whereas insatiable value systems would notice that most resources are in the cosmos, and so also be obsessed with avoiding unwanted value drift. But yeah, if the values contain a pure time preference, and/or don't care that much about the most probable types of value drift, then it's possible that they wouldn't deem the investment worth it.

Thanks, great post!

You say that "using digital error correction, it would be extremely unlikely that errors would be introduced even across millions or billions of years. (See section 4.2.) " But that's not entirely obvious to me from section 4.2. I understand that error correction is qualitatively very efficient, as you say, in that the probability of an error being introduced per unit time can be made as low as you like at the cost of only making the string of bits a certain small-seeming multiple longer (and my understanding is that multiple shrink... (read more)

This is a great question. I think the answer depends on the type of storage you're doing.

If you have a totally static lump of data that you want to encode in a hard drive and not touch for a billion years, I think the challenge is mostly in designing a type of storage unit that won't age. Digital error correction won't help if your whole magnetism-based hard drive loses its magnetism. I'm not sure how hard this is.

But I think more realistically, you want to use a type of hardware that you regularly use, regularly service, and where you can copy the informati... (read more)

Glad to hear you find the topics interesting!

First, I should emphasize that it's not designed exclusively for econ grad students. The opening few days try to introduce enough of the relevant background  material that mathematically-minded people of any background can follow the rest of it. As you'll have seen, many of the attendees were pre-grad-school, and 18% were undergrads. My impression from the feedback forms and from the in-person experience is that some of the undergrads did struggle, unfortunately, but others got a lot out of it. Check out th... (read more)

Thanks for this!

My understanding is that some assets claimed to have a significant illiquidity premium don’t really, including (as you mention) private equity and real estate, but some do, e.g. timber farms: on account of the asymmetric information, no one wants to buy one without prospecting it to see how the trees are coming along. Do you disagree that low-DR investors should disproportionately buy timber farms (at least if they’re rich enough to afford the transaction costs)?

Also, just to clarify my point about 100-year leases from Appendix E: I wasn’t r... (read more)

5
MichaelDickens
I agree that, if an investment like a timber farm does earn a genuine illiquidity premium, then low-DR investors should like it more than high-DR investors. I calculated under "Theoretical illiquidity premium" that low-DR investors should invest a few extra percentage points in illiquid investments (the exact number depending on parameters). A few percentage points is not that big a difference, so I'd consider it a low-priority change. I don't know much about timber farms, I know I've heard a few people recommend it as a diversifier and that it's not popular outside of very wealthy investors. Seems plausible that it could be a differentially good investment for low-DR philanthropists. Thanks for the Giglio et al. reference, I'll take a look at that.

Haha okay, thank you! I agree that it’ll be great if clear examples of impact like this inspire more people to do work along these lines. And I appreciate that aiming for clear impact is valuable for researchers in general, as a way of making sure our claims of impact aren’t just empty stories.

FWIW though, I also think it could be misleading to base our judgment of the impact of some research too much on particular projects with clear and immediate connections to the research—especially in philosophy, since it’s further “upstream”. As this 80k article argues, most ... (read more)

I expect that different people at GPI have somewhat different goals for their own research, and that this varies a fair bit between philosophy and economics. But for my part,

  • my primary goal is to do research that philanthropists find useful, and
  • my secondary goal is to do research that persuades other academics to see certain important questions in a more "EA" way, and to adjust their own curricula and research accordingly.

On the first point—and apologies if this sounds self-congratulatory or something, but I'm just providing the examples of GPI's impact ... (read more)

8
Jack Cunningham
I love that you are celebrating your successes here! Your parenthetical apologizing for potentially sounding self-congratulatory made me think, "Huh, I'd actually quite like to see more celebration of when theory turns to action." The fact that your work influenced FP to start the Patient Philanthropy Fund is a clear connection demonstrating the potential impact of this kind of research; if you were to shout that from the rooftops, I wouldn't begrudge you! If anything, clarity about the real-world impacts of transformational research into the long-term future would likely inspire others to pursue the field (citation needed).

Hah, sorry to hear that! But thanks for sharing--good to have yet more evidence on this front...!

Right—the primary audience is people who already have a fair bit of background in economics.

Cool! I was thinking that this course would be a sort of early-stage / first-pass attempt at a curriculum that could eventually generate a textbook (and/or other materials) if it goes well and is repeated a few times, just as so many other textbooks have begun as lecture notes. But if you'd be willing to make something online / easier-to-update sooner, that could be useful. The slides and so on won't be done for quite a while, but I'll send them to you when they are.

2
david_reinstein
Yes, it makes sense to first play with this in a flexible way, to figure out what works best and holds together best. But I would love to see your notes and think about ways to incorporate and organize them. (For me 'markdown syntax raw text' files are best ... but whatever you can share is great). By the way, I assume you are familiar with DRB's reading syllabus - An introduction to global priorities research for economists

Yup, I'll post the syllabus and slides and so on!

I'll also probably record the lectures, but probably not make them available except to the attendees, so they feel more comfortable asking questions. But if a lecture goes well, I might later use it as a template for a more polished/accessible video that is publicly available. (Some of the topics already have good lectures available online as well, though; in those cases I'd probably just link to those.)
