
lexande

512 karma · Joined Nov 2018

Posts: 1

Comments: 35

lexande
5mo

If you're in charge of investing decisions for a pension fund or sovereign wealth fund or similar, you likely can't personally derive any benefit from having the fund sell off its bonds and other long-term assets now. You might do this in your personal account, but the impact will be small.

For government bonds in particular, it also seems relevant that most are, I think, held by entities that are effectively required to hold them (e.g. by bank capital requirements or pension fund regulations) or are otherwise oddly insensitive to their low ROI compared to alternatives. See also the "equity premium puzzle".

lexande
5mo

Beyond just taking vacation days, if you're a bond trader who believes there is a very high chance of xrisk in the next five years, it probably makes sense to quit your job and fund your consumption out of your retirement savings, at which point you aren't a bond trader anymore and your beliefs no longer have much impact on bond prices.

lexande
5mo

From an altruistic point of view, your money can probably do a lot more good in worlds with longer timelines. During an explosive growth period humanity will be so rich that it will likely be fine without our help, whereas if there's a long AI winter there will be a lot of people who still need bednets, protection from biological xrisks, and other philanthropic support. Furthermore, in long-timeline worlds there's a much better chance that your money can actually make a difference in solving AI alignment before AGI is eventually developed. So if anything I think the appropriate altruistic investment approach is the opposite of what this post suggests: even if you think that timelines will be short, you should bet that they will be long.
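A minimal back-of-the-envelope sketch of that last claim. All numbers are made-up assumptions, not figures from the post, and it assumes (unrealistically simply) that you can make a bet at odds that are fair under your own credence:

```python
# Toy numbers; these are assumptions for illustration, not figures from the post.
p_short = 0.8   # your credence that timelines are short (explosive growth soon)
v_short = 0.1   # altruistic value of a marginal dollar in a short-timeline world
v_long = 1.0    # altruistic value of a marginal dollar in a long-timeline world

# Baseline: just hold the dollar, whatever happens.
ev_hold = p_short * v_short + (1 - p_short) * v_long  # 0.28

# "Bet on long timelines": give up the dollar in short-timeline worlds in
# exchange for an actuarially fair payoff in long-timeline worlds.
payoff_if_long = 1 / (1 - p_short)  # 5 dollars
ev_bet = (1 - p_short) * payoff_if_long * v_long  # 1.0

# Expected dollars are identical (1 either way), but the bet delivers them
# where they do the most good, so its expected altruistic value is higher.
print(ev_hold, ev_bet)
```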

From a personal point of view, it's likewise true that marginal dollars are much more useful to you during an AI winter than during an explosive growth period (when everyone will be pretty rich anyway), so you should make trades that move money from short-timeline futures to long-timeline ones. But I do agree with the post that short timelines should increase your propensity to consume today. (The "borrow today" proposal is impractical since nobody will actually lend you significant amounts of money unsecured, but you might want to spend down savings faster than you otherwise would.)

lexande
5mo

I think a fair number of market participants may have something like a probability estimate for transformative AI within five years, and maybe even ten. (For example, back when SoftBank was throwing money at everything that looked like a tech company, they justified it with a thesis along the lines of "transformative AI is coming soon", and this would drive some other market participants to think about the truth of that thesis and its implications even if they wouldn't otherwise.) But I think you are right that basically no market participants have a probability estimate for transformative AI (or almost anything else) 30 years out; they aren't trying to make predictions that far out and don't expect to do significantly better than noise if they did try.

lexande
5mo

A few years ago I asked around among finance and finance-adjacent friends about whether the interest rates on 30- or 50-year government bonds had implications for what the market or its participants believed regarding xrisk or transformative AI, but I eventually became convinced that they do not.

As far as I can tell, nobody is even particularly trying to predict 30+ years out. My impression is:

  • A typical marginal 30-year bond investor is betting that interest rates will be even lower in 5-10 years, at which point they can sell their 30-year bond for a profit, since it will have a higher locked-in interest rate than anything being issued then (see the sketch after this list).
  • Lots of market actors have a regulatory obligation (e.g. bank capital requirements) to buy government bonds, which drives the interest rate on such bonds down a lot, to the point that it can be significantly negative for long periods even when the market generally expects the economy to grow. Corporate bonds have less of this issue but are almost never issued for such long durations.
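A minimal sketch of the mechanics behind the first bullet, using a simple discounted-cash-flow bond price; the rates and horizons are purely illustrative assumptions, not claims about actual markets:

```python
# Illustrative only: price a bond as the discounted value of its coupons and principal.
def bond_price(face, coupon_rate, market_rate, years):
    """Present value of annual coupons plus principal, discounted at market_rate."""
    coupons = sum(face * coupon_rate / (1 + market_rate) ** t for t in range(1, years + 1))
    principal = face / (1 + market_rate) ** years
    return coupons + principal

# Buy a 30-year bond at par while market rates are 3%...
buy_price = bond_price(face=100, coupon_rate=0.03, market_rate=0.03, years=30)   # 100.0

# ...then suppose rates have fallen to 1% five years later, with 25 years of
# 3% coupons still locked in; the bond now sells well above what you paid.
sell_price = bond_price(face=100, coupon_rate=0.03, market_rate=0.01, years=25)  # ~144

print(round(buy_price, 2), round(sell_price, 2))
```

The point is that such an investor's profit comes entirely from a medium-term view on rates, not from any belief about the world 30 years from now.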

It's true that the market clearly doesn't believe in extremely short timelines (like real GDP either doubling or going to zero in the next 5-10 years). But I think it mostly doesn't have beliefs about 30+ years out, or if it does, their impact on prices is swamped by its beliefs about nearer-term things.

lexande
5mo

Nobody will give you an unsecured loan to fund consumption or donations with most of the repayment not due for 15+ years; most people in our society who would borrow on such terms would default. (You can get close with some types of student loan, so if there's education that you'd experience as intrinsically valued consumption, or be able to rapidly apply to philanthropic ends, then this post suggests you should perhaps be more willing to borrow to fund it than you would be otherwise, but your personal upside there is pretty limited.)

lexande
1y

Is there a link to what OpenPhil considers their existing cause areas? The Open Prompt asks for new cause areas, so things that you already fund or intend to fund are presumably ineligible, but while the Cause Exploration Prize page gives some examples, it doesn't link to a clear list of what all of these are. In a few minutes of looking around the Openphilanthropy.org site, the lists I could find were either much more general than you're looking for here (lists of thematic areas like "Science for Global Health") or more specific (lists of individual grants awarded), but I may be missing something.

Maybe, though given the unilateralist's curse and other issues of the sort discussed by 80k here, I think it might not be good for many people currently on the fence about whether to found EA orgs/megaprojects to do so. There might be a shortage of "good" orgs, but that's not necessarily a problem you can solve by throwing founders at it.

It also often seems to me that orgs with the right focus already exist (and founding additional ones with the same focus would just duplicate effort) but are unable to scale up well, and so I suspect "management capacity" is a significant bottleneck for EA. But scaling up organizations is a fundamentally hard problem, and it's entirely normal for companies doing so to see huge decreases in efficiency (which if they're lucky are compensated for by economies of scale elsewhere).

lexande
1y

the primary constraint has shifted from money to people

This seems like an incorrect or at best misleading description of the situation. EA plausibly now has more money than it knows what to do with (at least if you want to do better than GiveDirectly), but it also has more people than it knows what to do with. Exactly what the primary constraint is now is hard to know confidently or summarise succinctly, but it's pretty clearly neither of those. (80k discusses some of the issues with a "people-constrained" framing here.) In general, large-scale problems that can be solved by just throwing money or people at them are the exception, not the rule.

For some cause areas the constraint is plausibly direct workers with some particular set of capabilities. But even among people who want to dedicate their careers to EA, most could not become effective AI safety researchers (for example) no matter how hard they tried. Indeed, merely trying may have negative impact in the typical case, due to the opportunity cost of interviewers' time etc. (even if it's EV-positive given the information the applicant has). One of the nice things about money is that it basically can't hurt, and indeed arguments about the overhead of managing volunteer/unspecialised labour were part of how we wound up with the donation focus in the first place.

I think there is a large fraction of the population for whom donating remains the most good they can do, focusing on whatever problems are still constrained by money (GiveDirectly if nothing else), because the other problems are constrained by capabilities or resources which they don't personally have or control. The shift from a donation focus to a direct-work focus isn't just increasing demandingness for these people; it's telling them they can't meaningfully contribute at all. Of course, inasmuch as it's true that a particular direct-work job is more impactful than a very large amount of donations, it's important to be open and honest about this so that those who actually do have the required capabilities can make the right decisions and tradeoffs. But this is fundamentally in tension with building a functioning and supportive community, because people need to feel like their community won't abandon them if they turn out to be unable to get a direct-work job (and this is especially true when a lot of the direct work in question is "hits-based" longshots where failure is the norm). I worry that even people who could potentially have extraordinarily high impact as direct workers might be put off by a community that doesn't seem like it would continue to value them if their direct-work plans didn't pan out.

I really enjoyed this post, but have a few issues that make me less concerned about the problem than the conclusion would suggest:

- Your dismissal in section X of the "weight by simplicity" approach seems weak/wrong to me. You treat it as a point against such an approach that one would pay to "rearrange" people from more complex to simpler worlds, but that seems fine actually, since in that frame it's moving people from less likely/common worlds to more likely/common ones.

- I lean towards conceptions of what makes a morally relevant agent (or experience) under which there are only countably many of them. It seems like two people with the exact same full life-experience history are the same person, and the same seems plausible for two people whose life-experience histories can't be distinguished by any finite process, in which case each person can be specified by finitely much information and so there are at most countably many of them. I think if you're willing to put 100% credence on some pretty plausible physics you can maybe even get down to finitely many possible morally relevant, morally distinct people, since entropy and the speed of light may bound how large a person can be.

- My actual current preferred ethics is essentially "what would I prefer if I were going to be assigned at random to one of the morally relevant lives ever eventually lived" (biting the resulting "sadistic conclusion"-flavoured bullets). For infinite populations this requires that I have some measure on the population, and if I have to choose the measure arbitrarily then I'm subject to most of the criticisms in this post. However, I believe the infinite cosmology hypotheses referenced generally come along with fundamental measures? Indeed, a measure over all the people one might be seems like it might be necessary for a hypothesis that purports to describe the universe in which we in fact find ourselves. If I have to dismiss hypotheticals that don't provide me with a measure on the population as ill-formed, and assign zero credence to universes without a fundamental measure, that's a point against my approach, but I think not a fatal one.
