davidc

I don't have much confidence in how AI will go, so this is very speculative, but here's one consideration for personal planning that I think about:

If AI does become as powerful as some hope (and doesn't kill us all), then maybe your personal situation (money, power) at a particular crucial point will be very important. Examples:

  • are you still alive when crucial health advances come that could keep you alive much longer?
  • can you afford those crucial health advances? (for yourself and/or loved ones)
  • are you still alive when technology to "upload" your mind works well, and can you afford it?
  • is there going to be some future grab for resources at a crucial time (before or after uploading...), and will you be in a good position for that?
    • hard for me to speculate about what those resources are, but for a probably-quite-silly example: Maybe we'll auction off whole solar systems?

How you answer these questions could affect whether you live for the next million years, and what that life is like. I see those as reasons to prioritize personal health, money, and power more than you would otherwise.

Note: I'm not actually living my life according to this prescription. If I had to say why, it's partly that I expect AI progress will probably stall out before producing the kind of scientific/tech breakthroughs that would allow for uploading minds. But even a small chance could be worth optimizing for, so I'm not sure I'm being rational about this.

(This is about personal planning, but sort of parallels some EA considerations, like "value lock-in".)

This seems mostly reasonable, but it has some unstated (rare!) exceptions that perhaps seem too obvious to spell out, though I think it would be good to state them anyway.

E.g. if you already have reason to believe an organization isn't engaging in good faith, or is inclined to take retribution, then giving them more time to plan that response doesn't necessarily make sense.

There are probably other, less extreme examples along the same lines.

I wouldn't be writing this comment if the language in the post hedged a bit more / left more room for exceptions, but reading a sentence like this makes me want to talk about exceptions:

When posting critical things publicly, however, unless it's very time-sensitive we should be letting orgs review a draft first.

We can't sustain current growth levels

Is this about GDP growth or something else? Sustaining 2% GDP growth for a century (or a few) seems reasonably plausible?
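As a rough sanity check on what that would mean (my own back-of-the-envelope illustration, not something from the post):

```python
# Back-of-the-envelope check (my own illustration): how much the economy grows
# if 2% annual growth is sustained for a century or a few centuries.
for years in (100, 300):
    factor = 1.02 ** years
    print(f"2% annual growth sustained for {years} years -> ~{factor:.3g}x larger economy")

# Roughly: ~7x after 100 years, ~380x after 300 years.
```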

Not quite the same question, but I believe ACE started as one of CEA's children and is a separate entity now.

It still doesn't fully entail Matt's claim, but the content of the interview gets a lot closer than that description. You don't need to give it a full listen; I've quoted the relevant part:

https://forum.effectivealtruism.org/posts/THgezaPxhvoizkRFy/clarifications-on-diminishing-returns-and-risk-aversion-in?commentId=ppyzWLuhkuRJCifsx

When I listened to the interview, I briefly thought to myself that that level of risk-neutrality didn't make sense. But I didn't say anything about it to anyone, and I'm pretty sure I also didn't think through the actual implications if Sam were serious about it.

I wonder if we could have taken that as a red flag. If you take what he said seriously, it's pretty concerning (it implies a high chance of losing everything, though not necessarily anything like what actually happened)! There's a rough sketch of this below, after the quoted passage.

Seems worthwhile to quote the relevant bit of the interview:

====

Sam Bankman-Fried: If your goal is to have impact on the world — and in particular if your goal is to maximize the amount of impact that you have on the world — that has pretty strong implications for what you end up doing. Among other things, if you really are trying to maximize your impact, then at what point do you start hitting decreasing marginal returns? Well, in terms of doing good, there’s no such thing: more good is more good. It’s not like you did some good, so good doesn’t matter anymore. But how about money? Are you able to donate so much that money doesn’t matter anymore? And the answer is, I don’t exactly know. But you’re thinking about the scale of the world there, right? At what point are you out of ways for the world to spend money to change?

Sam Bankman-Fried: There’s eight billion people. Government budgets run in the tens of trillions per year. It’s a really massive scale. You take one disease, and that’s a billion a year to help mitigate the effects of one tropical disease. So it’s unclear exactly what the answer is, but it’s at least billions per year probably, so at least 100 billion overall before you risk running out of good things to do with money. I think that’s actually a really powerful fact. That means that you should be pretty aggressive with what you’re doing, and really trying to hit home runs rather than just have some impact — because the upside is just absolutely enormous.

Rob Wiblin: Yeah. Our instincts about how much risk to take on are trained on the fact that in day-to-day life, the upside for us as individuals is super limited. Even if you become a millionaire, there’s just only so much incrementally better that your life is going to be — and getting wiped out is very bad by contrast.

Rob Wiblin: But when it comes to doing good, you don’t hit declining returns like that at all. Or not really on the scale of the amount of money that any one person can make. So you kind of want to just be risk neutral. As an individual, to make a bet where it’s like, “I’m going to gamble my $10 billion and either get $20 billion or $0, with equal probability” would be madness. But from an altruistic point of view, it’s not so crazy. Maybe that’s an even bet, but you should be much more open to making radical gambles like that.

Sam Bankman-Fried: Completely agree. ...
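====

To make the worry concrete, here's a minimal sketch (my own illustration, not anything from the interview) of what repeated even-odds double-or-nothing bets of the kind Rob describes imply: the expected value never changes, but the probability of ending up with nothing rapidly approaches one.

```python
# Minimal sketch (my own illustration): a fully risk-neutral actor who keeps taking
# even-odds double-or-nothing bets preserves expected value at every step, but the
# chance of eventually being wiped out goes to 1.
def after_n_bets(initial_wealth: float, n: int):
    survive_prob = 0.5 ** n                              # must win every flip to keep anything
    wealth_if_survive = initial_wealth * 2 ** n
    expected_value = survive_prob * wealth_if_survive    # always equals initial_wealth
    return survive_prob, expected_value

for n in (1, 5, 10, 20):
    p, ev = after_n_bets(10e9, n)  # Rob's $10 billion example
    print(f"after {n} bets: P(not wiped out) = {p:.6f}, expected value = ${ev:,.0f}")
```

On this toy model the expected value stays at $10 billion throughout, while the chance of still having anything after 20 bets is under one in a million.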
