Zach Stein-Perlman

Undergraduate and EA organizer at Williams. Prospective longtermist researcher. AI Impacts intern June–September 2022. Elections junkie.

For the rest of 2022, I'm in Ann Arbor (–June), NYC (June), Berkeley (June–September), and Oxford (September–). Please reach out if you're nearby or if we might have overlapping interests!

Some things I'd be excited to talk about:

  • What happens after an intelligence explosion
  • What happens if most people come to appreciate AI's importance
  • International relations in the context of powerful AI
  • Policy responses to AI — what's likely to happen and what would be good


Comments

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

I agree with you that the limit of the EV of "bet until you win n times" is infinite as n→∞. But I agree with Guy Raveh that we probably can't just take this limit and call it the EV of "always bet." Maybe it depends on what precise question we're asking...
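For concreteness, here's the limit in question, under the post's setup as I understand it (each accepted bet doubles your value with probability 215/216 and zeroes it otherwise, starting from 10 — a reconstruction, not a quote):

```latex
% EV of "bet until you win n times, then stop," under the setup above:
% you survive n bets w.p. (215/216)^n, ending with value at least 10 * 2^n.
\mathbb{E}[V_n] \;\ge\; \left(\frac{215}{216}\right)^{\!n} \cdot 10 \cdot 2^n
  \;=\; 10\left(\frac{215}{108}\right)^{\!n}
  \;\xrightarrow{\;n\to\infty\;}\; \infty
```

So every stop-after-n-wins strategy has enormous EV, even though "always bet" ends at value 0 with probability 1 — which is exactly why passing to the limit is suspect.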

(parent comment edited)

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

[Parent comment deleted, so I integrated this comment into its grandparent]

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

Well, what value do we assign a scenario in which you're still talking with the demon? If those get value 0, then sure, the EV is 0. But calling those scenarios 0 seems problematic, I think.

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

The expected value is, in fact, defined, and it is zero.

Is the random variable you're thinking of, whose expectation is zero, just the random variable that's uniformly zero? That doesn't seem to me to be the right way to describe the "bet" strategy; I would prefer to say the random variable is undefined. (But calling it zero certainly doesn't seem to be a crazy convention.)

AABoyles's Shortform

If there's at least a 1% chance that we don't experience catastrophe soon, and we can have reasonable expected influence over no-catastrophe-soon futures, and there's a reasonable chance that such futures have astronomical importance, then patient philanthropy is quite good in expectation. Given my empirical beliefs, it's much better than GiveDirectly. And that's just a lower bound; e.g., investing in movement-building might well be even better.
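A minimal back-of-the-envelope version of this argument (every number below is an illustrative placeholder, not a claim from the comment):

```python
# Sketch of the patient-philanthropy lower bound; all values are
# hypothetical placeholders except the 1% floor stated in the comment.
p_no_catastrophe = 0.01  # "at least a 1% chance" of no catastrophe soon
influence = 0.1          # fraction of expected influence retained over such futures
value_ratio = 1e6        # value of that influence relative to a marginal dollar today

ev_patient = p_no_catastrophe * influence * value_ratio
ev_givedirectly = 1.0    # normalize a GiveDirectly dollar to 1 unit of value

print(ev_patient)                    # 1000.0 under these placeholders
print(ev_patient > ev_givedirectly)  # True: even the lower bound dominates
```

The conclusion is robust to large changes in the placeholders because the astronomical-importance term swamps the other factors.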

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

Short answer: yes.

Medium answer: observe that the value after winning n bets is at least 10×2^n, and the limit of (215/216)^n × 10×2^n as n→∞ is infinite. Or let ω be an infinite number and observe that the infinitesimal probability (215/216)^ω times the infinite value 10×2^ω is 10×(2×215/216)^ω = 10×(215/108)^ω, which is infinite. (Note also that the EV of the strategy "bet with probability p" goes to ∞ as p→1.)
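A quick numeric check of that divergence (just the arithmetic above, nothing more):

```python
# EV of "stop after n wins": (215/216)^n * 10 * 2^n = 10 * (215/108)^n.
# Since 2 * 215/216 > 1, the doubling outpaces the risk of ruin.
for n in (1, 10, 100, 1000):
    ev = 10 * (2 * 215 / 216) ** n
    print(n, ev)  # grows without bound as n increases
```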

Edit: hmm, in response to comments, I'd rephrase as follows.

Yes, the "always bet" strategy has value 0 with probability 1. But if a random variable is 0 with probability measure 1 and is undefined with probability measure 0, we can't just say it's identical to the zero random variable or that it has expected value zero (I think, happy to be corrected with a link to a math source). And while 'infinite bets' doesn't really make sense, if we have to think of 'the state after infinite bets' I think we could describe its value with the aforementioned random variable.

While you will never actually create a world with this strategy, I don't think the expected value is defined because 'after infinite bets' you could still be talking to the demon (with probability 0, but still possibly, and talking-with-the-demon-after-winning-infinitely-many-bets seems significant even with probability 0).
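To spell out the random variable I have in mind (a formalization sketch, not from the original comment):

```latex
% Sample space: infinite sequences of bet outcomes (win w.p. 215/216 each).
% Let L(\omega) be the index of the first loss, with L = \infty if you win forever.
V(\omega) =
\begin{cases}
  0 & \text{if } L(\omega) < \infty \quad \text{(you eventually lose; } P = 1\text{)} \\
  \text{undefined} & \text{if } L(\omega) = \infty \quad \text{(still betting; } P = 0\text{)}
\end{cases}
```

Whether E[V] is 0 or undefined then turns on whether we may freely extend V on the probability-zero event — which is exactly the convention at issue.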

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

You could say, that the strategy “always take the demons bet” has an expected value of 0 QUALYs

  • This is not true. The expected value of this strategy is undefined. [Edit: commenters reasonably disagree.]

  • So maybe we want to keep our normal expected-utility-maximizing behavior for some nicely-behaved prospects, where "nicely-behaved" includes conditions like "expected values are defined," and accept that we might have to throw up our hands otherwise.

  • That said, I agree that thought experiments like this give us at least some reason to disfavor simple expected-utility-maximizing, but I caution against jumping to "expected-utility-maximizing is wrong," since (in my opinion) other (non-nihilistic) theories are even worse. It's easy to criticize a particular theory; if you offered a particular alternative, there would be strong objections to that theory too.

  • Regardless, note that you can have a bounded utility function and be a longtermist, depending on the specifics of the theory. And it's prima facie the case that (say) x-risk reduction is very good even if you only care about (say) the next billion years.

  • (QUALY should be QALY)

Ben Garfinkel's Shortform

Related:

The uncertainty and error-proneness of our first-order assessments of risk is itself something we must factor into our all-things-considered probability assignments. This factor often dominates in low-probability, high-consequence risks—especially those involving poorly understood natural phenomena, complex social dynamics, or new technology, or that are difficult to assess for other reasons. Suppose that some scientific analysis A indicates that some catastrophe X has an extremely small probability P(X) of occurring. Then the probability that A has some hidden crucial flaw may easily be much greater than P(X). Furthermore, the conditional probability of X given that A is crucially flawed, P(X|¬A), may be fairly high. We may then find that most of the risk of X resides in the uncertainty of our scientific assessment that P(X) was small.

(source)
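A toy numeric version of the quoted point (all probabilities below are made up for illustration):

```python
# Illustration of the quoted argument; every number here is hypothetical.
p_x_if_sound  = 1e-9   # analysis A's estimate of catastrophe X
p_flaw        = 1e-3   # chance A has a hidden crucial flaw
p_x_if_flawed = 1e-4   # P(X | A crucially flawed): "may be fairly high"

p_x = (1 - p_flaw) * p_x_if_sound + p_flaw * p_x_if_flawed
print(p_x)  # ~1.0e-07: the flawed-analysis term dominates A's tiny estimate
```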

Should U.S. donors give to an EA-focused PAC like Guarding Against Pandemics instead of individual campaigns?

Oh, this seems like an excellent point. I'll try to learn more, but in the meantime you've changed my mind. I'll edit the parent comment.

Also, for the how-the-PAC-supports-candidates question, it would be useful to know what specific kind of PAC the GAP PAC is. (A "multi-candidate PAC"?) I didn't find this quickly on Google, but surely it's public.

Should U.S. donors give to an EA-focused PAC like Guarding Against Pandemics instead of individual campaigns?

EDIT: probably, in general. Direct donations are better for electing candidates, but donations to a PAC like GAP's are better for influencing them, and the latter is generally more tractable.


Probably not, particularly if you're interested enough to research individual candidates.

(1) As a member of the GAP team recently noted, it's significantly better for candidates to get a dollar of direct donations than a dollar of PAC support.

(2) GAP is nonpartisan, with good reason; but insofar as you have reason to believe that electing officials from one party is better than electing those from the other, you should avoid supporting the other party in competitive general elections.

(1 is a much bigger reason than 2, and as a quick lower bound on effectiveness, just donating to a random GAP endorsee would be better in expectation than donating to the GAP PAC.)

(Diversification considerations are minimal at the scale of an individual's contributions: the last dollar you donate to a candidate is almost as effective as the first. See, e.g., Giving Your All.)

(Also note that most of GAP's endorsees are not "EA-aligned," they're just more anti-pandemic than most.)
