Ross Rheingans-Yoo

Comments

Makes total sense not to invest in the charitable side -- I'm generally of a similar mind.[1] The reason I'm curious is that "consider it as two separate accounts" is the most-compelling argument I've seen against tithing investment gains. (The argument is basically that if both accounts were fully invested, then tithing gains from the personal account to the charity account leads to a total ratio of roughly 4:1 between them as withdrawal time -> ∞, not a 9:1 ratio.[2] Then why does distribution out of the charity account affect the 'right' additional amount to give out of the personal account?)

Another way to count it: if you believe that the returns on effective charity are greater than private investment returns, and so always make donations as soon as possible, then tithing 10% at the start and tithing ~10% of realized gains after N years is dominated by just giving a correspondingly larger share (roughly 18%, as in the footnoted example) up-front -- the charity account gets its money earlier, at the higher return, and the personal account ends up no worse off.

Probably this is most relevant to startup employees, who might receive "$100,000 in equity" that they can only sell when it later exits for, say, 10x that. Should a 10% pledge mean $10,000 up-front and $90,000 of the exit (10% when paid + 10% of gains), or just $100,000 of the exit (10% went to the charity account, then exited)?[3]

(Sorry, don't mean to jump on your personal post with this tangent -- am happy to chat if you find this interesting to think about, but can also write my own post about it on my own time if not.)

  1. ^

    The one case where I do think investment can make sense is where I want to direct the funding to accelerating the program of a for-profit company, eg in biotech, and the right way to do so is via direct investment. I do think there are such cases that can be on the frontier of most-effective in EV terms (and for them I only count it as effective giving if I precommit to re-giving any proceeds, without re-counting it as a donation for pledge purposes).

  2. ^

    Consider receiving $1,000 in salary, splitting it $100 : $900 between the accounts, investing each so they grow 10x and become $1,000 : $9,000, then realizing the personal investment gains ($8,100) and tithing $810 on them. Now the accounts are $1,810 : $8,190, which seems a lot more like "giving 18%" than "giving 10%"! (A short numerical sketch of both conventions follows these footnotes.)

  3. ^

    If the correct baseline is "10% of the exit", should this be any different from the case of a salaried worker who makes the $100,000 in cash and puts it in an index fund until it grows 10x? Or what about a professional trader who "realizes gains" frequently with daily trading, but doesn't take any of the money out until after many iterations?
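As referenced in footnote 2, here is a minimal numerical sketch of the two conventions (Python; illustrative numbers only, not anyone's actual accounting -- "growth" could just as well be a startup exit multiple as index-fund returns):

```python
# Illustrative only: a $1,000 paycheck, a 10% pledge, and 10x investment growth
# (the numbers from footnote 2).

def tithe_then_tithe_gains(income=1_000, pledge=0.10, growth=10):
    """Give 10% now, invest both accounts, then tithe 10% of realized personal gains."""
    charity = income * pledge * growth          # $100 invested for charity -> $1,000
    personal = income * (1 - pledge) * growth   # $900 invested personally  -> $9,000
    gains = personal - income * (1 - pledge)    # $8,100 of realized gains
    tithe_on_gains = pledge * gains             # $810 more moves to the charity side
    return charity + tithe_on_gains, personal - tithe_on_gains

def separate_accounts(income=1_000, pledge=0.10, growth=10):
    """Treat the pledge as its own invested account; donate whatever it grows to."""
    return income * pledge * growth, income * (1 - pledge) * growth

print([round(x) for x in tithe_then_tithe_gains()])  # [1810, 8190] -> ~18% given in total
print([round(x) for x in separate_accounts()])       # [1000, 9000] -> exactly 10% given
```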

  1. Thought-provoking post; thanks for sharing!

  2. A bit of a tangential point, but I'm curious, because it's something I've also considered:

putting 10% of my paycheck directly in a second account which was exclusively for charity

What do you do with investment income? It's pretty intuitive that if you're "investing to give" and you have $9,000 of personal savings and $1,000 of donation-investments and they both go up 10% over a year, that you should have $9,900 of personal savings and $1,100 of donation-investments. But what would you (or do you) do differently if you put the money into the accounts, donated half of the charity account, and then ended up with $9,900 in personal savings (a $900 annual gain) and $550 in savings-for-giving (a $50 annual gain)?

I have heard at least three different suggestions for how to do this sort of accounting, but am curious what you go with, since the rest of your perspective seems fairly intentional and considered!

I'd argue that you need to use a point estimate to decide what bets to make, and that you should make that point estimate by (1) geomean-pooling raw estimates of parameters, (2) reasoning over distributions of all parameters, then (3) taking arithmean of the resulting distribution-over-probabilities and (4) acting according to that mean probability.

I think "act according to that mean probability" is wrong for many important decisions you might want to take - analogous to buying a lot of trousers with 1.97 legs in my example in the essay. No additional comment if that is what you meant though and were just using shorthand for that position.

Clarifying: I do agree that there are some situations where you need something other than a subjective p(risk) to compare EV(value|action A) with EV(value|action B). I don't actually know how to construct a clear analogy from the 1.97-legged-trousers example when the variable we're taking the mean of is a probability (though I agree that there are non-analogous examples; value of information, for example).


I'll go further, though, and claim that what really matters is what worlds the risk is distributed over, and that expanding the point-estimate probability to a distribution of probabilities, by itself, doesn't add any real value. If it is to be a valuable exercise, you have to be careful what you're expanding and what you're refusing to expand.

More concretely, you want to be expanding over things your intervention won't control, and then asking about your intervention's effect at each point in things-you-won't-control-space, then integrating back together. If you expand over just any axis of uncertainty, then not only is there a multiplicity of valid expansions, but the natural interpretation of the result will be misleading.

For example, say we have a 10% chance of drawing a dangerous ball from a series of urns, and a 90% chance of drawing a safe one. If we describe it as (1) "50% chance of 9.9% risk, 50% chance of 10.1% risk" or (2) "50% chance of 19% risk, 50% chance of 1% risk" or (3) "10% chance of 99.1% risk, 90% chance of 0.1% risk", how does it change our opinion of <intervention A>? (You can, of course, construct a two-step ball-drawing procedure that produces any of these distributions-over-probabilities.)

I think the natural intuition is that interventions are best in (2), because most probabilities of risk are middle-ish, and worst in (3), because probability of risk is near-determined. And this, I think, is analogous to the argument of the post that anti-AI-risk interventions are less valuable than the point-estimate probability would indicate.

But that argument assumes (and requires) that our interventions can only change the second ball-drawing step, and not the first. So using that argument requires that, in the first place, we sliced the distribution up over things we couldn't control. (If that is the thing we can control with our intervention, then interventions are best in the world of (3).)
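A minimal sketch of the urn example (Python; the weights and risks are the three expansions above, and the two toy interventions are my own illustration): all three expansions have the same 10% marginal risk, halving the second-step conditional risk is worth the same 5 points under each, but shifting 10 points of first-step weight toward the safer branch is worth the most under expansion (3):

```python
# The three expansions from the example above, as (weight, conditional risk) pairs.
expansions = {
    "(1)": [(0.5, 0.099), (0.5, 0.101)],
    "(2)": [(0.5, 0.19), (0.5, 0.01)],
    "(3)": [(0.1, 0.991), (0.9, 0.001)],
}

def marginal(branches):
    """Total risk, integrating back over the branches."""
    return sum(w * r for w, r in branches)

def halve_second_step(branches):
    """Toy intervention on the second step: halve the risk conditional on each branch."""
    return [(w, r / 2) for w, r in branches]

def shift_first_step(branches, shift=0.1):
    """Toy intervention on the first step: move `shift` of the weight from the
    riskier branch onto the safer branch."""
    (w_hi, r_hi), (w_lo, r_lo) = sorted(branches, key=lambda b: -b[1])
    return [(w_hi - shift, r_hi), (w_lo + shift, r_lo)]

for name, branches in expansions.items():
    base = marginal(branches)
    print(name,
          f"baseline {base:.3f}",
          f"halving step 2 saves {base - marginal(halve_second_step(branches)):.4f}",
          f"shifting step 1 saves {base - marginal(shift_first_step(branches)):.4f}")
# Baseline is 0.100 in all three cases, and halving the second step saves 0.0500
# everywhere; but shifting the first step saves 0.0002, 0.0180, and 0.0990
# respectively -- most valuable in expansion (3).
```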


Back to the argument of the original post: You're deriving a distribution over several p(X|Y) parameters from expert surveys, and so the bottom-line distribution over total probabilities reflects the uncertainty in experts' opinions on those conditional probabilities. Is it right to model our potential interventions as influencing the resolution of particular p(X|Y) rolls, or as influencing the distribution of p(X|Y) at a particular stage?

I claim it's possible to argue either side.

Maybe a question like "p(much harder to build aligned than misaligned AGI | strong incentives to build AGI systems)" (the second survey question) is split between a quarter of the experts saying ~0% and three-quarters of the experts saying ~100%. (This extremizes the example, to sharpen the hypothetical analysis.) We interpret this as saying there's a one-quarter chance we're ~perfectly safe and a three-quarters chance that it's hopeless to develop an aligned AGI instead of a misaligned one.

If we interpret that as if God will roll a die and put us in the "much harder" world with three-quarters probability and the "not much harder" world with one-quarter probability, then maybe our work to increase the chance that we get an aligned AGI is low-value, because it's unlikely to move either the ~0% or ~100% much lower (and we can't change the die). If this were the only stage, then maybe all of working on AGI risk is worthless.

But "three-quarter chance it's hopeless" is also consistent with a scenario where there's a three-quarters chance that AGI development will be available to anyone, and many low-resourced actors will not have alignment teams and find it ~impossible to develop with alignment, but a one-quarter chance that AGI development will be available only to well-resourced actors, who will find it trivial to add on an alignment team and develop alignment. But then working on AGI risk might not be worthless, since we can work on increasing the chance that AGI development is only available to actors with alignment teams.

I claim that it isn't clear, from the survey results, whether the distribution of experts' probabilities for each step reflects something more like the God-rolls-a-die model, or different opinions about the default path of a thing we can intervene on. And if that's not clear, then it's not clear what to do with the distribution-over-probabilities from the main results. Probably they're a step forward in our collective understanding, but I don't think you can conclude from the high chances of low risk that there's a low value to working on risk mitigation.

I agree that geomean-of-odds performs better than geomean-of-probs!

I still think it has issues for converting your beliefs to actions, but I collected that discussion under a cousin comment here: https://forum.effectivealtruism.org/posts/Z7r83zrSXcis6ymKo/dissolving-ai-risk-parameter-uncertainty-in-ai-future?commentId=9LxG3WDa4QkLhT36r

An explicit case where I think it's important to arithmean over your subjective distribution of beliefs:

  • coin A is fair
  • coin B is either 2% heads or 98% heads, you don't know
  • you lose if either comes up tails.

So your p(win) is "either 1% or 49%".

I claim the FF should push the button that pays us $80 if we win and -$20 if we lose, and in general make action decisions consistent with a point estimate of 25%. (I'm ignoring here the opportunity to seek value of information, which could be significant!)

It's important not to use geomean-of-odds to produce your actions in this scenario; that pools to odds of roughly 0.099 (a probability of about 9%), and would imply you should avoid the +$80/-$20 button, which I claim is the wrong choice.
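A minimal sketch of that arithmetic (Python; the payoffs are the ones above):

```python
import math

# Coin A is fair; coin B is either 2%-heads or 98%-heads (50/50 which);
# we win only if both land heads -- so p(win) is "either 1% or 49%".
p_win_worlds = [0.5 * 0.02, 0.5 * 0.98]

arith = sum(p_win_worlds) / len(p_win_worlds)            # 0.25
odds = [p / (1 - p) for p in p_win_worlds]
geo_odds = math.prod(odds) ** (1 / len(odds))            # ~0.099 (odds)
geo_p = geo_odds / (1 + geo_odds)                        # ~0.09  (probability)

def ev(p, win=80, lose=20):
    """Expected value of the '+$80 if we win, -$20 if we lose' button."""
    return p * win - (1 - p) * lose

print(round(arith, 3), round(ev(arith), 2))   # 0.25  5.0    -> push the button
print(round(geo_p, 3), round(ev(geo_p), 2))   # 0.09  -11.03 -> would (I claim wrongly) skip it
```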

Thanks for clarifying "geomean of probabilities" versus "geomean of odds" elsethread. I agree that that resolves some (but not all) of my concerns with geomeaning.

I think the way in which I actually disagree with the Future Fund is more radical than simple means vs geometric mean of odds - I think they ought to stop putting so much emphasis on summary statistics altogether.

I agree with your pro-distribution position here, but I think you will be pleasantly surprised by how much reasoning over distributions goes into cost-benefit estimates at the Future Fund. This claim is based on nonpublic information, though, as those estimates have not yet been put up for public discussion. I will suggest, though, that it's not an accident that Leopold Aschenbrenner is talking with QURI about improvements to Squiggle: https://github.com/quantified-uncertainty/squiggle/discussions

So my subjective take is that if the true issue is "you should reason over distributions of core parameters", then in fact there's little disagreement between you and the FF judges (which is good!), but it all adds up to normality (which is bad for the claim "moving to reasoning over distributions should move your subjective probabilities").

If we're focusing on the Worldview Prize question as posed ("should these probability estimates change?"), then I think the geo-vs-arith difference is totally cruxy -- note that the arithmetic summary of your results (9.65%) is in line with the product of the baseline subjective probabilities for the prize (something like a 3% for loss-of-control x-risk before 2043; something like 9% before 2100).

I do think it's reasonable to critique the fact that those point probabilities are presented without any indication that the path of reasoning goes through reasoning over distributions, though. So I personally am happy with this post calling attention to distributional reasoning, since it's unclear in this case whether that is an update. I just don't expect it to win the prizes for changing estimates.


Because I do think distributional reasoning is important, though, I do want to zoom in on the arith-vs-geo question (which I think, on reflection, is subtler than the position I took in my top-level comment). Rather than being a minor detail, I think this is important because it influences whether greater uncertainty tends to raise or lower our "fair betting odds" (which, at the end of the day, are the numbers that matter for how the FF decides to spend money).

I agree with Jamie and you and Linch that when pooling forecasts, it's reasonable (maybe optimal? maybe not?) to use geomeans. So if you're pooling expert forecasts of {1:1000, 1:100, 1:10}, you might have a subjective belief of something like "1:100, but with a 'standard deviation' of 6.5x to either side". This is lower than the arithmean-pooled summary stats, and I think that's directionally right.

I think this is an importantly different question from "how should you act when your subjective belief is a distribution like that?" I think that if you have a subjective belief like "1%, but with a 'standard deviation' of 6.5x to either side", you should push a button that gives you $98.8 if you're right and loses $1.2 if you're wrong. In particular, I think you should take the arithmean over your subjective distribution of beliefs (here, ~1.4%) and take bets that are good relative to that number. This will lead to decision-relevant effective probabilities that are higher than geomean-pooled point estimates (for small probabilities).
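A minimal sketch of the pooling-versus-acting distinction (Python; the forecasts and the bet are the ones above -- the ~1.4% arithmean depends on exactly which subjective distribution you adopt, so I only compute the pooled geomean, the spread, and the bet's breakeven here):

```python
import math

# Expert forecasts expressed as odds: 1:1000, 1:100, 1:10.
odds = [1 / 1000, 1 / 100, 1 / 10]

# Pooling: the geometric mean of odds is 1:100 (~1%), with a geometric spread
# of roughly 6.5x to either side.
pooled = math.prod(odds) ** (1 / len(odds))
log_dev = (sum(math.log(o / pooled) ** 2 for o in odds) / len(odds)) ** 0.5
spread = math.exp(log_dev)

# Acting: the "$98.8 if right / -$1.2 if wrong" button breaks even at 1.2%, so
# whether to push it depends on the (arithmetic) mean of your belief
# distribution, not on the ~1% pooled point estimate.
breakeven = 1.2 / (98.8 + 1.2)

print(round(pooled, 4), round(spread, 2), round(breakeven, 4))   # 0.01 6.55 0.012
```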

If you're combining multiple such parameters multiplicatively, then the arith>geo effect compounds as you introduce uncertainty in more places -- if the quantity of interest is x*y, where x and y each had expert estimates of {1:1000, 1:100, 1:10} that we assume independent, then arithmean(x*y) is about twice geomean(x*y). Here's a quick Squiggle showing what I mean: https://www.squiggle-language.com/playground/#code=eNqrVirOyC8PLs3NTSyqVLIqKSpN1QELuaZkluQXwUQy8zJLMhNzggtLM9PTc1KDS4oy89KVrJQqFGwVcvLT8%2FKLchNzNIAsDQM9A0NNHQ0jfWPNOAM9U82YvJi8SqJUVQFVVShoKVQCsaGBQUyeUi0A3tIyEg%3D%3D

For this use-case (eg, "what bets should we make with our money"), I'd argue that you need to use a point estimate to decide what bets to make, and that you should make that point estimate by (1) geomean-pooling raw estimates of parameters, (2) reasoning over distributions of all parameters, then (3) taking arithmean of the resulting distribution-over-probabilities and (4) acting according to that mean probability.

In the case of the Worldview Prize, my interpretation is that the prize is described and judged in terms of (3), because that is the most directly valuable thing in terms of producing better (4)s.

It sounds like the headline claim is that (A) we are 33.2% to live in a world where the risk of loss-of-control catastrophe is <1%, and 7.6% to live in a world where the risk is >35%, and a whole distribution of values between, and (B) that it follows from A that the correct subjective probability of loss-of-control catastrophe is given by the geometric mean of the risk, over possible worlds.

The ‘headline’ result from this analysis is that the geometric mean of all synthetic forecasts of the future is that the Community’s current best guess for the risk of AI catastrophe due to an out-of-control AGI is around 1.6%. You could argue the toss about whether this means that the most reliable ‘fair betting odds’ are 1.6% or not (Future Fund are slightly unclear about whether they’d bet on simple mean, median etc and both of these figures are higher than the geometric mean[9]).

I want to argue that the geometric mean is not an appropriate way of aggregating probabilities across different "worlds we might live in" into a subjective probability (as requested by the prize). This argument doesn't touch on the essay's main argument in favor of considering distributions, but may move the headline subjective probability that it suggests to 9.65%, effectively outside the range of opinion-change prizes, so I thought it worth clarifying in case I misunderstand.


Consider an experiment where you flip a fair coin A. If A is heads you flip a 99%-heads coin B; if A is tails you flip a 1%-heads coin B. We're interested in forming a subjective probability that B is heads.

The answer I find intuitive for p(B=heads) is 50%, which is achieved by taking the arithmetic average over worlds. The geometric average over worlds gives 9.9% instead, which doesn't seem like "fair betting odds" for B being heads under any natural interpretation of those words. What's worse, the geometric-mean methodology suggests a 9.9% subjective probability of tails, and then p(H)+p(T) does not add to 1.

(If you're willing to accept probabilities that are 0 and 1, then an even starker experiment is given by a 1% chance to end up in a world with 0% risk and a 99% chance to end up in a world with 100% risk -- the geometric mean is 0.)
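A quick check of those numbers (Python):

```python
import math

# The two possible worlds for coin B in the experiment above.
p_heads = [0.99, 0.01]

arith_heads = sum(p_heads) / 2                               # 0.5
geo_heads = math.sqrt(p_heads[0] * p_heads[1])               # ~0.099
geo_tails = math.sqrt((1 - p_heads[0]) * (1 - p_heads[1]))   # ~0.099

print(arith_heads)                      # 0.5 -- and P(heads) + P(tails) = 1
print(round(geo_heads + geo_tails, 3))  # 0.199 -- the geometric means don't sum to 1
```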


Footnote 9 of the post suggests that the operative meaning of "fair betting odds" is sufficiently undefined by the prize announcement that perhaps it refers to a Brier-score bet, but I believe it is clear from the prize announcement that a straightforward win/lose bet at the stated odds is the kind under consideration. The prize announcement's footnote 1 says "We will pose many of these beliefs in terms of subjective probabilities, which represent betting odds that we consider fair in the sense that we'd be roughly indifferent between betting in favor of the relevant propositions at those odds or betting against them."

I don't know of a natural meaning of "bet in favor of P at 97:3 odds" other than "bet to win $97N if P and lose $3N if not P", which the bettor should be indifferent about if p(P) = 3/(97+3) = 3%. Is there some other bet that you believe "bet in favor of P at odds of X:Y" could mean? In particular, is there a meaning which would support forming odds (and subjective probability) according to a geometric mean over worlds?
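Spelled out, the indifference condition I have in mind for that bet, writing p for the subjective probability of P, is:

$$p \cdot \$97N \;-\; (1 - p) \cdot \$3N \;=\; 0 \quad\Longleftrightarrow\quad p \;=\; \frac{3}{97 + 3} \;=\; 3\%.$$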

(I work at the FTX Foundation, but have no connection to the prizes or their judging, and my question-asking here is as a EA Forum user, not in any capacity connected to the prizes.)

To qualify, please publish your work (or publish a post linking to it) on the Effective Altruism Forum, the AI Alignment Forum, or LessWrong with a "Future Fund worldview prize" tag. You can also participate in the contest by publishing your submission somewhere else (e.g. arXiv or your blog) and filling out this submission form. We will then linkpost/crosspost to your submission on the EA Forum.

I think it would be nicer if you say your P(Doom|AGI in 2070) instead of P(Doom|AGI by 2070), because the second one implicitly takes into account your timelines.

I disagree. (At least, if defining "nicer" as "more useful to the stated goals for the prizes".)

As an interested observer, I think it's an advantage to take timelines into account. Specifically, I think the most compelling way to argue for a particular P(Catastrophe|AGI by 20__) to the FF prize evaluators will be:

  • states and argues for a timelines distribution in terms of P(AGI in 20__) for a continuous range of 20__s
  • states and argues for a conditional-catastrophe function in terms of P(Catastrophe|AGI in 20__) over the range
  • integrates the product over the range to get a P(Catastrophe|AGI by 20__) (a formula sketch follows this list)
  • argues that the final number isn't excessively sensitive to small shifts in the timelines distribution or the catastrophe-conditional-on-year function.
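In symbols, the integration step above is roughly the following (a sketch, writing Y for the cutoff year; the sum is normalized by the probability that AGI arrives by Y at all):

$$P(\text{Catastrophe} \mid \text{AGI by } Y) \;=\; \frac{\sum_{t \le Y} P(\text{AGI in } t)\, P(\text{Catastrophe} \mid \text{AGI in } t)}{\sum_{t \le Y} P(\text{AGI in } t)}$$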

An argument which does all of this successfully is significantly more useful to informing the FF's actions than an argument which only defends a single P(Catastrophe|20__).

I do agree that it would be nice to have the years line up, but as above I do expect a winning argument for P(Catastrophe|AGI by 2070) to more-or-less explicitly inform a P(Catastrophe|AGI by 2043), so I don't expect a huge loss.

(Not speaking for the prizes organizers/evaluators, just for myself.)

are there candidate interventions that only require mobile equipment and not (semi-)permanent changes to buildings?

Fortunately, yes. Within-room UVC (upper-room 254nm and lower-room 222nm) can be provided by mobile lights on tripod stands.

This is what the JHU Center for Health Security did for their IAQ conference last month. (Pictures at https://twitter.com/DrNikkiTeran/status/1567864920087138304 )
