All of Ross Rheingans-Yoo's Comments + Replies

Makes total sense not to invest in the charitable side -- I'm generally of a similar mind.[1] The reason I'm curious is that "consider it as two separate accounts" is the most compelling argument I've seen against tithing investment gains. (The argument is, basically, that if both accounts were fully invested, then tithing gains from the personal account to the charity account leads to a total 4:1 ratio between them as withdrawal_time -> ∞, not a 9:1 ratio.[2] Then, why does distribution out of the charity account affect the 'right' additional... (read more)
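(A minimal sketch of the 4:1 arithmetic, under one reading of the truncated argument above: a 90/10 personal/charity split, both accounts fully invested at the same return, and a 10% tithe taken out of the personal account's realized gains at withdrawal. The model choice is my assumption; the comment doesn't spell it out.)

```python
def split_after_growth(growth_factor, personal=0.9, charity=0.1, tithe=0.1):
    # Both accounts grow by the same factor; then a share of the personal
    # account's gains is tithed over to the charity account.
    p = personal * growth_factor
    c = charity * growth_factor
    gains_tithe = tithe * personal * (growth_factor - 1.0)
    return p - gains_tithe, c + gains_tithe

for g in [1, 2, 10, 100, 1000]:
    p, c = split_after_growth(g)
    print(f"growth {g:>4}x -> personal:charity = {p / c:.2f}:1")
# The split tends to 0.81:0.19 (~4.3:1) as growth -> infinity, not 9:1.
```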

3
Davidmanheim
5mo
This is a super interesting point, and I'm completely unsure what it should imply for what I actually do, especially since returns are uncertain and prepaying at a discount under possible bankruptcy / extinction risk at an uncertain rate is hard to reason about. All of which (probably unfortunately) means I'm just going to keep doing the naive thing I've done so far.
  1. Thought-provoking post; thanks for sharing!

  2. A bit of a tangential point, but I'm curious, because it's something I've also considered:

putting 10% of my paycheck directly in a second account which was exclusively for charity

What do you do with investment income? It's pretty intuitive that if you're "investing to give" and you have $9,000 of personal savings and $1,000 of donation-investments and they both go up 10% over a year, you should have $9,900 of personal savings and $1,100 of donation-investments. But what would you (or do you) do dif... (read more)

3
Davidmanheim
5mo
That's a really interesting question. I don't invest my charitable giving, though I do tithe my investment income once gains are realized. My personal best guess is that in non-extinction scenarios, humanity's wealth increases in the long term, and opportunities to do good should in general become more expensive, so it's better to put money towards the present.

I'd argue that you need to use a point estimate to decide what bets to make, and that you should make that point estimate by (1) geomean-pooling raw estimates of parameters, (2) reasoning over distributions of all parameters, then (3) taking arithmean of the resulting distribution-over-probabilities and (4) acting according to that mean probability.

I think "act according to that mean probability" is wrong for many important decisions you might want to take - analogous to buying a lot of trousers with 1.97 legs in my example in the essay. No additional

... (read more)
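(For concreteness, a minimal sketch of the four-step recipe quoted above. The parameter names, toy estimates, and lognormal spread are all illustrative assumptions of mine, not anything proposed in the thread.)

```python
import numpy as np

rng = np.random.default_rng(0)

def geomean(xs):
    return float(np.exp(np.mean(np.log(xs))))

# (1) Geomean-pool raw point estimates of each parameter across forecasters.
p_agi = geomean([0.3, 0.6, 0.9])              # toy estimates of P(AGI by year X)
p_doom_given_agi = geomean([0.05, 0.2, 0.5])  # toy estimates of P(doom | AGI)

# (2) Reason over distributions of all parameters (toy lognormal spread
#     around each pooled estimate, clipped to [0, 1]).
def around(p, sigma=0.5, n=100_000):
    return np.clip(p * rng.lognormal(0.0, sigma, n), 0.0, 1.0)

p_catastrophe = around(p_agi) * around(p_doom_given_agi)

# (3) Take the arithmetic mean of the resulting distribution-over-probabilities...
point_estimate = p_catastrophe.mean()

# (4) ...and act according to that mean probability.
print(f"point estimate: {point_estimate:.3f}")
```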

I agree that geomean-of-odds performs better than geomean-of-probs!

I still think it has issues for converting your beliefs to actions, but I collected that discussion under a cousin comment here: https://forum.effectivealtruism.org/posts/Z7r83zrSXcis6ymKo/dissolving-ai-risk-parameter-uncertainty-in-ai-future?commentId=9LxG3WDa4QkLhT36r

An explicit case where I think it's important to arithmean over your subjective distribution of beliefs:

  • coin A is fair
  • coin B is either 2% heads or 98% heads, you don't know
  • you lose if either comes up tails.

So your p(win) is "either 1% or 49%".

I claim the FF should push the button that pays us $80 if win, -$20 if lose, and in general make action decisions consistent with a point estimate of 25%. (I'm ignoring here the opportunity to seek value of information, which could be significant!).

It's important not to use geomean-of-odds to produce your actions in this scenario; that gives you odds of ~0.0985 (a probability of ~9%), and would imply you should avoid the +$80;-$20 button, which I claim is the wrong choice.
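(The arithmetic behind those two recommendations, spelled out; nothing here goes beyond the example above.)

```python
# Two equally likely worlds: p(win) is either 1% or 49%.
p_worlds = [0.01, 0.49]

# Arithmetic mean of probabilities: the action-relevant point estimate.
p_arith = sum(p_worlds) / len(p_worlds)                      # 0.25

# Geometric mean of odds, converted back to a probability.
odds = [p / (1 - p) for p in p_worlds]
odds_geo = (odds[0] * odds[1]) ** 0.5                        # ~0.0985
p_geo = odds_geo / (1 + odds_geo)                            # ~0.09

# Expected value of the +$80 / -$20 button under each point estimate.
ev = lambda p: p * 80 - (1 - p) * 20
print(f"arithmean: p={p_arith:.3f}, EV={ev(p_arith):+.2f}")  # +5.00 -> push
print(f"geo-odds : p={p_geo:.3f}, EV={ev(p_geo):+.2f}")      # -11.03 -> avoid
```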

Thanks for clarifying "geomean of probabilities" versus "geomean of odds" elsethread. I agree that that resolves some (but not all) of my concerns with geomeaning.

I think the way in which I actually disagree with the Future Fund is more radical than simple means vs geometric mean of odds - I think they ought to stop putting so much emphasis on summary statistics altogether.

I agree with your pro-distribution position here, but I think you will be pleasantly surprised by how much reasoning over distributions goes into cost-benefit estimates at the Future ... (read more)

2
Froolow
2y
I agree that the arith-vs-geo question is basically the crux when it comes to whether this essay should move FF's 'fair betting probabilities' - it sounds like everyone is pretty happy with the point about distributions, and I'm really pleased about that because it was the main point I was trying to get across. I'm even more pleased that there is background work going on in the analysis of uncertainty space, because that's an area where public statements by AI Risk organisations have sometimes lagged behind the state of the art in other risk management applications.

With respect to the crux, I hate to say it - because I'd love to be able to make as robust a claim for the prize as possible - but I'm not sure there is a principled reason for using geomean over arithmean for this application (or vice versa). The way I view it, they are both just snapshots of what is 'really' going on, which is the full distribution of possible outcomes given in the graphs / model. By analogy, I would be very suspicious of someone who always argued the arithmean would be a better estimate of central tendency than the median for every dataset / use case!

I agree with you that the problem of which is best for this particular dataset / use case is subtle, and I think I would characterise it as a question of whether my manipulations of people's forecasts have retained some essential 'forecast-y' characteristic, which means geomean is more appropriate for various features it has, or whether they have been processed into having some sort of 'outcome-y' characteristic, in which case arithmean is more appropriate.

I take your point below in the coin example and the obvious superiority of arithmeans for that application, but my interpretation is that the FF didn't intend for the 'fair betting odds' position to limit discussion about alternate ways to think about probabilities ("Applicants need not agree with or use our same conception of probability"). However, to be absolutely clear, even if ge

It sounds like the headline claim is that (A) we are 33.2% to live in a world where the risk of loss-of-control catastrophe is <1%, and 7.6% to live in a world where the risk is >35%, and a whole distribution of values between, and (B) that it follows from A that the correct subjective probability of loss-of-control catastrophe is given by the geometric mean of the risk, over possible worlds.

The ‘headline’ result from this analysis is that the geometric mean of all synthetic forecasts of the future is that the Community’s current best guess for the

... (read more)
3
Froolow
2y
I think there are good reasons for preferring geometric mean of odds to simple mean when presenting data of this type, but not good enough that I'd take to the barricades over them. Linch (below) links to the same post I do in giving my reasons to believe this. Overall, however, this is an essay about distributions rather than point estimates, so if your main objection is to the summary statistic I used then I think we agree on the material points, but have a disagreement about how the work should be presented.

On the point about betting odds, I note that the contest announcement also states "Applicants need not agree with or use our same conception of probability". I think the way in which I actually disagree with the Future Fund is more radical than simple means vs geometric mean of odds - I think they ought to stop putting so much emphasis on summary statistics altogether.

Hmm I accidentally deleted a comment earlier, but roughly:

I think there are decent theoretical and empirical arguments for having a prior where you should use geometric mean of odds over arithmetic mean of probabilities when aggregating forecasts. Jaime has a primer here. However, there was some pushback in the comments, especially by Toby Ord. My general takeaway is that geometric mean of odds is a good default when aggregating forecasts by epistemic peers, but there are a number of exceptions where some other aggregation schema is better (a toy comparison is sketched below).

Arguably Froolo... (read more)
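(A toy comparison of the two pooling rules discussed above; the two forecasts are hypothetical numbers of mine, purely to show the mechanics.)

```python
# Pooling two epistemic peers' forecasts of the same event (toy numbers).
forecasts = [0.10, 0.40]

to_odds = lambda p: p / (1 - p)
to_prob = lambda o: o / (1 + o)

# Arithmetic mean of probabilities:
arith = sum(forecasts) / len(forecasts)                       # 0.250

# Geometric mean of odds (the suggested default for pooling peers):
geo_odds = (to_odds(forecasts[0]) * to_odds(forecasts[1])) ** 0.5
pooled = to_prob(geo_odds)                                    # ~0.214

print(f"arithmean of probs: {arith:.3f}")
print(f"geomean of odds   : {pooled:.3f}")
```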

To qualify, please publish your work (or publish a post linking to it) on the Effective Altruism Forum, the AI Alignment Forum, or LessWrong with a "Future Fund worldview prize" tag. You can also participate in the contest by publishing your submission somewhere else (e.g. arXiv or your blog) and filling out this submission form. We will then linkpost/crosspost to your submission on the EA Forum.

1
cveres
2y
But it is not possible to tag the post with "Future Fund worldview prize". It seems to me that only existing tags can be used.

I think it would be nicer if you asked for P(Doom|AGI in 2070) instead of P(Doom|AGI by 2070), because the second one implicitly takes your timelines into account.

I disagree. (At least, if defining "nicer" as "more useful to the stated goals for the prizes".)

As an interested observer, I think it's an advantage to take timelines into account. Specifically, I think the most compelling way to argue for a particular P(Catastrophe|AGI by 20__) to the FF prize evaluators will be one that:

  • states and argues for a timelines distribution in terms of P(AGI in 20__) for a
... (read more)
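(One way the in-year/by-year conversion could be made concrete, sketched with hypothetical numbers; the per-year figures below are my assumptions, not anything from the thread.)

```python
# Hypothetical timelines distribution and per-year conditional risks.
p_agi_in_year = {2030: 0.10, 2050: 0.20, 2070: 0.15}   # P(AGI arrives around year y)
p_doom_in_year = {2030: 0.30, 2050: 0.15, 2070: 0.10}  # P(Doom | AGI in year y)

# P(Doom | AGI by 2070) is the timelines-weighted average of the per-year risks:
# sum_y P(AGI in y) * P(Doom | AGI in y) / P(AGI by 2070)
p_agi_by_2070 = sum(p_agi_in_year.values())
p_doom_by_2070 = sum(
    p_agi_in_year[y] * p_doom_in_year[y] for y in p_agi_in_year
) / p_agi_by_2070

print(f"P(Doom | AGI by 2070) = {p_doom_by_2070:.3f}")  # ~0.167 with these numbers
```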

are there candidate interventions that only require mobile equipment and not (semi-)permanent changes to buildings?

Fortunately, yes. Within-room UVC (upper-room 254nm and lower-room 222nm) can be provided by mobile lights on tripod stands.

This is what the JHU Center for Health Security did for their IAQ conference last month. (Pictures at https://twitter.com/DrNikkiTeran/status/1567864920087138304 )

(Speaking for myself and not my employer.)

US tax law requires that US citizens pay income tax and capital gains tax, regardless of their physical/legal residency. Some limited deductions apply, but don't change the basic story.

Are you proposing to bite the bullet on the $100/hr card charge scenario by the $50/hr staffer (paid "$47.5/hr plus perks" at the EA org)?

"Market rate" of $50/hr for labor netting $500/hr of value seems well within the distribution I'd expect (not to mention that EA orgs might value that work even more than any org in industry ever will, perhaps because we're counting the consumer surplus and un-capturable externalities and the industry employer won't).

7
Hauke Hillebrandt
2y
Great question - it prompted me to think more about this problem. I maintain that the most elegant solution might be for EA orgs to pay slightly below market rate (with a progressive element). But I'm quite uncertain about this and I'd love for people to think more about optimal compensation at EA orgs. Some more thoughts on this:

  • I very much agree with the central argument made here that we should not have EAs live with a poverty mindset and sweat the small stuff. I think it's a very big problem that creates a lot of lost utility. I also think a behavioral economics angle might make sense here (many people might irrationally be too frugal to increase their productivity).
  • My point was not about the absolute level of pay for a given position, which maybe should be higher. Concretely, we can still pay $100/h for an office assistant, but this will inevitably attract better candidates. This should take care of a lot of 'card charge scenarios' (e.g. I saw a job ad for a high-impact PA at ~$30/h recently, and an ACE CEO role for less than $100k/y, which seems low).
  • It's also not about the absolute level of funding for orgs, which should maybe also be higher. In other words, we might want to hire 2 office assistants for 40h/week rather than 1 person for 80h/week, especially at lower salary levels. This way, a person on $50/h or $100k/y can deal w/ a $100 card charge, but only after a 40h work week, where they can add 10h of life admin that is worth more than their salary. This is theoretically equivalent to someone doing part-time EtG when it's above their salary level (with incentives neatly aligned, i.e. they'll know best what life admin to outsource). At scale, the advantages of division of labor from focusing more on one's job might not outweigh diminishing returns to increasing hours spent at the office.
  • Note that you have card charge-like scenarios even if you pay above market rates. But even if you only pay $99k/y for people whose market rate is $100k/y, this wi

(I'm a trader at a NY-based quant firm, and work on education and training for new traders, among other things.)

I'm nearly certain that your hiring manager (or anyone involved in hiring you) would be happy to receive literally this question from you, and would have advice specifically tailored to the firm you're joining.

The firm has a very strong interest in your success (likely more so than anyone you've interacted with in college), and they've already committed to spending substantial resources to helping you prepare for a successful career as a ... (read more)

Finally, I expect that my earmarking of grant funds will be partially funged within the GFI organization, and I think this is inevitable, basically fine, and in fact weakly good.

I received a private request (from an early reviewer of this post) to expand on my thoughts here, so a few more words:

When making decisions under collective uncertainty, aggregating information is a hard problem (citation not required). I think that my relative opinions here push the world towards a more efficient allocation, but I recognize that my opinions about GFI are i... (read more)