Thought-provoking post; thanks for sharing!
A bit of a tangential point, but I'm curious, because it's something I've also considered:
> putting 10% of my paycheck directly in a second account which was exclusively for charity
What do you do with investment income? It's pretty intuitive that if you're "investing to give" and you have $9,000 of personal savings and $1,000 of donation-investments and they both go up 10% over a year, that you should have $9,900 of personal savings and $1,100 of donation-investments. But what would you (or do you) do dif...
I'd argue that you need to use a point estimate to decide what bets to make, and that you should make that point estimate by (1) geomean-pooling raw estimates of parameters, (2) reasoning over distributions of all parameters, then (3) taking arithmean of the resulting distribution-over-probabilities and (4) acting according to that mean probability.
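A minimal sketch of that four-step pipeline, with purely illustrative numbers of my own (the parameter values and probability samples are made up, not anyone's actual estimates):

```python
import math

def geomean(xs):
    """Geometric mean: pool raw parameter estimates multiplicatively."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# (1) Geomean-pool two raw estimates of some underlying parameter
#     (hypothetical values, e.g. years until a capability threshold).
pooled_param = geomean([10.0, 40.0])  # = sqrt(10 * 40) = 20

# (2)-(3) Suppose reasoning over the remaining parameter uncertainty
#         yields this distribution over outcome probabilities
#         (again, made-up samples).
prob_samples = [0.01, 0.05, 0.20, 0.50]

# (4) Act according to the arithmetic mean of that distribution.
point_estimate = sum(prob_samples) / len(prob_samples)  # = 0.19
```

Note the asymmetry: the geometric mean is applied to the raw parameters in step (1), but it's the arithmetic mean that converts the final distribution-over-probabilities into an action-guiding point estimate in step (4).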
...I think "act according to that mean probability" is wrong for many important decisions you might want to take - analogous to buying a lot of trousers with 1.97 legs in my example in the essay. No additional
I agree that geomean-of-odds performs better than geomean-of-probs!
I still think it has issues for converting your beliefs to actions, but I collected that discussion under a cousin comment here: https://forum.effectivealtruism.org/posts/Z7r83zrSXcis6ymKo/dissolving-ai-risk-parameter-uncertainty-in-ai-future?commentId=9LxG3WDa4QkLhT36r
An explicit case where I think it's important to arithmean over your subjective distribution of beliefs:
So your p(win) is "either 1% or 49%".
I claim the FF should push the button that pays us $80 if win, -$20 if lose, and in general make action decisions consistent with a point estimate of 25%. (I'm ignoring here the opportunity to seek value of information, which could be significant!).
It's important not to use geomean-of-odds to produce your action...
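To make the arithmetic in this example concrete (the bet payoffs are the ones above; everything else follows from them):

```python
# The two candidate worlds: p(win) is either 1% or 49%.
p_low, p_high = 0.01, 0.49

# Arithmetic mean over the subjective distribution of beliefs.
p_mean = (p_low + p_high) / 2  # 0.25

# EV of the "+$80 if win, -$20 if lose" button at that point estimate:
# 0.25 * 80 - 0.75 * 20 = +5, so you press it.
ev_at_mean = p_mean * 80 - (1 - p_mean) * 20

# Geomean-of-odds pooling of the same two worlds gives a much lower p...
pooled_odds = ((p_low / (1 - p_low)) * (p_high / (1 - p_high))) ** 0.5
p_geo = pooled_odds / (1 + pooled_odds)  # roughly 0.09

# ...at which the same bet looks negative-EV, so you'd wrongly decline
# a bet that is positive-EV under your actual distribution of beliefs.
ev_at_geo = p_geo * 80 - (1 - p_geo) * 20
```

The point of the sketch is just that the two pooling rules recommend opposite actions on the same button, and only the arithmetic-mean recommendation matches the expected value computed world-by-world.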
Thanks for clarifying "geomean of probabilities" versus "geomean of odds" elsethread. I agree that that resolves some (but not all) of my concerns with geomeaning.
I think the way in which I actually disagree with the Future Fund is more radical than simple means vs geometric mean of odds - I think they ought to stop putting so much emphasis on summary statistics altogether.
I agree with your pro-distribution position here, but I think you will be pleasantly surprised by how much reasoning over distributions goes into cost-benefit estimates at the Future ...
It sounds like the headline claim is that (A) we are 33.2% to live in a world where the risk of loss-of-control catastrophe is <1%, and 7.6% to live in a world where the risk is >35%, and a whole distribution of values between, and (B) that it follows from A that the correct subjective probability of loss-of-control catastrophe is given by the geometric mean of the risk, over possible worlds.
...The ‘headline’ result from this analysis is that the geometric mean of all synthetic forecasts of the future is that the Community’s current best guess for the
Hmm, I accidentally deleted a comment earlier, but roughly:
I think there are decent theoretical and empirical arguments for having a prior where you should be using geometric mean of odds over arithmetic mean of probabilities when aggregating forecasts. Jaime has a primer here. However, there was some pushback in the comments, especially by Toby Ord. My general takeaway is that geometric mean of odds is a good default when aggregating forecasts from epistemic peers, but there are a number of exceptions where some other aggregation scheme is better.
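For readers unfamiliar with the two schemes, here is a quick side-by-side on made-up forecasts (the numbers are mine, purely illustrative):

```python
import math

def arith_mean_probs(ps):
    """Aggregate forecasts by averaging the probabilities directly."""
    return sum(ps) / len(ps)

def geo_mean_odds(ps):
    """Aggregate forecasts via the geometric mean of their odds."""
    log_odds = [math.log(p / (1 - p)) for p in ps]
    pooled = math.exp(sum(log_odds) / len(log_odds))
    return pooled / (1 + pooled)

forecasts = [0.02, 0.10, 0.40]      # three hypothetical epistemic peers
arith = arith_mean_probs(forecasts)  # about 0.17
geo = geo_mean_odds(forecasts)       # about 0.10 -- lower, because the
                                     # extreme low forecast pulls harder
```

The characteristic behavior: geomean-of-odds gives more weight to confident (extreme) forecasts, which is part of why it tends to outperform for aggregating peers' beliefs while being risky for directly producing actions.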
Arguably Froolo...
To qualify, please publish your work (or publish a post linking to it) on the Effective Altruism Forum, the AI Alignment Forum, or LessWrong with a "Future Fund worldview prize" tag. You can also participate in the contest by publishing your submission somewhere else (e.g. arXiv or your blog) and filling out this submission form. We will then linkpost/crosspost your submission to the EA Forum.
I think it would be nicer if you gave your P(Doom|AGI in 2070) instead of P(Doom|AGI by 2070), because the second one implicitly takes your timelines into account.
I disagree. (At least, if defining "nicer" as "more useful to the stated goals for the prizes".)
As an interested observer, I think it's an advantage to take timelines into account. Specifically, I think the most compelling way to argue for a particular P(Catastrophe|AGI by 20__) to the FF prize evaluators will be:
are there candidate interventions that only require mobile equipment and not (semi-)permanent changes to buildings?
Fortunately, yes. Within-room UVC (upper-room 254nm and lower-room 222nm) can be provided by mobile lights on tripod stands.
This is what the JHU Center for Health Security did for their IAQ conference last month. (Pictures at https://twitter.com/DrNikkiTeran/status/1567864920087138304 )
(Speaking for myself and not my employer.)
US tax law requires that US citizens pay income tax and capital gains tax, regardless of their physical/legal residency. Some limited deductions apply, but don't change the basic story.
Are you proposing to bite the bullet on the $100/hr card charge scenario by the $50/hr staffer (paid "$47.5/hr plus perks" at the EA org)?
"Market rate" of $50/hr for labor netting $500/hr of value seems well within the distribution I'd expect (not to mention that EA orgs might value that work even more than any org in industry ever will, perhaps because we're counting the consumer surplus and un-capturable externalities and the industry employer won't).
(I'm a trader at a NY-based quant firm, and work on education and training for new traders, among other things.)
I'm nearly certain that your hiring manager (or anyone involved in hiring you) would be happy to receive literally this question from you, and would have advice specifically tailored to the firm you're joining.
The firm has a very strong interest in your success (likely more so than anyone you've interacted with in college), and they've already committed to spending substantial resources on helping you prepare for a successful career as a ...
Finally, I expect that my earmarking of grant funds will be partially funged within the GFI organization, and I think this is inevitable, basically fine, and in fact weakly good.
I received a private request (from an early reviewer of this post) to expand on my thoughts here, so a few more words:
When making decisions under collective uncertainty, aggregating information is a hard problem (citation not required). I think that my relative opinions here push the world towards a more efficient allocation, but I recognize that my opinions about GFI are i...
Makes total sense not to invest in the charitable side -- I'm generally of a similar mind.[1] The reason I'm curious is that "consider it as two separate accounts" is the most compelling argument I've seen against tithing investment gains. (The argument is, basically, that if both accounts were fully invested, then tithing gains from the personal account to the charity account leads to a total 4:1 ratio between them as withdrawal_time -> ∞, not a 9:1 ratio.[2] Then, why does distribution out of the charity account affect the 'right' additional...