Previously worked as Chief of Staff at the Institute for Law & AI (formerly Legal Priorities Project) and as COO at the Center on Long-Term Risk (formerly Effective Altruism Foundation). I also co-founded EA Munich in 2015. I have a PhD in Computational Science from TU Munich.
Thanks so much for your comment!
Actually, someone else brought up this point separately, so I agree there's more to say here. I'd love to dig deeper into this question and possibly write a paper on the topic (e.g. for this collection). If you have literature to recommend (whether by you or others), please send it my way. And also let me know if you'd like to get involved in such a project. :)
Thank you for your comment, Tim!
Indeed, the choice of e is arbitrary and used for illustrative purposes. The base 6 is simply the value at which the total burden of CH overtakes that of migraines, so it's not derived from first principles either. This footnote is relevant:
The resulting scaling as e^x would mean that the 0–10 scale would have to span 4 orders of magnitude. While Gómez-Emilsson & Percy (2023) suggest the scale spans “at least two orders of magnitude”, private communication with the authors indicates their central estimates might be closer to 4 orders of magnitude, with uncertainty ranging from 2 to 8 OOMs.
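To make the footnote's arithmetic explicit (my own sketch, not code from the paper): if intensity grows as base^rating, a 0–10 scale spans 10 · log10(base) orders of magnitude.

```python
import math

def ooms_spanned(base: float, scale_max: float = 10.0) -> float:
    """Orders of magnitude spanned by a 0-to-scale_max rating scale,
    assuming intensity grows as base ** rating."""
    return scale_max * math.log10(base)

print(round(ooms_spanned(math.e), 1))  # 4.3 -> the ~4 OOMs in the footnote
print(round(ooms_spanned(6), 1))       # 7.8 -> what base 6 would imply
```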
The paper cited also mentions the possibility of a linear relationship at lower pain intensities and an exponential relationship at higher intensities (a "kinked" distribution), highlighting that there are more possibilities than a uniform exponential increase.
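For illustration, a kinked mapping could look something like this (a minimal sketch; the kink location and base are placeholders I picked, not values from the paper):

```python
import math

def kinked_intensity(rating: float, kink: float = 5.0, base: float = math.e) -> float:
    """Intensity grows linearly up to the kink, then exponentially above it
    (continuous at the kink by construction)."""
    if rating <= kink:
        return rating
    return kink * base ** (rating - kink)

for r in [2, 5, 7, 10]:
    print(r, round(kinked_intensity(r), 1))  # 2.0, 5.0, 36.9, 742.1
```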
I personally don't have a good intuition for what the base should be but might do more work on this specific question.
I'm also not sure what the optimal mapping between the Russell and Torelli & Manzoni intensity scales is, especially given that the two studies used different methodologies. I don't think there's a single correct answer, so that was my best guess (though I could also imagine "Very slight" being more intense than a 1.5). Do let me know if you have other suggestions! (Or feel free to fork the code and play around with the parameters. :) )
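In case it's useful for anyone playing with the parameters, the core sensitivity check is easy to reproduce. Here's a rough sketch (the time-at-intensity distributions below are placeholders I made up for illustration, not the Russell or Torelli & Manzoni data):

```python
import numpy as np

# Placeholder fractions of attack-time spent at each 0-10 intensity level
# (made-up numbers, NOT the actual study data).
ch_time_at_level = np.array([0, 0, 0, 0, 0, .05, .05, .1, .2, .3, .3])
mig_time_at_level = np.array([0, 0, .1, .2, .3, .2, .1, .05, .05, 0, 0])

levels = np.arange(11)
for base in [2, 6, 10]:
    ch = np.sum(ch_time_at_level * base ** levels)
    mig = np.sum(mig_time_at_level * base ** levels)
    print(f"base={base}: CH/migraine burden ratio per unit time ~ {ch / mig:.1f}")
```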
Thanks! It'd be great if someone (maybe myself, but ideally someone with more experience in the field) published a summary of the existing literature (more research here). Having spent so many hours reading up on the topic these past few months, I'm optimistic about the efficacy of these treatments. I think funding and/or running a large-scale RCT, in particular for N,N-DMT (in a country where it is legal), would be a great use of EA money/time.
I think the EA community has shown incredible initiative in tackling major global health issues, making a lot of progress on problems such as malaria (which causes 600,000 deaths/year) and lead poisoning (which causes 1.5M deaths/year), among so many others. These efforts really show our ability to mobilize resources and drive change when we identify pressing problems.
My hope is that we can direct a similar amount of attention to helping the ~3 million people worldwide who have this terrible condition. Even if my quantitative estimates of the burden of pain were off by an order of magnitude, the situation would still be tragic (and, as @algekalipso has pointed out, somewhat analogous to the period when anesthesia had already been invented but not yet adopted, given the promise of low-dose psychedelics[1]). I think it would be an incredible success story for our community if we managed to eliminate (or at least significantly reduce) this source of enormous suffering. If you'd like to contribute in any way, whether with time or funding, please get in touch!
[1] Coincidentally, when I asked Claude to estimate the lifetime prevalence of undergoing major surgery without general anesthesia before it was invented, its initial guess was surprisingly similar to the lifetime prevalence of cluster headaches: 0.2%.
I think a corollary of the first point is that we can learn a lot about alignment by looking at humans who seem unusually aligned with human values (or, more generally, with the interests of all conscious beings), e.g. highly attained meditators with high integrity, altruistic motivations, rationality skills, and a healthy balance of systematizer and empathizer mindsets. From phenomenological reports, their subagentic structures seem quite unlike anything most of us experience day to day. That, plus a few core philosophical assumptions, can get you a really long way in deducing e.g. Anthropic's constitutional AI principles from first principles.
Thanks for sharing, Deborah! I'll add these resources to my list of interventions. :)