
algekalipso

1188 karma · Joined

Bio

Consciousness researcher and co-founder of the Qualia Research Institute. I blog at qualiacomputing.com

Core interests span measuring emotional valence objectively, formal models of phenomenal space and time, the importance of phenomenal binding, models of intelligence based on qualia, and neurotechnology.

Comments
47

I just want to comment in support of this: it's hard to realize how big of a "force multiplier" student groups can be. It's easy to dismiss their influence from the inside, when you're in university with everything else going on, and it's hard to see why they matter from the outside. But in reality, these hubs are really powerful incubators that generate life-long connections. Forget even just the local effects: the credibility they give to ideas, the ability to invite speakers to your university to give talks (which tend to be win-win-win situations), and the direct intellectual output they generate. The true, oversized value prop of these groups is that they create pockets of value-aligned, high-agency kids who feel empowered to follow their intellectual and moral instincts as a community. I strongly encourage you to take the time and energy to make them happen and to keep them going. :-)

Thank you! I will get back to you. My honest reaction to this comment: "AAAAAHHHH damn, this should be so incredibly obvious!! where to even start?" But I recognize I'm deep into the inside view of log scales, phenomenology visualization, valence quantification, etc. What seems mind-numbingly, foot-stompingly obvious to me might not be to others. Do I say "PLEASE BE CREATIVE", or should I write a treatise expanding on every point? I'll do the latter. And I apologize for my candid reaction comment XD Seriously, I appreciate the comment. It's just... ahhhhhh!

[To be clear, you raise very good points - I'm just trying to do the prosocial thing and communicate the large "inference gap" that currently exists on this topic - I'll do better! Will respond object level. Cheers!]

Thank you! This is a genuinely good question. (Note: I answered via voice and then edited the transcript below with Chat - I can circle back if the style is an issue, but this covers every point I discussed. If doing this is a problem for some reason, I'm happy to write anew! The content is correct):

Your question surfaces the key misunderstanding. The claim isn’t that we should fear drugs that help some people and hurt others. It’s that our measurement architecture is set up in a way that systematically misclassifies who is helped, who is harmed, and by how much, because the scales themselves flatten the underlying geometry of experience. Once you compress long-tailed intensities into a 1–10 box and then average them, you lose the structure that actually matters for real-world well-being.

In a world where symptoms behave linearly and add up nicely, a drug that helps half and hurts half is perfectly intelligible: you imagine two overlapping Gaussians, shrug, and say “worth a try.” But that isn’t the world we actually inhabit. If rumination goes from 6 to 4, the subjective win might be modest because you’re moving along a shallow part of the curve. If akathisia goes from 6 to 8, the subjective loss might be massive because you’ve crossed into a steep tail where each step carries exponential experiential weight. On the form, these are both “two-point changes.” In lived reality, they belong to different moral universes. This asymmetry in the tails means that “50% better, 50% worse” is not a neutral mixture; the average hides the fact that the extremes on one side can dominate the arithmetic.
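To make the asymmetry concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that each point on a 0-10 report scale multiplies felt intensity by 3; the true base is an open empirical question, but any base greater than 1 produces the same qualitative picture:

```python
# Toy model: felt intensity grows exponentially with the reported rating.
# BASE = 3 per scale point is an illustrative assumption, not an estimate.
BASE = 3.0

def felt_intensity(rating: float) -> float:
    """Map a 0-10 symptom rating to assumed felt intensity on a log scale."""
    return BASE ** rating

# Two "two-point changes" that a symptom sheet treats as equal and opposite:
rumination_gain = felt_intensity(6) - felt_intensity(4)  # shallow region
akathisia_loss = felt_intensity(8) - felt_intensity(6)   # steep tail

print(f"Rumination 6 -> 4: felt improvement of {rumination_gain:.0f} units")
print(f"Akathisia  6 -> 8: felt worsening of   {akathisia_loss:.0f} units")
print(f"Tail-side change dominates by {akathisia_loss / rumination_gain:.0f}x")
```

On the symptom sheet the two changes cancel out; in felt units, the tail-side harm outweighs the benefit by roughly 9x under these assumptions, and averaging the two ratings throws that asymmetry away.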

I don't think it is abstract or merely theoretical, or too complex to do anything about. It has immediate practical consequences. Trials and regulators work with compressed reports, so the deepest harms appear as mild perturbations in the dataset. Drugs whose side-effect profiles involve steep-tail states like akathisia or mixed autonomic rebound look safer than they really are for a meaningful minority of users. Clinicians then inherit an evidence base where the worst experiential states have been squashed into “mild adverse events,” and that shapes expectations, heuristics, and prescribing norms. The problem is not clinician negligence — it’s that the underlying data they rely on has already thrown away the signal.

If we took the geometry seriously, we’d end up with a very different picture. High-variance drugs can be extraordinarily useful when we know how to identify responders and anti-responders. What we’re missing is the mapping. With better instruments, you’d get early detection of bad trajectories, N-of-1 response curves, and a more honest sense of which symptom profiles are compatible with which medications. The same drug could be life-changing for one subgroup and acutely harmful for another, and we could actually see that, instead of blending the two together into a 0.3σ effect size. This is less “anti-medication” and more “finally doing the epistemology correctly.”
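Here is a toy simulation of how that blending happens. The subgroup parameters (+1.2 sigma for responders, -0.4 sigma for anti-responders) are made-up numbers chosen only to show how two sharply distinct populations can pool into a bland average; they are not fits to any trial:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical subgroups on a standardized symptom-change scale.
responders = rng.normal(loc=1.2, scale=1.0, size=n // 2)        # helped
anti_responders = rng.normal(loc=-0.4, scale=1.0, size=n // 2)  # harmed

pooled = np.concatenate([responders, anti_responders])
print(f"Pooled effect size: {pooled.mean() / pooled.std():.2f} sigma")
# Prints ~0.31 sigma: a modest-looking average that erases two distinct
# subgroups, one of which is being actively harmed.
```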

Good clinical practice already tries to rely on patient narratives, but even that is downstream of the larger culture of interpretation we've built on top of flattened scales. When the scientific literature underweights the steepest affective states, everyone downstream learns to underweight them too. The patient who says "this made my inner restlessness unbearable" is implicitly competing with a literature that reports only "mild activation" for the same phenomenon. Countless victims of this dynamic could be named and documented.

The upshot is simple: this is not pessimism toward psychiatric meds. The core point is about epistemic clarity. The experiential landscape is long-tailed, clustered, and nonlinear; our measurement system is linear, additive, and tidy. When you force one onto the other, you get averages that obscure the very variation we need to guide good decisions. A better measurement pipeline wouldn't make us more cautious or more reckless; it would make us more accurate. And accuracy is the only way to use high-variance interventions wisely — whether you're trying to help one patient or setting policy for millions.

If the world ran fully on the arithmetic of symptom sheets, none of this would matter. But the world runs on compounding long-tail distributions of suffering and relief, and that geometry is strange, heavy-tailed, and morally lopsided. Our tools need to catch up.

Indeed, we're talking about 3mg vaporized for the typical sufferer. I've interviewed people who indeed described it as "from 10/10 pain to 1/10 or 0/10 within 10 seconds", and they're not exaggerating. What percentage responds this way? Likely 70% or so. The rest need higher doses.

I think DMT is also only psychologically risky above the Magic Eye level, for which you need to take quite a bit more than the cluster headache range. But even then, the worst DMT bad trip is still orders of magnitude better than a cluster headache. Everyone I've talked to who uses DMT for clusters says they wish they had tried it earlier. From experience, even high-dose DMT is physically not as unpleasant as breaking a bone. Like, burning sensations and feelings of intense pressure. Unpleasant, but not that bad, really.

Note: MAOIs might actually interfere with DMT's effect on clusters. It works best when there is no other medication or drug in your system.

Very good points. QRI has a lot to say about all of these, so I won't repeat myself too much. I'll address a couple of points, but note there is a lot more you can dig into via the provided links:

I think that pain is a particular manifestation of negative valence. It can trigger positive valence indirectly via, e.g., energizing the system and then triggering neural annealing: nice waves of euphoria that are secondary and after the fact, and which are the reinforcing bits. Pain sans this secondary element is just unpleasant and bad, albeit perhaps not as bad as what you get when you mix emotional and low-level sensory negative valence together.

Importantly: mixed valence certainly complicates the picture, but above a certain intensity of sensation, pain overwhelms whatever else is going on.

Shinzen Young's concept of equanimity as not resisting sensations is a critical modifier on the valence of experience. That said, I'd say this modifies the valence of the whole experience, and doesn't necessarily take care of the low-level sensory negative valence. Still, the bulk of a person's valence under normal circumstances might be the result of how much they resist their current experience (even if it is pleasant otherwise). Extreme levels of equanimity, I'm convinced, can drastically lower the negative valence effects of pain.

This, however, has a limit. Even highly attained meditators (the "Buddha" included) would describe extreme pain as still a cause of suffering. Daniel Ingram, for example, can tolerate broken bones and all kinds of very intense painful sensations without them turning into suffering, so to speak. But when he has a kidney stone, it overwhelms the system, and even his meditation attainments aren't enough to counterbalance it. He still suffers intensely when kidney stones happen.

Lastly, the most intense forms of pain tend to have a strong emotional component by default. Cluster headaches, for example, typically come with a powerful sense of doom along with the pain. It's just part of the package, it seems.

https://qualiacomputing.com/2021/04/04/buddhist-annealing-wireheading-done-right-with-the-seven-factors-of-awakening/

https://forum.effectivealtruism.org/posts/bvtAXefTDQgHxc9BR/just-look-at-the-thing-how-the-science-of-consciousness

https://qri.org/blog/symmetry-theory-of-valence-2020

https://qualiacomputing.com/2019/09/30/harmonic-society-3-4-art-as-state-space-exploration-and-energy-parameter-modulation/

Thank you for this fascinating post. I'll share here what I posted on Twitter too:

I have many reasons why I don't think we should care about non-conscious agency, and here are some of them:

1) That which lacks frame invariance cannot be truly real. Algorithms are not real. They look real from the point of view of (frame invariant) experiences that *interpret* them. Thus, there is no real sense in which an algorithm can have goals - they only look like it from our (integrated) point of view. It's useful for us, pragmatically, to model them that way. But that's different from them actually existing in any intrinsic substantial way.

2) The phenomenal texture of valence is deeply intertwined with conscious agency when such agency matters. The very sense of urgency that drives our efforts to reduce our suffering has a *shape* with intrinsic causal effects. This shape and its causal effects only ever cash out as such in other bound experiences. So the very _meaning_ of agency, at least in so far as moral intuitions are concerned, is inherently tied to its sentient implementation.

3) Values are not actually about states of the world, because states of the world aside from moments of experience don't really exist. Or at least we have no reason to believe they exist. As you increase the internal coherence of your understanding of conscious agency, it becomes clear, little by little, that the underlying *referents* of our desires were phenomenal states all along, albeit with levels of indirection and shortcuts.

4) Even if we were to believe that non-sentient agency (imo an oxymoron) is valuable, we would also have good reasons to believe it is in fact disvaluable. Intense wanting is unpleasant, and thus sufficiently self-reflective organisms try to figure out how to realize their values with as little desire as possible.

5) Open Individualism, Valence Realism, and Math can provide a far more coherent system of ethics than any other combo I'm aware of, and they certainly rule out non-conscious agency as part of what matters.

6) Blindsight is poorly understood. There's an interesting model of how it works where our body creates a kind of archipelago of moments of experience, in which there is a central hub and then many peripheral bound experiences competing to enter that hub. When we think that a non-conscious system in us "wants something", it might very well be because it indeed has valence that motivates it in a certain way. Some exotic states of consciousness hint at this architecture - desires that seem to "come from nowhere" are in fact already the result of complex networks of conscious subagents merging and blending and ultimately binding to the central hub.

------- And then there are the pragmatic and political reasons: the moment we open the floodgates of insentient agency mattering intrinsically, we risk truly becoming powerless very fast. Even if we cared about insentient agency, why should we care about insentient agency in potential? Their scaling capabilities, cunning, and capacity for deception might quickly flip the power balance in completely irreversible ways, not unlike creating sentient monsters with radically different values than humans.

Ultimately I think value is an empirical question, and we already know enough to be able to locate it in conscious valence. Team Consciousness must wise up to avoid threats from insentient agents and coordinate around these risks catalyzed by profound conceptual confusion.

Thank you, Gavin (algekalipso here).

I think that the most important EA-relevant link for #1 would be this: Logarithmic Scales of Pleasure and Pain: Rating, Ranking, and Comparing Peak Experiences Suggest the Existence of Long Tails for Bliss and Suffering 

For a summary, see: Review of Log Scales.

In particular, I do think aspiring EAs should take this much more seriously:

An important pragmatic takeaway from this article is that if one is trying to select an effective career path, as a heuristic it would be good to take into account how one’s efforts would cash out in the prevention of extreme suffering (see: Hell-Index), rather than just QALYs and wellness indices that ignore the long-tail. Of particular note as promising Effective Altruist careers, we would highlight working directly to develop remedies for specific, extremely painful experiences. Finding scalable treatments for migraines, kidney stones, childbirth, cluster headaches, CRPS, and fibromyalgia may be extremely high-impact (cf. Treating Cluster Headaches and Migraines Using N,N-DMT and Other Tryptamines, Using Ibogaine to Create Friendlier Opioids, and Frequency Specific Microcurrent for Kidney-Stone Pain). More research efforts into identifying and quantifying intense suffering currently unaddressed would also be extremely helpful. Finally, if the positive valence scale also has a long-tail, focusing one’s career in developing bliss technologies may pay-off in surprisingly good ways (whereby you may stumble on methods to generate high-valence healing experiences which are orders of magnitude better than you thought were possible).

Best,

Andrés :)

This post significantly adds to the conversation in Effective Altruism about how pain is distributed. As explained in the review of Log Scales, understanding that intense pain follows a long-tail distribution significantly changes the effectiveness landscape for possible altruistic interventions. In particular, this analysis shows that finding the top 5% of people who suffer the most from a given medical condition and treating them as the priority allows us to target a very large fraction of the total pain the condition generates. In the case of cluster headaches, the distribution is extremely skewed: 5% of sufferers experience over 50% of all cluster headaches.
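As a sanity check on how that kind of concentration arises, here is a small simulation under an assumed lognormal distribution of yearly attack counts per sufferer. The parameters (notably sigma = 2.0) are illustrative, chosen to show the long-tail mechanism rather than fit the survey data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed lognormal for yearly attack counts per sufferer (illustrative).
attacks = np.sort(rng.lognormal(mean=3.0, sigma=2.0, size=100_000))

# Sum the attacks borne by the worst-off 5% of sufferers.
top_5 = attacks[int(0.95 * attacks.size):]
print(f"Share of all attacks borne by the top 5%: {top_5.sum() / attacks.sum():.0%}")
# ~64% with these parameters; even considerably milder tails exceed 50%,
# which is why prioritizing the worst-off 5% targets most of the total pain.
```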

Moreover, the survey also showed that the leading reason sufferers don't use tryptamines to treat their condition is the difficulty of acquiring them. Thus, changing the legal landscape, e.g. via programs providing easy access to tryptamines for sufferers of migraines and cluster headaches, might be a very cost-effective way of massively reducing suffering throughout the world.

Zooming out, the significance of this perhaps goes beyond cluster headaches in particular: it hints at a broader paradigmatic change in how we analyze the cost-effectiveness of interventions.

As explained in the review of Log Scales, cluster headaches are some of the most painful experiences people can have in life. If a $5 DMT vape pen produced at scale is all it takes to fully take care of the problem for sufferers, this stands to be an Effective Altruist bargain.

In the future, I would love to see more analyses of this sort; namely, analyses that look at particular highly painful conditions (the "pain points of humanity", as it were) and identify tractable, cost-effective solutions to them. Given the work in this area so far, I expect this to generate dozens of interventions that, in aggregate, might take care of perhaps even the majority of dolors experienced by people.

Most people who know about drugs tend to have an intuitive model of drug tolerance where "what goes up must come down". In this piece, the author shows that this intuitive model is wrong, for drug tolerance can be reversed pharmacologically. This seems extremely important in the context of pain relief: for people who simply have no option but to take opioids to treat their chronic pain, anti-tolerance would be a game-changer. I sincerely believe this will be a paradigm shift in the world of pain management, with a clear before-and-after cultural shift around it. But before that, a lot of foundational research needs to take place. That's the stage we are at.

We anticipate and hope that the field of anti-tolerance drugs soon materializes in an academically credible way. Given how common chronic pain is, we would all benefit from its fruits in the future.
