Most likely infectious diseases also play a significant role in aging; I’ve seen some research suggesting that major health inflection points are often associated with an infection.
I like your post and strongly agree with the gist.
DM me if you’re interested in brainstorming alternatives to the vaccine paradigm (which seems to work much better for certain diseases than others).
Generally speaking, I agree with the aphorism “You catch more flies with honey than vinegar.”
For what it’s worth, I interpreted Gregory’s critique as an attempt to blow up the conversation and steer away from the object level, which felt odd. I’m happiest speaking of my research, and fielding specific questions about claims.
Gregory, I’ll invite you to join the object-level discussion between Abby and me.
Welcome, thanks for the good questions.
Asymmetries in stimuli seem crucial for getting patterns through the “predictive coding gauntlet.” I.e., that which can be predicted can be ignored. We demonstrably screen perfect harmony out fairly rapidly.
The crucial context for STV on the other hand isn’t symmetries/asymmetries in stimuli, but rather in brain activity. (More specifically, as we’re currently looking at things, in global eigenmodes.)
With a nod back to the predictive coding frame, it’s quite plausible that the stimuli that create the most internal symmetry/harmony are not themselves perfectly symmetrical, but rather have asymmetries crafted to avoid top-down predictive models. I’d expect this to vary quite a bit across different senses though, and depend heavily on internal state.
The brain may also have mechanisms which introduce asymmetries in global eigenmodes, in order to prevent getting ‘trapped’ by pleasure — I think of boredom as fairly sophisticated ‘anti-wireheading technology’ — but if we set aside dynamics, the assertion is that symmetry/harmony in the brain itself is intrinsically coupled with pleasure.
Edit: With respect to the Mosers, that’s a really cool example of this stuff. I can’t say I have answers here, but as a punt, I’d suspect the “orthogonal neural coding of similar but distinct memories” will revolve around some pretty complex frequency regimes, and we may not yet be able to say anything exact about how ‘consonant’ or ‘dissonant’ these patterns are to each other. My intuition is that the result about the golden mean being the optimal ratio for non-interaction will end up intersecting with the Mosers’ work. That said, I wonder whether STV would assert that some sorts of memories are ‘hedonically incompatible’ because their encodings are dissonant. Basically, as memories get encoded, the oscillatory patterns they’re encoded with could subtly form a network which determines what sorts of new memories can form, and/or which sorts of stimuli we enjoy and which we don’t. But this is pretty hand-wavy speculation…
Hi Abby, I understand. We can just make the best of it.
1a. Yep, definitely. Empirically we know this is true from e.g. Kringelbach and Berridge’s work on hedonic centers of the brain; what we’d be interested in looking into would be whether these areas are special in terms of network control theory.
1c. I may be getting ahead of myself here: the basic approach we intend for testing STV is looking at dissonance in global activity. Dissonance between brain regions likely contributes to this ‘global dissonance’ metric. I’m also interested in measuring dissonance within smaller areas of the brain, as I think it could help improve the metric down the line, but we definitely wouldn’t need to at this point.
1d. As a quick aside, STV says that ‘symmetry in the mathematical representation of phenomenology corresponds to pleasure’. We can think of that as ‘core STV’. We’ve then built neuroscience metrics around consonance, dissonance, and noise that we think can be useful for proxying symmetry in this representation; we can think of that as a looser layer of theory around STV, something that doesn’t have the ‘exact truth’ expectation of core STV. When I speak of dissonance corresponding to suffering, it’s part of this looser second layer.
To your question of why STV would be true: my background is in the philosophy of science, so I’m perhaps more ready to punt to that domain. I understand this may come across as somewhat frustrating or obfuscating from the perspective of a neuroscientist asking for a neuroscientific explanation. But this is a universal thread across the philosophy of science: why is such-and-such true? Why does gravity exist; why is the speed of light what it is? Many things we’ve figured out about reality seem like brute facts. Usually there are hints of elegance in the structures we’re uncovering, but we’re just not yet knowledgeable enough to see some universal grand plan. Physics deals with this a lot, and I think philosophy of mind is just starting to grapple with it in terms of NCCs. Here’s something Frank Wilczek (who won the 2004 Nobel Prize in Physics for helping formalize the strong nuclear force) shared about physics:
>... the idea that there is symmetry at the root of Nature has come to dominate our understanding of physical reality. We are led to a small number of special structures from purely mathematical considerations--considerations of symmetry--and put them forward to Nature, as candidate elements for her design. ... In modern physics we have taken this lesson to heart. We have learned to work from symmetry toward truth. Instead of using experiments to infer equations, and then finding (to our delight and astonishment) that the equations have a lot of symmetry, we propose equations with enormous symmetry and then check to see whether Nature uses them. It has been an amazingly successful strategy. (A Beautiful Question, 2015)
So, why would STV be the case? “Because it would be beautiful, and would reflect and extend the flavor of beauty we’ve found to be both true and useful in physics” is probably not the sort of answer you’re looking for, but it’s the answer I have at this point. I do think all of the NCC literature will have to address this question of ‘why’ at some point.
4. We’re ultimately opportunistic about what exact format of neuroimaging we use to test our hypotheses, but fMRI checks a lot of the boxes (though not all). As you say, fMRI is not a great paradigm for neurotech; we’re looking at e.g. headsets by Kernel and others, and also digging into the TUS (transcranial ultrasound) literature for more options.
5. Cool! I’ve seen some big reported effect sizes and I’m generally pretty bullish on neurofeedback in the long term; Adam Gazzaley’s Neuroscape is doing some cool stuff in this area too.
Good catch; there’s plenty that our glossary does not cover yet. This post is at 70 comments now, and I can just say I’m typing as fast as I can!
I pinged our engineer (who has taken the lead on the neuroimaging pipeline work) about details, but as the collaboration hasn’t yet been announced I’ll err on the side of caution in sharing.
To Michael — here’s my attempt to clarify the terms you highlighted:
-> existing theories talk about what emotions ‘do’ for an organism, and what neurochemicals and brain regions seem to be associated with suffering
Frank Wilczek calls symmetry ‘change without change’. A limited definition is that it’s a measure of the number of ways you can rotate a picture and still get the same result. You can rotate a square 90, 180, or 270 degrees and get something identical; you can rotate a circle by any angle and get something identical. Thus we’d say circles have more rotational symmetries than squares (which have more than rectangles, and so on).
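To make the “counting rotations” definition concrete, here’s a toy sketch (my own illustration; the function name and the point-set representation of a shape are just for this example):

```python
import math

def rotational_symmetries(points, tol=1e-6):
    """Count whole-degree rotations (1..360) that map a point set onto itself."""
    def rotate(p, deg):
        t = math.radians(deg)
        x, y = p
        return (x * math.cos(t) - y * math.sin(t),
                x * math.sin(t) + y * math.cos(t))

    def close(a, b):
        return abs(a[0] - b[0]) < tol and abs(a[1] - b[1]) < tol

    count = 0
    for deg in range(1, 361):  # include the full 360-degree turn
        rotated = [rotate(p, deg) for p in points]
        if all(any(close(r, q) for q in points) for r in rotated):
            count += 1
    return count

# The corners of a square: identical after 90, 180, 270, and 360 degrees.
square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
print(rotational_symmetries(square))  # → 4
```

A circle would pass for every angle, which is the sense in which it has “more” rotational symmetries than any polygon.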
Harmony has been in our vocabulary a long time, but it’s not a ‘crisp’ word. This is why I like to talk about symmetry rather than harmony, although they more-or-less point in the same direction.
The combination of multiple frequencies that have a high amount of interaction, but few common patterns. Nails on a chalkboard create a highly dissonant sound; playing the C and C# keys at the same time also creates a relatively dissonant sound.
I’m not sure I can give a fully satisfying definition here that doesn’t just reference CSHW; I’ll think about this one more.
A way of mathematically calculating how much consonance, dissonance, and noise there is when we add different frequencies together. This is an algorithm developed at QRI by my co-founder, Andrés.
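To give a flavor of what “calculating dissonance from combined frequencies” can look like, here’s a toy pairwise-roughness sketch in the spirit of the Plomp–Levelt/Sethares curve mentioned elsewhere in this thread; it is not the CDNS algorithm itself, just an illustration of the general idea (constants as in Sethares 1993):

```python
import math

def pair_roughness(f1, f2, a1=1.0, a2=1.0):
    """Toy Plomp-Levelt-style roughness between two pure tones."""
    b1, b2 = 3.5, 5.75
    s = 0.24 / (0.021 * min(f1, f2) + 19)  # scales the critical-band width
    d = s * abs(f2 - f1)
    return a1 * a2 * (math.exp(-b1 * d) - math.exp(-b2 * d))

def total_dissonance(freqs):
    """Sum roughness over all pairs of partials in a combined signal."""
    return sum(pair_roughness(freqs[i], freqs[j])
               for i in range(len(freqs))
               for j in range(i + 1, len(freqs)))

# C4 + C#4 (a minor second) is rougher than C4 + G4 (a perfect fifth).
c4, c_sharp4, g4 = 261.63, 277.18, 392.00
print(total_dissonance([c4, c_sharp4]), total_dissonance([c4, g4]))
```

The C/C# pair lands in the critical band where beating is strongest, which is why it scores much higher than the fifth.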
A system which isn’t designed by some intelligent person, but follows an organizing logic of its own. A beehive or anthill would be a self-organizing system; no one’s in charge, but there’s still something clever going on.
In November 2019 I released a piece treating the brain as a self-organizing system. Basically, “when the brain is in an emotionally intense state, change is easier”: much as metal becomes easier to reshape when it heats up and starts to melt.
All the software we need to do an analysis (and specifically, the CSHW analysis), from start to finish
A perfect theory of consciousness, which could be applied to anything. Basically a “consciousness meter”
Ah yes, this is a little bit dense. Basically, one big thing holding back neurotech is that we don’t have good biomarkers for well-being. If we design these biomarkers, we can design neurofeedback systems which work better (not sure how familiar you are with neurofeedback).
Hi Abby, thanks for the questions. I have direct answers to 2, 3, and 4, and indirect answers to 1 and 5.
1a. Speaking of the general case, we expect network control theory to be a useful frame for approaching questions of why certain sorts of activity in certain regions of the brain are particularly relevant for valence. (A simple story: hedonic centers of the brain act as ‘tuning knobs’ toward or away from global harmony. This would imply they don’t intrinsically create pleasure and suffering, but merely facilitate these states.) This paper from the Bassett lab is the best intro I know of to this.
1b. Speaking again of the general case, asynchronous firing isn’t exactly identical to the sort of dissonance we’d identify as giving rise to suffering: asynchronous firing could be framed as uncorrelated firing, or ‘non-interacting frequency regimes’. There’s a really cool paper asserting that the golden mean is the optimal frequency ratio for non-interaction, and some applications to EEG work, in case you’re curious. What we’re more interested in is frequency combinations that are highly interacting, yet lack a common basis set. An example would be playing the C and C# keys on a piano. This lens borrows more from music theory and acoustics (e.g. Helmholtz, Sethares) than traditional neuroscience, although it lines up with some work by e.g. Buzsáki (Rhythms of the Brain); Friston has also done some cool work here on frequencies, communication, and birdsong, although I’d have to find the reference.
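The intuition behind that golden-mean result can be shown numerically: the golden ratio is, in a precise sense, the number least well approximated by small fractions (its continued fraction is all 1s), so two oscillators at that frequency ratio share the fewest low-order resonances. A quick sketch of my own (not from the paper; the “proximity to a low-order rational” proxy is just for illustration):

```python
from fractions import Fraction

PHI = (1 + 5 ** 0.5) / 2  # golden mean, ~1.618

def best_rational_error(x, max_den=20):
    """Smallest |x - p/q| achievable with denominator q <= max_den.
    Proximity to a low-order rational is a crude proxy for how strongly
    two oscillators at frequency ratio x can lock together."""
    return min(abs(x - Fraction(p, q))
               for q in range((1), max_den + 1)
               for p in range(1, 3 * q + 1))

# A perfect fifth (3:2) is hit exactly; a minor second (2^(1/12)) sits near
# a small fraction; the golden mean stays far from every small fraction.
for name, r in [("fifth", 1.5), ("minor 2nd", 2 ** (1 / 12)), ("golden", PHI)]:
    print(f"{name}: {best_rational_error(r):.5f}")
```

The larger the error, the fewer low-order resonances are available, which is the sense in which the golden ratio is “optimal for non-interaction.”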
1c. Speaking again of the general case, naively I’d expect dissonance somewhere in the brain to induce dissonance elsewhere in the brain. I’d have to think about what reference I could point to here, as I don’t know if you’ll share this intuition, but a simple analogy would be many people walking in a line: if someone trips, more people might trip; chaos begets chaos.
1d. Speaking, finally, of the specific case, I admit I have only a general sense of the structure of the brain networks in question and I’m hesitant to put my foot in my mouth by giving you an answer I have little confidence in. I’d probably punt to the general case, and say if there’s dissonance between these two regions, depending on the network control theory involved, it could be caused by dissonance elsewhere in the brain, and/or it could spread to elsewhere in the brain: i.e. it could be both cause and effect.
2&3. The harmonic analysis we’re most interested in depends on accurately modeling the active harmonics (eigenmodes) of the brain. EEG doesn’t directly model eigenmodes; to infer eigenmodes we’d need fairly accurate source localization. It could be that there are alternative ways to test STV without modeling brain eigenmodes, ways that EEG could give us. I hope that’s the case, and that we find them, since EEG is certainly a lot easier to work with than fMRI.
I.e. we’re definitely not intrinsically tied to source localization, but currently we just don’t see a way to get clean enough abstractions upon which we could compute consonance/dissonance/noise without source localization.
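For readers unfamiliar with the eigenmode framing: the idea (in the spirit of Atasoy et al.’s connectome harmonics) is that spatial harmonics fall out of the eigendecomposition of the brain’s connectivity graph, and any activity snapshot can be re-expressed in that basis. A toy sketch on a made-up graph; the 8-region ring and the shortcut edge are invented for illustration:

```python
import numpy as np

# Toy "connectome": a ring of 8 regions plus one long-range shortcut.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0  # ring edges
A[0, 4] = A[4, 0] = 1.0                          # hypothetical shortcut

# Graph Laplacian; its eigenvectors are the spatial harmonics (eigenmodes).
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)             # eigenvalues in ascending order

# Express an activity snapshot in the eigenmode basis.
activity = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))
coeffs = eigvecs.T @ activity                    # amplitude of each mode
print(np.round(eigvals, 3))
```

Consonance/dissonance metrics would then be computed over the per-mode amplitudes (`coeffs`) rather than over raw sensor channels, which is why accurate mode recovery, and hence source localization, matters so much.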
4. Usually we can, and usually it’s much better than trying to measure it with some brain scanner! The rationale for pursuing this line of research is that existing biomarkers for mood and well-being are pretty coarse. If we can design a better biomarker, it’ll be useful for e.g. neurotech wearables. If your iPhone can directly measure how happy you are, you can chart it, correlate it with other variables, and so on. “What you can measure, you can manage.” It could also lead to novel therapies and other technologies, and that’s probably what I’m most viscerally excited about. There are also more ‘sci-fi’ applications, such as using this to infer the experience of artificial sentience.
5. This question is definitely above my pay grade; I take my special edge here to be helping build a formal theory and more accurate biomarkers for suffering, rather than public policy (e.g. Michael D. Plant’s turf). I do suspect, however, that some of the knowledge gained from better biomarkers could help inform emotional-wellness best practices, and these best practices could be used by everyone, not just people getting scanned. I also think some therapies that might arise out of having better biomarkers could heal some sorts of trauma more-or-less permanently, so the scanning would just need to be a one-time thing, not continuous. But this gets into the weeds of implementation pretty quickly.
Hi Samuel, I think it’s a good thought experiment. One prediction I’ve made is that one could make such an agent, but it would be deeply computationally suboptimal: it would be a system that maximizes disharmony/dissonance internally, but seeks out consonant patterns externally. Possible to make, but definitely an AI-complete problem.
Just as an idle question, what do you suppose the natural kinds of phenomenology are? I think this can be a generative place to think about qualia in general.
I feel we’ve been in some sense talking past each other from the start. I think I bear some of the responsibility for that, based on how my post was written (originally for my blog, and more as a summary than an explanation).
I’m sorry for your frustration. I can only say I’m not intentionally trying to frustrate you, but that we appear to have very different styles of thinking and writing and this may have caused some friction, and I have been answering object-level questions from the community as best I can.
I really appreciate you putting it like this, and endorse everything you wrote.
I think sometimes researchers can get too close to their topics and collapse many premises and steps together; they sometimes sort of ‘throw away the ladder’ that got them where they are, to paraphrase Wittgenstein. This can make it difficult to communicate to some audiences. My experience on the forum this week suggests this may have happened to me on this topic. I’m grateful for the help the community is offering on filling in the gaps.