MikeJohnson's Comments

Reducing long-term risks from malevolent actors

A core 'hole' here is the lack of metrics for malevolence (and related traits) that are visible to present-day or near-future neuroimaging.

Briefly -- Qualia Research Institute's work around connectome-specific harmonic waves (CSHW) suggests a couple of angles:

(1) proxying malevolence via the degree to which the consonance/harmony in your brain is correlated with the dissonance in nearby brains;
(2) proxying empathy (lack of psychopathy) by the degree to which your CSHWs show integration/coupling with the CSHWs around you.

Both of these analyses could be done today, given sufficient resource investment. We have all the algorithms and in-house expertise.
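
To make the shape of these analyses concrete, here's a minimal Python sketch. Everything below -- the consonance scoring, the data shapes, the aggregation -- is an illustrative placeholder chosen for exposition, not our actual CSHW pipeline:

```python
# Hypothetical sketch, not QRI's actual pipeline. Assumes each brain's
# fMRI has already been decomposed into connectome-harmonic amplitude
# time series of shape (T time points, K harmonics).
import numpy as np

def consonance(amps: np.ndarray) -> np.ndarray:
    """Toy per-timepoint consonance score: share of signal energy in
    the lowest (smoothest) quarter of the harmonics. A placeholder for
    whatever consonance metric the real analysis would use."""
    low = amps[:, : amps.shape[1] // 4]
    return (low ** 2).sum(axis=1) / (amps ** 2).sum(axis=1)

def malevolence_proxy(subject: np.ndarray, others: list) -> float:
    """Angle (1): how strongly the subject's consonance tracks the
    mean dissonance (1 - consonance) of nearby brains."""
    others_dissonance = np.mean([1.0 - consonance(o) for o in others], axis=0)
    return float(np.corrcoef(consonance(subject), others_dissonance)[0, 1])

def coupling_proxy(a: np.ndarray, b: np.ndarray) -> float:
    """Angle (2): mean per-harmonic correlation between two subjects'
    amplitude time series, as a rough integration/coupling measure."""
    corrs = [np.corrcoef(a[:, k], b[:, k])[0, 1] for k in range(a.shape[1])]
    return float(np.mean(corrs))

# Synthetic demo: three brains, 200 time points, 32 harmonics each.
rng = np.random.default_rng(0)
s, n1, n2 = (np.abs(rng.standard_normal((200, 32))) for _ in range(3))
print(malevolence_proxy(s, [n1, n2]))
print(coupling_proxy(s, n1))
```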

Background about the paradigm: https://opentheory.net/2018/08/a-future-for-neuroscience/

Intro to Consciousness + QRI Reading List

Very important topic! I touch on McCabe's work in Against Functionalism (EA forum discussion); I hope this thread gets more airtime in EA, since it seems like a crucial consideration for long-term planning.

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

Hey Pablo! I think Andres has a few predictions up on Metaculus; I just posted QRI's latest piece of neuroscience here, which contains a bunch of predictions (though I haven't separated them out from the text):

https://opentheory.net/2019/11/neural-annealing-toward-a-neural-theory-of-everything/

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

We’ve looked for someone from the community to do a solid ‘adversarial review’ of our work, but we haven’t found anyone who feels qualified to do so and whom we trust to do a good job, aside from Scott, and he's not available at this time. If anyone comes to mind, do let me know!

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

I think this is a great description. "What happens if we seek out symmetry gradients in brain networks, but STV isn't true?" is something we've considered, and determining ground truth is definitely tricky. I refer to this scenario as the "Symmetry Theory of Homeostatic Regulation" - https://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/ (mostly worth looking at the title image; no need to read the post).

I'm (hopefully) about a week away from releasing an update to some of the things we discussed in Boston -- basically a unification of Friston/Carhart-Harris's work on FEP/REBUS with Atasoy's work on CSHW. I'll be glad to get your thoughts when it's posted.

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

I think we actually mostly agree: QRI doesn't 'need' you to believe that qualia are real, that symmetry in some formalism of qualia corresponds to pleasure, or that there is any formalism of qualia to be found at all. If we find some cool predictions, you can strip out any mention of qualia from them and use them within the functionalism frame. As you say, the existence of some cool predictions won't force you to update your metaphysics (your understanding of which things are ontologically 'first-class objects').

But you won't be able to copy our generator that way -- the thing that created those novel predictions -- and I think that's significant, and it gets into questions of elegance metrics and philosophy of science.

I actually think the electromagnetism analogy is a good one: skepticism is always defensible, and in 1600, 1700, 1800, 1862, and 2018, people could be skeptical of whether there was 'deep unifying structure' behind these things we call static, lightning, magnetism, shocks, and so on. But it was much more reasonable to be skeptical in 1600 than in 1862 (the year Maxwell's Equations were published), and more reasonable in 1862 than in 2018 (the era of the iPhone).

Whether there is 'deep structure' in qualia is of course an open question in 2019. I might suggest STV is analogous to a very early draft of Maxwell's Equations: not a full systematization of qualia, but something that can be tested and built on in order to get there -- something that potentially ties together many disparate observations into a unified frame and offers novel, falsifiable predictions (which seem incredibly worth trying to falsify!).

I'd definitely push back on the frame of dualism, although this might be a terminology nitpick: my preferred frame here is monism (https://opentheory.net/2019/06/taking-monism-seriously/), and perhaps this somewhat addresses your objection that 'QRI posits the existence of too many things'.

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

Thanks Matthew! I agree issues of epistemology and metaphysics get very sticky very quickly when speaking of consciousness.

My basic approach is 'never argue metaphysics when you can argue physics' -- the core strategy we have for 'proving' we can mathematically model qualia is to make better and more elegant predictions using our frameworks, with predicting pain/pleasure from fMRI data as the pilot project.
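
As a rough sketch of what the mechanics of that pilot could look like (the features and the off-the-shelf regressor here are stand-in assumptions for exposition, not our finalized methodology):

```python
# Illustrative sketch of the pilot's shape: summarize each scan's
# connectome-harmonic amplitudes with simple symmetry-flavored
# features, then regress self-reported valence on them. The feature
# choices and the ridge regressor are stand-ins, not our methodology.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def harmonic_features(amps: np.ndarray) -> np.ndarray:
    """Per-scan features from amplitudes of shape (n_scans, K):
    total energy, low-harmonic energy share, and spectral entropy."""
    energy = (amps ** 2).sum(axis=1)
    low_share = (amps[:, : amps.shape[1] // 4] ** 2).sum(axis=1) / energy
    p = (amps ** 2) / energy[:, None]
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return np.column_stack([energy, low_share, entropy])

# Synthetic stand-in for real data: 120 scans, 32 harmonics, plus a
# toy 'reported valence' that (by construction) tracks low_share.
rng = np.random.default_rng(1)
amps = np.abs(rng.standard_normal((120, 32)))
X = harmonic_features(amps)
valence = 2.0 * X[:, 1] + rng.normal(0.0, 0.1, 120)

print(cross_val_score(Ridge(), X, valence, cv=5).mean())  # R^2 across folds
```

With real data, the amplitudes would come from an Atasoy-style connectome-harmonic decomposition of the fMRI scans, and the regression target would be self-reported pain/pleasure.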

One way to frame this is that at various points in time, it was completely reasonable to be skeptical that things like lightning, static, and magnetic lodestones could be modeled mathematically. This was true to an extent even after Faraday and Maxwell formalized things. But over time, with more and more unusual predictions and fantastic inventions built on electromagnetic theory, that skepticism became less and less reasonable.

My metaphysical arguments are in my 'Against Functionalism' piece, and to date I don't believe any commenters have addressed my core claims:

https://forum.effectivealtruism.org/posts/FfJ4rMTJAB3tnY5De/why-i-think-the-foundational-research-institute-should#6Lrwqcdx86DJ9sXmw

But I think metaphysical arguments change distressingly few people's minds. Experiments, and especially technology, change people's minds. So that's what our limited optimization energy is pointed at right now.

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

QRI is tackling a very difficult problem, as is MIRI. It took many, many years for MIRI to gather external markers of legitimacy. My inside view is that QRI is on the path to gaining those markers; for people paying attention to what we're doing, I think there's enough of a trajectory right now to judge us positively. I think these markers will be obvious from the 'outside view' within a few years.

But even without these markers, I'd poke at your position from a couple angles:

I. Object-level criticism is best

First, I don't see evidence you've engaged with our work beyond very simple pattern-matching. You note: "I also think that I'm somewhat qualified to assess QRI's work (as someone who's spent ~100 paid hours thinking about philosophy of mind in the last few years), and when I look at it, I think it looks pretty crankish and wrong." But *what* looks wrong? Doing something new will inevitably pattern-match to crankish, regardless of whether it is crankish, so in terms of your rationale-as-stated, I don't put too much stock in your pattern detection (and perhaps you shouldn't either).

If we want to avoid accidentally falling into (1) 'negative-sum status attack' interactions and/or (2) hypercriticism of anything fundamentally new -- neither of which is good for QRI, for MIRI, or for community epistemology -- object-level criticisms (and having calibrated distaste for low-information criticisms) seem pretty necessary.

Also, we do a lot more things than just philosophy, and we try to keep our assumptions about the Symmetry Theory of Valence separate from our neuroscience - STV can be wrong and our neuroscience can still be correct/useful. That said, empirically the neuroscience often does 'lead back to' STV.

Some things I'd offer for critique:

https://opentheory.net/2018/08/a-future-for-neuroscience/

https://opentheory.net/2018/12/the-neuroscience-of-meditation/

https://www.qualiaresearchinstitute.org/research-lineages

(You can also watch our introductory video for context, and perhaps as a 'marker of legitimacy', although it makes very few claims: https://www.youtube.com/watch?v=HetKzjOJoy8)

I'd also suggest that the current state of philosophy, and especially philosophy of mind and ethics, is very dismal. I give my causal reasons for this here: https://opentheory.net/2017/10/rescuing-philosophy/ -- I'm not sure whether or not you're anchored to the view that existing theories in philosophy of mind are reasonable.


II. What's the alternative?

If there's one piece I would suggest engaging with, it's my post arguing against functionalism. I think your comments presuppose that functionalism is reasonable and/or the only possible approach, and that the efforts QRI is putting into building an alternative are certainly wasted. I strongly disagree with this; as I noted in my Facebook reply:

>Philosophically speaking, people put forth analytic functionalism as a theory of consciousness (and implicitly a theory of valence?), but I don't think it works *qua* a theory of consciousness (or ethics or value or valence), as I lay out here: https://forum.effectivealtruism.org/.../why-i-think-the... -- This is more-or-less an answer to some of Brian Tomasik's (very courageous) work, and to sum up my understanding, I don't think anyone has made or seems likely to make 'near mode' progress, e.g. especially of the sort that would be helpful for AI safety, under the assumption of analytic functionalism.

https://forum.effectivealtruism.org/posts/FfJ4rMTJAB3tnY5De/why-i-think-the-foundational-research-institute-should#6Lrwqcdx86DJ9sXmw

----------

I always find in-person interactions more amicable and higher-bandwidth -- I'll be back in the Bay in early December, so if you want to give this piece a careful read and sit down to discuss it, I'd be glad to join you. I think it could have significant implications for some of MIRI's work.

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

Buck -- for an internal counterpoint, you may want to discuss QRI's research with Vaniver. We had a good chat about what we're doing at the Boston SSC meetup, and Romeo attended a MIRI retreat earlier in the summer and had some good conversations with him there as well.

To put a finer point on this: I find the "crank philosophy" frame questionable if you're relying only on a thin-slice outside view and not following what we're doing. One could probably use similar heuristics to pattern-match MIRI as "crank philosophy" as well (and unfortunately, many people probably have done exactly that).
