
Key Points

  • Theoretical & empirical work on emotional valence is Qualia Research Institute’s core research focus. At a high level, Qualia Research Institute (QRI) works to formulate universal theories of valence[1], test those theories experimentally, and build non-invasive neurotechnology that reliably produces large positive valence effects.
  • QRI has formed partnerships to work with clinical data from psychedelic studies at King's College London, Imperial College London, and the National Institute of Mental Health of the Czech Republic.
  • QRI is collaborating with researchers from Harvard Medical School and the Emergent Phenomenology Research Consortium to analyze electroencephalographic (EEG) data from jhana meditation sessions.
  • QRI fills an important niche in the consciousness research, psychedelic research[2], and Effective Altruist ecosystems.
  • On the present margin, QRI is more constrained by funding than by talent.
  • We’re confident that we could put $1.5M to good use over the next two years.
  • With further funding, QRI would pursue its research agenda and expand its team by hiring a signal processing engineer, a computational neuroscience PhD, and a software engineer.
  • QRI’s 2021 research agenda includes empirically exploring the Symmetry Theory of Valence (STV) on high-valence data sets, developing non-invasive neurotechnology that produces large positive valence effects, building out our psychophysics toolkit and using it to collect data, publishing open-source neuroimaging software, and investigating how to generalize the Connectome-Specific Harmonic Wave (CSHW) framework into a scale-free form.
  • You can donate to QRI on our website or reach out to Mackenzie Dion, Director of Operations & Development, at mackenzie@qualiaresearchinstitute.org.
  • You can keep up with our research by subscribing to our newsletter or following us on Twitter.

Why Does Qualia Research Institute Exist?

Isn’t it perplexing that we’re trying to reduce the amount of suffering and increase the amount of happiness in the world, yet we don’t have a precise definition for either suffering or happiness?

As a human collective, we want to create good futures. This encompasses helping humans have happier lives, preventing intense suffering wherever it may exist, creating safe AI, and improving animals’ lives too, both in farms and in the wild.

But what is happiness? And what is suffering?

Until we can talk about these things objectively, let alone measure and quantify them reliably, we’ll always be standing in murky water.[3]

The core question remains the same:

How do we objectively measure and quantify properties of conscious experience?

Qualia Research Institute’s Mission

Qualia Research Institute (QRI) is a non-profit research center operating outside of academia. We are a small team of independent researchers studying consciousness in a consistent, meaningful, and rigorous way.

Our goals are ambitious yet realistic:

  1. Develop a precise mathematical language for describing consciousness
  2. Understand the nature of emotional valence (pain and pleasure)
  3. Build technologies to improve people’s lives in meaningful ways, at scale

As far as we know, we are the first and only research group that uses a qualia formalist[4] approach to consciousness, cares about valence, and takes the phenomenology of altered states of consciousness seriously.[5]

How We Expect To Make Progress Toward Our Mission

Developing an accurate and precise ‘valence meter’ is one of QRI’s north stars. This goal is similar to (though distinct from) Integrated Information Theory researchers’ attempt to invent an accurate ‘consciousness meter’.

We intend to make progress towards this goal by generating theories of valence that are universal in their explanatory power, attempting to experimentally corroborate our theoretical models with state-of-the-art techniques in neuroscience, and building non-invasive neurotechnology that can reliably produce large positive effects based on our developing theory of valence.

In addition to our valence-specific research plans, we believe that mapping out exotic properties of consciousness will provide valuable clues about how consciousness fundamentally works. By analogy, ignoring exotic states of consciousness would be like pre-Enlightenment scientists trying to understand energy, matter, and the physical world solely at room temperature, while disregarding extreme boundary conditions like the sun, black holes, plasma, and superfluid helium. Ignoring these states may be the equivalent of refusing to look through Galileo's telescope.

Finally, we intend to become a hub for all types of high-valence biological data, including fMRI data from our academic partners, subjective reports of extremely blissful states, and genomic data of people who report a high degree of baseline happiness. Alongside this data collection, we aim to create an environment that fosters dialogue between researchers studying AI, neuroscience, meditation, and psychedelics.

QRI History & Past Work

Core Team Intro

The core QRI team now consists of:

Michael Edward Johnson - Co-Founder and Co-Director of Research at QRI. Mike is the author of Principia Qualia and blogs at opentheory.net.

Andrés Gómez Emilsson - Co-Founder and Co-Director of Research at QRI. Andrés has a Master’s Degree in Computational Psychology from Stanford, co-founded the Stanford Transhumanist Association, was first place winner of the Norway Math Olympiad, and blogs at qualiacomputing.com.

Andrew Zuckerman - Executive Director at QRI. Andrew studied computer science at Harvard, co-founded the Harvard Undergraduate Science of Psychedelics Club, was a board member of Harvard College Effective Altruism, and founded the Harvard Giving Pledge.

Quintin Frerichs - Director of Engineering at QRI. Quintin graduated from Washington University in St. Louis, where he studied computer science and PNP (Philosophy-Neuroscience-Psychology) and worked as a network security engineer for a Seattle-based firm.

Mackenzie Dion - Director of Operations & Development at QRI. Mackenzie is a Morehead-Cain Scholar and senior at UNC where she studies Psychology and Neuroscience. Prior to QRI, Mackenzie worked for several startups and nonprofits in alternative proteins, including the Good Food Institute (GFI) and Aleph Farms.

Sean McGowan - Research and Development Coordinator at QRI. Sean is a recent graduate from Dartmouth College where he studied Cognitive Science and Mathematical Physics.

Timeline of QRI

[Figure: Timeline of QRI, including key works published.]

For the full timeline breakdown, see Appendix A.

Board of Advisors

Our newly created Board of Advisors includes: Wojciech Zaremba (co-founder of OpenAI), Dr. Robin Carhart-Harris (head of the Centre for Psychedelic Research at Imperial College London), Scott Alexander (writer of Slate Star Codex and Astral Codex Ten), David Pearce (philosopher, author of The Hedonistic Imperative), and Dr. Shamil Chandaria (strategic advisor at DeepMind).

Strategic Advisors

Our current strategic advisors are Romeo Stevens (QRI co-founder), Milan Griffes (writer of Psychedelic Update and co-founder of Argo Health), and Trey Jennings (venture investor at Norwest Venture Partners).

How We’ve Used Money So Far

Since its inception in 2018, QRI has used approximately $125k total, or about $62.5k / year. That money has paid for Quintin’s full-time annual salary, a part-time salary for Sean, stipends for interns, prototyping equipment, software, and travel costs.

What We’ve Accomplished

QRI Is More Constrained by Funding Than by Talent

Today, QRI is not struggling to find research directions. We have a backlog of ideas, both theoretical and experimental, that we’d like to work on. We have been approached by multiple leading academics requesting to collaborate, and we don’t have the capacity to take on many of the opportunities we encounter. Within our personal networks, there are talented engineers and data scientists whom we would hire (and who have expressed mutual interest in working with us). All we need is the funding to do so.

What We Would Do With More Funding

Pay Our Existing Team

With additional funding, the first thing we’d do is begin paying core team members. These core roles are:

  • Executive Director: Andrew Zuckerman
  • Co-Director of Research: Andrés Gómez Emilsson
  • Co-Director of Research: Michael Edward Johnson
  • Director of Engineering: Quintin Frerichs
  • Operations, Development, and Research Coordination (1 FTE): Mackenzie Dion & Sean McGowan

Expand Our Team

We are confident that the following three roles would bring tremendous value to our research agenda:

  1. An engineer with a background in signal processing to continue developing our non-invasive neurotech, which is already producing large effect sizes.
  2. An experienced computational neuroscientist (PhD or postdoc level) who can assist with foundational research and writing academic papers. This person will help us explore the technical implications of our ideas in greater depth and will accelerate our publication process.
  3. A software engineer with a strong mathematical background to continue to build out new research tools that we can use for experiments and data collection.

We have already identified candidates for these roles that we would like to hire.

Invest in Compute

Computing power is another core bottleneck that is slowing down our research. For example, in the past few months, several of our fMRI data analyses have each taken over 7 days to finish processing. With more funding, we would invest in compute to speed up this processing by 10x.

Physical Location

We’ve noticed that much of our best work occurs when we’re working together in the same physical location where information can flow freely from team member to team member. We are interested in setting up a physical location for the research team to work together (as the pandemic allows).

2021 Strategy

2021 Research Agenda

Our 2021 Research Agenda will focus on the following projects:

  • Empirically exploring the Symmetry Theory of Valence (STV) on high-valence MDMA data sets with in-house algorithms based on our CDNS framework
  • Developing non-invasive neurotechnology with a focus on reliably inducing positive valence and emotional processing
  • Publishing a research paper that argues for STV on theoretical and empirical grounds
  • Releasing open-source neuroimaging analysis software and publishing corresponding research based on that software
  • Adding new tools to our Psychophysics Toolkit and getting institutional review board approval to begin collecting data from altered states of consciousness that involve perceptual changes
  • Figuring out how to generalize the CSHW framework into a scale-free form: building a model of scale-free resonance in the nervous system and of how breakdowns in this system lead to various sensory, emotional, motivational, and hedonic problems (a minimal illustration of the underlying harmonic decomposition appears after this list)
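
For readers unfamiliar with the CSHW framework: in Atasoy and colleagues' formulation, connectome harmonics are the low-frequency eigenmodes of the graph Laplacian of a structural connectivity matrix. The sketch below is only a minimal illustration of that core decomposition on placeholder data, not QRI's in-house pipeline; a scale-free generalization would extend this kind of analysis across spatial scales rather than fixing it to a single parcellation.

```python
# Minimal illustration of the decomposition underlying connectome-specific
# harmonic waves: harmonics are the low-frequency eigenmodes of the graph
# Laplacian of a structural connectivity matrix. Placeholder random data is
# used here; this is not QRI's in-house pipeline.
import numpy as np

def connectome_harmonics(adjacency: np.ndarray, n_modes: int = 10):
    """Return the n_modes lowest-frequency Laplacian eigenmodes of a connectome graph."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency                         # combinatorial graph Laplacian
    eigenvalues, eigenvectors = np.linalg.eigh(laplacian)  # ascending eigenvalues
    return eigenvalues[:n_modes], eigenvectors[:, :n_modes]

# Stand-in for a diffusion-MRI-derived connectivity matrix (symmetric, no self-loops).
rng = np.random.default_rng(0)
A = rng.random((100, 100))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)

freqs, modes = connectome_harmonics(A, n_modes=8)
print(freqs)  # small eigenvalues correspond to smooth, large-scale harmonic modes
```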

Significant Organizational Transformations

We will also be investing in important organizational upgrades. Quality research doesn’t exist in a vacuum, and we know that a smoothly operating ecosystem is needed to create the conditions for doing good research.

Upgrading Organizational Capacity

We are implementing tighter feedback loops for team members, adopting work-tracking and task-management software, setting quarterly OKRs, creating onboarding material for new hires, and building better systems to track our monthly and yearly expenses.

Increasing Scientific Standards

We will continue to increase contact with outside researchers at academic institutions and neuroscience research centers, finding professional researchers that can review our methods. We will also release preprints of our research on sites like PsyArXiv.

Emphasizing Clarity

Inspired by Olah & Carter’s “Research Debt”, which describes the importance of research distillation, we will experiment with producing high-quality visualizations of our research to increase its legibility. We want to make it easy for busy researchers to understand our contributions and feel comfortable citing our work.

Building a Professional QRI Publishing Pipeline

Some of our research won’t be written for academic publication, yet we want to make sure those pieces reach the right audiences. We are creating a professional publishing-first site to host our research. For these works, we will use internal feedback systems to ensure that this content also maintains high standards.

Creating a Top-Down Hiring System

We are focusing on top-down hiring to continue to build out a strong research organization. In the past, we have received many requests from volunteers looking to help QRI. We will aim to sustain this organic interest through a newly created r/QualiaResearch subreddit in addition to the Qualia Computing Networking Facebook group that has served as a hub in the past. We will also update our website to reflect the changes in our approach to volunteering. Once we have sufficient funding, we will advertise for new roles on our website, reach out to relevant candidates in our networks, and contact talented postdocs whom we would be excited to hire.

Why We Are a Nonprofit Research Group

Even though we are developing technology at QRI that could lead to for-profit spin-off companies, we believe that a for-profit structure does not provide the optimal incentives for our current work. We are interested in focusing on fundamental and foundational research without rushing to find product-market fit.

For similar reasons, QRI exists outside of the academy. Many research ideas that we explore come from new paradigms, and our current view is that academic pressures leave little room to fruitfully engage with this territory. For example, we don’t expect to publish high-quality trip reports in academic journals, yet we believe that such reports are important puzzle pieces for reverse-engineering consciousness.

Some of our research will be publishable in academic journals, and we are excited to begin doing this. In the future, we hope that our neurotechnology research leads to practical, commercializable treatments. But right now, a non-profit structure is what makes the most sense for what Qualia Research Institute is trying to accomplish.

Fundraising Goals

We are currently raising $1.5 million USD to support our existing team and make the three hires discussed above: a signal processing engineer, a computational neuroscientist, and a software engineer. Combined with our current reserves, raising this amount would give us 2 years of financial runway to pursue our research agenda and experiment with incubating for-profit initiatives.

Budget Breakdown

[Figure: Breakdown of budget.]

Much More Room to Grow

While the budget we’re presenting here would pay for a baseline Qualia Research Institute, we believe that we could put an influx of additional funding to great use over the next five years. Consciousness research is still an extremely neglected cause area, and we have ideas for developing this discipline with more engineers, neuroscientists, physicists, philosophers, and mathematicians. Some of these visions have been outlined in Emilsson’s “The Super-Shulgin Academy: A Singularity I Can Believe In” and “Peaceful Qualia: The Manhattan Project of Consciousness”.

The Importance of Understanding Valence

Even though the many practical outcomes of successful consciousness research weren’t discussed in detail in this piece, it is important to stress the incredible value that would come from a mechanistic understanding of valence.

In line with our goals to reduce suffering, improve baseline well-being, and reach new heights of bliss, a full understanding of valence would:

  1. Help us find first-principles solutions to hard-to-treat mental health & chronic pain conditions.
  2. Allow us to build better neurotechnology by precisely articulating the brain states we would like to target.
  3. Create more rigorous measures of philanthropic & economic utility and upgrade imperfect measures of well-being such as the QALY (Quality-Adjusted Life Year), which could drastically improve economic policymaking and the efficiency of our resource allocation (a worked example of the standard QALY arithmetic appears after this list).
  4. Help us more accurately measure the quality of life of animals and non-linguistic humans.
  5. Improve social coordination by helping people operate from the same basic understanding of what is real and what is valuable.
  6. In the field of AI alignment, make progress on the value-loading problem (what values we should instill into an artificial general intelligence).
  7. Ensure that future neurotechnology is safe and doesn’t induce negative experiences in the short or long term, degrade cognition or rationality, or make people more uncaring.
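
For context on item 3: a QALY is conventionally computed as time lived weighted by a health-related quality score between 0 and 1, with the weights typically derived from survey instruments rather than from any direct measurement of experience. The sketch below shows only that textbook arithmetic; it is not a QRI metric.

```python
# Standard QALY arithmetic (illustrative; not a QRI metric). Quality weights
# are conventionally survey-derived, which is part of what a valence-grounded
# measure would aim to improve on.
def qalys(periods):
    """periods: iterable of (duration_in_years, quality_weight) pairs,
    with quality_weight in [0, 1] (1 = full health, 0 = death)."""
    return sum(duration * weight for duration, weight in periods)

# Example: 2 years in full health followed by 3 years at a weight of 0.7
print(qalys([(2.0, 1.0), (3.0, 0.7)]))  # -> 4.1
```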

How to Donate to QRI

If you are interested in supporting Qualia Research Institute, you can donate to us here. For larger donations, please reach out to Mackenzie Dion, Director of Operations & Development, at mackenzie@qualiaresearchinstitute.org. We have a slide deck that we are happy to share by request.

Additional Gratitude

Although their names weren’t directly included in this announcement, QRI would not be what it is today if not for the following people. Thank you all for your support: Margareta Wassinge, Anders Amelin, Winslow Strong, Lawrence Wu, Patrick Taylor, Kenneth Shinozuka, Hunter Meyer, Elin Ahlstrand, Wendi Yan, Marcin Kowrygo, Ross Tieman, Jeremy Hadfield, Bar Lehman, Kushagra Sharma, Tanvi Antoo, Jasmine Wang, Mira Guetzow, Benjamin Martens, Alex Zhao, Robin Goins, Boian Etropolski, and Bence Vass. Thank you to all of our past donors. Thank you to all of our other supporters, both in-person and online. Thank you, Rethink Priorities, for your 2020 Impact and 2021 Strategy, which helped inspire this piece. And thank you Quintin, Mackenzie, Mike, Sean, Andrés, Milan, and Daniel Segal for feedback on drafts of this announcement.

Appendix A: Timeline of QRI

2015-2018: Andrés Gómez Emilsson independently writes about consciousness on his blog Qualia Computing

2016 - 2018: Mike Johnson independently writes about consciousness on his blog Opentheory.net

2017: Romeo Stevens joins Mike and Andrés to help advance their research

December 2018: Romeo, Mike, and Andrés incorporate QRI as a 501(c)(3) non-profit

May 2019: QRI holds its first internship with three interns (Andrew Zuckerman, Quintin Frerichs, and Kenneth Shinozuka)

July 2019: Quintin joins QRI full-time as the first employee

January 2020: QRI hosts a small fundraiser to keep Quintin on salary, Sean McGowan volunteers to help set up the fundraising event

April-May, May-July 2020: Andrew helps QRI organize two ‘work-sprint’ internships with 15 interns working on technical, content, and organizational projects

August - November 2020: Andrew, Sean, and Mackenzie Dion (a 2020 summer intern) continue to work part-time for QRI

December 2020: Andrew transitions to Executive Director; QRI builds a Board of Advisors and adds Strategic Advisors; Romeo transitions to Strategic Advisor

Appendix B: How Our Research Interfaces with Psychedelic Research

We believe that our work will help make psychedelic science more rigorous, explain why certain substances are effective or ineffective, help lower the dose of particular drugs like MDMA and ketamine (which are neurotoxic and organ toxic, respectively) while maintaining their intended effect, create tools that help clinicians and researchers analyze and monitor psychedelic experiences in real-time, and improve the drug development process.


  1. Universal meaning that the theory retains its explanatory power no matter where we are in the universe and no matter what brain architecture an organism has. ↩︎

  2. See Appendix B. ↩︎

  3. Mike Johnson, co-founder of Qualia Research Institute, wrote about this dilemma back in 2015 in “Effective Altruism, and building a better QALY” and we’re excited to see Rethink Priorities pick up the baton in their recent exploration for alternatives to QALYs and DALYs. Yet even these upgraded metrics don’t measure the ground-truth of subjective experience itself. They also don’t comment on the experiences of animals and non-communicative conscious beings. ↩︎

  4. Qualia formalism is the hypothesis that the internal structure of our subjective experience can be represented precisely by mathematics. ↩︎

  5. Other labs do focus on some of these (like Tononi’s lab taking qualia formalism seriously), but to our knowledge, none care about all three. ↩︎

Comments

Thanks for this detailed and well-written report! As a philosopher (and fan of the cyberpunk aesthetic :) ), I find your project really interesting and exciting. I hope I get to meet you one day and learn more. However, I currently don't see the case for prioritising your project:

Isn’t it perplexing that we’re trying to reduce the amount of suffering and increase the amount of happiness in the world, yet we don’t have a precise definition for either suffering or happiness?

As a human collective, we want to create good futures. This encompasses helping humans have happier lives, preventing intense suffering wherever it may exist, creating safe AI, and improving animals’ lives too, both in farms and in the wild.

But what is happiness? And what is suffering?

Until we can talk about these things objectively, let alone measure and quantify them reliably, we’ll always be standing in murky water.

It seems like you could make this argument about pretty much any major philosophical question, e.g. "We're trying to reduce the amount of suffering and increase the amount of happiness in the world, yet we don't have a precise definition of the world, or of we, or of trying, and we haven't rigorously established that this is what we should be doing anyway, and what does should  mean anyway?"

Meanwhile, here's my argument for why QRI's project shouldn't be prioritized:

--Crazy AI stuff will probably be happening in the next few decades, and if it doesn't go well, the impact of QRI's research will be (relatively) small or even negative.
--If it does go well, QRI's impact will still be small, because the sort of research QRI is doing would have been done anyway after AI stuff goes well. If other people don't do it, the current QRI researchers could do it, and probably do it even better thanks to advanced AI assistance.

 

Hi Daniel,

Thanks for the remarks! Prioritization reasoning can get complicated, but to your first concern:

Is emotional valence a particularly confused and particularly high-leverage topic, and one that might plausibly be particularly conducive to getting clarity on? I think it would be hard to argue in the negative on the first two questions. Resolving the third question might be harder, but I’d point to our outputs and increasing momentum. I.e., one can levy your skepticism on literally any cause, and I think we hold up excellently in a relative sense. We may have to jump to the object-level to say more.

To your second concern, I think a lot about AI and ‘order of operations’. Could we postulate that some future superintelligence might be better equipped to research consciousness than we mere mortals? Certainly. But might there be path-dependencies here such that the best futures happen if we gain more clarity on consciousness, emotional valence, the human nervous system, the nature of human preferences, and so on, before we reach certain critical thresholds in superintelligence development and capacity? Also — certainly.

Widening the lens a bit, qualia research is many things, and one of these things is an investment in the human-improvement ecosystem, which I think is a lot harder to invest effectively in (yet also arguably more default-safe) than the AI improvement ecosystem. Another ‘thing’ qualia research can be thought of as being is an investment in Schelling point exploration, and this is a particularly valuable thing for AI coordination.

I’m confident that, even if we grant that the majority of humanity's future trajectory will be determined by AGI trajectory (which seems plausible to me), it’s also reasonable to argue that qualia research is one of the highest-leverage areas for positively influencing AGI trajectory and/or the overall AGI safety landscape.

Is emotional valence a particularly confused and particularly high-leverage topic, and one that might plausibly be particularly conducive to getting clarity on? I think it would be hard to argue in the negative on the first two questions. Resolving the third question might be harder, but I’d point to our outputs and increasing momentum. I.e., one can levy your skepticism on literally any cause, and I think we hold up excellently in a relative sense. We may have to jump to the object-level to say more.

I don't think I follow.  Getting more clarity on emotional valence does not seem particularly high-leverage to me. What's the argument that it is?

To your second concern, I think a lot about AI and ‘order of operations’. ...  But might there be path-dependencies here such that the best futures happen if we gain more clarity on consciousness, emotional valence, the human nervous system, the nature of human preferences, and so on, before we reach certain critical thresholds in superintelligence development and capacity? Also — certainly.

Certainly? I'm much less sure. I actually used to think something like this; in particular, I thought that if we didn't program our AI to be good at philosophy, it would come to some wrong philosophical view about what consciousness is (e.g. physicalism, which I think is probably wrong) and then kill us all while thinking it was doing us a favor by uploading us (for example).

But now I think that programming our AI to be good at philosophy should be tackled directly, rather than indirectly by first solving philosophical problems ourselves and then programming the AI to know the solutions. For one thing, it's really hard to solve millennia-old philosophical problems in a decade or two. For another, there are many such problems to solve. Finally, our AI safety schemes probably won't involve feeding answers into the AI, so much as trying to get the AI to learn our reasoning methods and so forth, e.g. by imitating us.

Widening the lens a bit, qualia research is many things, and one of these things is an investment in the human-improvement ecosystem, which I think is a lot harder to invest effectively in (yet also arguably more default-safe) than the AI improvement ecosystem. Another ‘thing’ qualia research can be thought of as being is an investment in Schelling point exploration, and this is a particularly valuable thing for AI coordination.

I don't buy these claims yet. I guess I buy that qualia research might help improve humanity, but so would a lot of other things, e.g. exercise and nutrition. As for the Schelling point exploration thing, what does that mean in this context?

I’m confident that, even if we grant that the majority of humanity's future trajectory will be determined by AGI trajectory (which seems plausible to me), it’s also reasonable to argue that qualia research is one of the highest-leverage areas for positively influencing AGI trajectory and/or the overall AGI safety landscape.

I'm interested to hear those arguments!

Hi Daniel,

Thanks for the reply! I am a bit surprised at this:

Getting more clarity on emotional valence does not seem particularly high-leverage to me. What's the argument that it is?

The quippy version is that, if we’re EAs trying to maximize utility, and we don’t have a good understanding of what utility is, more clarity on such concepts seems obviously insanely high-leverage.

I’ve written about its specific relevance to FAI here: https://opentheory.net/2015/09/fai_and_valence/

Relevance to building a better QALY here: https://opentheory.net/2015/06/effective-altruism-and-building-a-better-qaly/

And I discuss object-level considerations on how a better understanding of emotional valence could lead to novel therapies for well-being here: https://opentheory.net/2018/08/a-future-for-neuroscience/ and https://opentheory.net/2019/11/neural-annealing-toward-a-neural-theory-of-everything/

On mobile, pardon the formatting.

Your points about sufficiently advanced AIs obsoleting human philosophers are well-taken, though I would touch back on my concern that we won’t have much clarity on philosophical path-dependencies in AI development without doing some of the initial work ourselves, and these questions could end up being incredibly significant for our long-term trajectory. I gave a talk about this for MCS that I’ll try to get transcribed (in the meantime, I can share my slides if you’re interested). I’d also be curious to flip your criticism and ask for your positive model for directing EA donations: is the implication that there are no good places to donate to, or that narrow-sense AI safety is the only useful place for donations? What do you think the highest-leverage questions to work on are? And how big are your ‘metaphysical uncertainty error bars’? What sorts of work would shrink these bars?

Sorry for the delayed reply! Didn't notice this until now.

Sure, I'd be happy to see your slides, thanks! Looking at your post on FAI and valence, it looks like reasons no. 3, 4, 5, and 9 are somewhat plausible to me. I also agree that there might be philosophical path-dependencies in AI development and that doing some of the initial work ourselves might help to discover them--but I feel like QRI isn't aimed at this directly and could achieve this much better if it was; if it happens it'll be a side-effect of QRI's research.

For your flipped criticism: 

--I think bolstering the EA community and AI risk communities is a good idea
--I think "blue sky" research on global priorities, ethics, metaphilosophy, etc. is also a good idea if people seem likely to make progress on it
--Obviously I think AI safety, AI governance, etc. are valuable
--There are various other things that seem valuable because they support those things, e.g. trying to forecast decline of collective epistemology and/or prevent it.

--There are various other things that don't impact AI safety but independently have a decently strong case that they are similarly important, e.g. ALLFED or pandemic preparedness.

--I'm probably missing a few things
--My metaphysical uncertainty... If you mean how uncertain am I about various philosophical questions like what is happiness, what is consciousness, etc., then the answer is "very uncertain." But I think the best thing to do is not try to think about it directly now, but rather to try to stabilize the world and get to the Long Reflection so we can think about it longer and better later.
 

Thanks for your back and forth. After finishing my Master‘s I had an offer for a PhD position in consciousness (& meditation) and decided against this because arguments close to yours, Daniel.

I agree that we probably shouldn’t aim at solving philosophy and feeding this to an AI, but I wonder if one could make a stronger case along something like: at some point advanced AI systems will come into contact with philosophical problems, and the better and the more of them we humans understand at the time the AI was designed, the better the chances of building aligned systems that can take over responsibility responsibly. Maybe one could think about (fictional?) cultures that have for some reason never explored Utilitarianism much but still are technologically highly developed. I suppose I’d think they’d have somewhat worse chances of building aligned systems, though I don’t trust my intuitions much here.

Well said. I agree that that is a path to impact for the sort of work QRI is doing; it just seems lower-priority to me than other things like working on AI alignment or AI governance. Not to mention the tractability/neglectedness concerns (philosophy is famously intractable, and there's an entire academic discipline for it already).

This type of reasoning seems to imply that everyone interested in the flourishing of beings and thinking about that from an EA perspective should focus on projects that contribute directly to AI safety. I take that to be the implication of your comment because it is your main argument against working on something else (and could equally be applied to any number of projects discussed on the EA Forum not just this one). That implies, to me at least, extremely high confidence in AI safety being the most important issue because at lower confidence we would want to encourage a wider range of bets by those who share our intellectual and ideological views.

If the implication I'm drawing from your comment matches your views and confidence, can you help me understand why you are so confident in this being the most important issue?


If I'm misunderstanding the implication of your comment, can you help me understand where I'm confused?
 

Good question.  Here are my answers:

  1. I don't think I would say the same thing to every project discussed on the EA forum. I think for every non-AI-focused project I'd say something similar (why not focus instead on AI?), but the bit about how I didn't find QRI's positive pitch compelling was specific to QRI. (I'm a philosopher, I love thinking about what things mean, but I think we've got to have a better story than "We are trying to make more good and less bad experiences, therefore we should try to objectively quantify and measure experience.") Compare: Suppose it were WW2, 1939. We are thinking of various ways to help the Allied war effort. An institute designed to study "what does war even mean anyway? What does it mean to win a war? Let's try to objectively quantify this so we can measure how much we are winning and optimize that metric" is not obviously a good idea. Like, it's definitely not harmful, but it wouldn't be top priority, especially if there are various other projects that seem super important, tractable, and neglected, such as preventing the Axis from getting atom bombs. (I think of the EA community's position with respect to AI as analogous to the position re atom bombs held by the small cohort of people in 1939 "in the know" about the possibility. It would be silly for someone who knew about atom bombs in 1939 to instead focus on objectively defining war and winning.)
  2. But yeah, I would say to every non-AI-related project something like "Will your project be useful for making AI go well? How?" And I think that insofar as one could do good work on both AI safety stuff and something else, one should probably choose AI safety stuff. This isn't because I think AI safety stuff is DEFINITELY the most important, merely that I think it probably is. (Also I think it's more neglected AND tractable than many, though not all, of the alternatives people typically consider)
  3. Some projects I think are still worth pursuing even if they don't help make AI go well. For example, bio risk, preventing nuclear war, improving collective sanity/rationality/decision-making, ... (lots of other things would be added, it all depends on tractability + neglectedness + personal fit.) After all, maybe AI won't happen for many decades or even centuries. Or maybe one of those other risks is more likely to happen soon than it appears.
  4. Anyhow, to sum it all up: I agree that we shouldn't be super confident that AI is the most important thing. Depending on how broadly you define AI, I'm probably about 80-90% confident. And I agree that this means our community should explore a portfolio of ideas rather than just one. Nevertheless, I think even our community is currently less focused on AI than it should be, and I think AI is the "gold standard" so to speak that projects should compare themselves to, and moreover I think QRI in particular has not done much to argue for their case. (Compare with, say, ALLFED which has a pretty good case IMO: There's at least a 1% chance of some sort of global agricultural shortfall prior to AI getting crazy, and by default this will mean terrible collapse and famine, but if we prepare for this possibility it could instead mean much better things (people and institutions surviving, maybe learning)).
  5. My criticism is not directly of QRI but of their argument as presented here. I expect that if I talked with them and heard more of their views, I'd hear a better, more expanded version of the argument that would be much more convincing. In fact I'd say 40% chance QRI ends up seeming better than ALLFED to me after such a conversation. For example, I myself used to think that consciousness research was really important for making AI go well. It might not be so hard to convince me to switch back to that old position.

Thanks for the detailed response kokotajlod, I appreciate it.

Let me summarize your viewpoint back to you to check I've understood correctly. It sounds as though you are saying that AI (broadly defined) is likely to be extremely important and the EA community currently underweights AI safety relative to its importance. Therefore, while you do think that not everyone will be suited to AI safety work and that the EA community should take a portfolio approach across problems, you think it's important to highlight where projects do not seem as important as working on AI safety since that will help nudge the EA community towards a better-balanced portfolio. Outside of AI safety, there are a few other things that you think are also important, mostly in the existential risk kind of category but also including improving collective sanity/rationality/decision-making and maybe others. Therefore, the critique of QRI is mostly part of the activity to keep the portfolio properly balanced; however, you do have some additional skepticism that learning about what we mean by happiness and suffering is useful.

Is that roughly right?

If that is approximately your view, I think I have a couple of disagreements/things I'm confused about.

A. Firstly, I don't think the WW2 example is quite right for this case. I think in the case of war, we understand the concept well enough to take the relevant actions and we don't predict defining the concept to change that. I don't think we understand the concepts of suffering or happiness well enough to take similar actions as in the WW2 case.

B. Secondly, I would have guessed that the EA community overweights AI safety so I'm curious as to why you think that is not the case. It could be that my intuitions are wrong about the focus it actually receives (vs the hype in the community) or it could be that I think it should receive less focus than you do. Not so much compared to its importance, more like its tractability when factoring in safety and the challenges of coordination. I worry that perhaps we overly focus on the technical side such that there's a risk that we just speed up development more than we increase safety.

C. While I don't know much about QRI's research, in particular, my concerns from point B make me more inclined to support research in areas related to social sciences that might improve our understanding of and ability to coordinate.

D. And finally, why include "improving collective sanity/rationality/decision-making" in the list of other important things but exclude QRI? Here I'm not necessarily disagreeing, I just don't quite get the underlying model that generates existential threats as the most important but then includes something like this and then excludes something like QRI.

To be clear, these are not confident viewpoints, they are simply areas where I notice my views seem to differ from many in the EA community and I expect I'd learn something useful from understanding why that is.

Thanks for the detailed engagement!

Yep, that's roughly correct as a statement of my position. Thanks. I guess I'd put it slightly differently in some respects -- I'd say something like "A good test for whether to do some EA project is how likely it is that it's within a few orders of magnitude of being as good as AI safety work. There will be several projects for which we can tell a not-too-implausible story for how they are close to as good or better than AI safety work, and then we can let tractability/neglectedness/fit considerations convince us to do them. But if we can't even tell such a story in the first place, that's a pretty bad sign." The general thought is: AI safety is the "gold standard" to compare against, since it's currently the No. 1 priority in my book. (If something else was No. 1, it would be my gold standard.)

I think QRI actually can tell such a story, I just haven't heard it yet. In the comments it seems that a story like this was sketched. I would be interested to hear it in more detail. I don't think the very abstract story of "we are trying to make good experiences but we don't know what experiences are" is plausible enough as a story for why this is close to as good as AI safety. (But I might be wrong about that too.)

re: A: Hmmm, fair enough that you disagree, but I have the opposite intuition.

re: B: Yeah I think even the EA community underweights AI safety. I have loads of respect for people doing animal welfare stuff and global poverty stuff, but it just doesn't seem nearly as important as preventing everyone from being killed or worse in the near future. It also seems much less neglected--most of the quality-adjusted AI safety work is being done by EA-adjacent people, whereas that's not true (I think?) for animal welfare or global poverty stuff. As for tractability, I'm less sure how to make the comparison--it's obviously much more tractable to make SOME improvement to animal welfare or the lives of the global poor, but if we compare helping ALL the animals / ALL the global poor to AI safety, it actually seems less tractable (while still being less important and less neglected). There's a lot more to say about this topic obviously, I worry I come across as callous or ignorant of various nuances... so let me just say I'd love to discuss this with you further and hear your thoughts.

re: D:  I'm certainly pretty uncertain about the improving collective sanity thing. One reason I'm more optimistic about it than QRI is that I see how it plugs in to AI safety: If we improve collective sanity, that massively helps with AI safety, whereas if we succeed at understanding consciousness better, how does that help with AI safety? (QRI seems to think it does, I just don't see it yet) Therefore sanity-improvement can be thought of as similarly important to AI safety (or alternatively as a kind of AI safety intervention) and the remaining question is how tractable and neglected it is. I'm unsure, but one thing that makes me optimistic about tractability is that we don't need to improve sanity of the entire world, just a few small parts of the world--most importantly, our community, but also certain AI companies and (maybe) governments. And even if all we do is improve sanity of our own community, that has a substantially positive effect on AI safety already, since so much of AI safety work comes from our community. As for neglectedness, yeah IDK. Within our community there is a lot of focus on good epistemology and stuff already, so maybe the low-hanging fruit has been picked already. But subjectively I get the  impression that there are still good things to be doing--e.g. trying to forecast how collective epistemology in the relevant communities could change in the coming years, building up new tools (such as Guesstimate or Metaculus) ...











 
