I would have so much respect for CEA if they had responded like this.
I just wanted to say thank you for doing this, Jeff. I sympathize with Rockwell Schwartz’s general point, but since Cathleen’s post asks that people not use her full name or name her former colleagues, I appreciate you taking this seriously.
(For clarity, I don’t mind people using my full name. It’s my forum username and very easily found e.g. on Leverage’s website. But I currently work at Leverage Research and decided to work there knowing full well how some people in EA react when the topic of Leverage comes up. The same is not true of everyone, and I think individuals who have not chosen to be public figures should be allowed to live in peace should they wish to).
Larissa from Leverage Research here. I think there might be an interesting discussion to be had about the relationship between feedback loops, external communication (engaging with your main external audiences), and public communication (trying to communicate ideas to the wider public).
For much of the history of scientific development, sharing research, let alone distributing it widely, was expensive and rare. Early discoveries in the history of electricity, for example, were nonetheless still made, often by researchers who shared little until they had a complete theory or a new instrument to display. Often the feedback loops were simply direct engagement with the phenomena themselves. Only in more recent history has sharing research widely become cheap and easy enough for this to become the norm. Similarly, as a couple of people have mentioned in the comments, there are more recent examples of groups that have done great research while having little external engagement: Lockheed Martin and the Manhattan Project being two well-known examples.
This suggests that it is feasible to have feedback loops while doing little external communication of any kind. During Leverage 1.0, people relied more on feedback from their own experiences, from interactions with teammates’ experiences and views, and from workshops and coaching.
That said, we do believe (for reasons independent of research feedback loops) that it was a mistake not to do more external communication in the past, which is why this is something Leverage Research has focused on since 2019. More recently, we have also come to think that it is important to try to communicate to the wider public (in ways that can be broadly understood), as opposed to just your core audience or peer group. One reason for this is that if projects are only communicated about, and criticisms only accepted in, the language of the particular group that developed them, it's easy for blind spots to remain until it is too late. (I recommend Glen Weyl's "Why I'm Not A Technocrat" for a more detailed treatment of this topic.)
For anyone interested in some of our other reflections on public engagement, I recommend reading our 2019-2020 annual report or our Experiences Inquiry Report. The former is Leverage Research's first annual report since the re-organization in 2019, and one topic we discuss is our new focus on external engagement. The latter shares findings from our inquiry last year into the experiences of former collaborators during Leverage 1.0. To see our engagement efforts today, I recommend checking out our website, subscribing to our newsletter, or following us on Twitter or Medium.
For those interested in the exploratory psychology research Jeff mentions, we recommend reading our write-up from earlier this year covering our 2017–2019 Intention Research and keeping an eye on our Exploratory Psychology Research Program page. We are currently working on two pieces: one on risks from introspection (we discuss this a bit on Twitter here), and one on Belief Reporting (an introspective tool developed during Leverage 1.0). We're also thinking of sharing a few documents written pre-2019 that relate to introspection techniques. These would perhaps be less accessible for a wider audience unfamiliar with our introspective tools but may nonetheless be of interest to those who want to dive deeper into our introspective research. All of this will be added to our website when completed.
Finally, I just wanted to thank Jeff for engaging with us in a discussion of his post. Although we disagreed on some things and it ended up being a lengthy discussion, I do feel I came to understand a bit more of where the disagreement stemmed from, and the post was improved through the process. This seems valuable, so I would like to see that norm encouraged.
As context, "Leverage 1.0" is the somewhat clumsy term I introduced as a shorthand for the decentralized research collaboration between a few organizations from 2011 to 2019 that's commonly referred to as "Leverage," so as to distinguish it from Leverage Research, the organization as it has existed since 2019, which looks very different.
Thank you for the question; this is an important topic.
We believe that advances in psychology could improve many people's lives by helping with depression, increasing happiness, improving relationships, and helping people think more clearly and rationally. As a result, we're optimistic that the sign can be positive. Our past work was primarily focused on these kinds of upsides, especially self-improvement: developing skills, improving rationality, and helping people solve problems in their lives.
That said, there are potential downsides to advancing knowledge in many areas, which are important to think through in advance. I know the EA community has thought about some of the relevant areas, such as flow-through effects and how to think about them (e.g. the impact of AMF on population and the meat-eater problem), and cases where extra effort might be harmful (e.g. possible risks to AI safety from increasing hardware capacities, and whether working on AI safety might contribute to capabilities).
Leverage 1.0 thought a lot about the impact of psychology research and came to the view that sharing the research would be positive. Evaluating this is an area where it's hard to build detailed models, though, so I'd be keen to learn more about EA research on these kinds of questions.
Every time you post these each month, I end up thinking something like "these are so useful, I'm really grateful David does this". I thought this month I should actually tell you that, so thank you so much for posting these!
We are conducting psychology research based on the following assumptions:

1) psychology is an important area to understand if you want to improve the world
2) it is possible to make progress in understanding the human mind
3) the current field of psychology lags behind its potential
4) part of the reason psychology is lagging behind its potential is that it has not completed the relevant early stage science steps
5) during Leverage 1.0, we developed some useful tools that could be used by academics in the field to make progress in psychology

Assumptions 2) and 5) are based on our experience in conducting psychology research as part of Leverage 1.0. The next step will be to test these assumptions by seeing if we can train academics on a couple of the introspection tools we developed and have them use those tools to conduct academic research.

Assumptions 3) and 4) are something we have come to believe from our investigations so far into existing psychology research and early stage science. We are currently very uncertain about this, so further study on our part is warranted.

What we are trying to accomplish is to further the field of psychology, initially by providing tools that others in the field can use to develop and test new theories. The hope is that we might make contributions to the field that would help it advance. Contributing to significant progress in psychology is, of course, a very speculative bet but, given our views on the importance of understanding psychology, one that still seems worth making.
I hope that helps. Let me know if you have further questions.
Thanks for taking the time to check out the paper and for sending us your thoughts.
I really like the examples of building new instruments and figuring out how that works versus creating something that’s a refinement of an existing instrument. I think these seem very illustrative of early stage science.
My guess is that the process you were using to work out how your forked brass works feels similar to how conducting early stage science might feel. One thing that stood out to me was that someone else trying to replicate the instrument found, if I understood correctly, that they could only do so with much longer tubes. That person then theorised that perhaps the mouth and diaphragm of the person playing the instrument have an effect. This is reminiscent of the problems with Galileo’s telescope and the differences in people’s eyesight.
Another thought this example gave me is how video can play a big part in today’s early stage science, in the same way that demonstrations did in the past. It’s much easier to demonstrate to a wide audience that you really can make the sounds you claim with the instrument you’re describing if they can watch a video of it. If all people had was a description of what you had built, but they couldn’t create the same sound on a replica instrument, they might have been more sceptical. Being able to replicate the experiment will matter more in areas where the claims made are further outside people’s current expectations. “I can play these notes with this instrument” is probably less unexpected than “Jupiter has satellites we hadn’t seen before and I can see them with this new contraption”. This is outside the scope of our research; it’s just a thought prompted by the example.
I’ve asked my colleagues to provide an answer to your questions about how controversial the claim that early stage science works differently is and whether it seems likely that there would still be early stage science today. I believe Mindy will add a comment about that soon. We’ll also amend the typo, thanks for pointing that out!
Perfect, thank you. I've edited it and added a footnote.