Thank you for the question; this is an important topic.
We believe that advances in psychology could improve many people's lives by helping with depression, increasing happiness, improving relationships, and helping people think more clearly and rationally. As a result, we're optimistic that the sign can be positive. Our past work focused primarily on these kinds of upsides, especially self-improvement: developing skills, improving rationality, and helping people solve problems in their lives.
That said, there are potential downsides to advancing knowledge in many areas, and these are important to think through in advance. I know the EA community has thought about some of the relevant questions, such as flow-through effects and how to reason about them (e.g. the impact of AMF on population and the meat-eater problem), and cases where extra effort might be harmful (e.g. possible risks to AI safety from increasing hardware capacity, and whether work on AI safety might itself contribute to capabilities).
Leverage 1.0 thought a lot about the impact of psychology research and came to the view that sharing the research would be positive. This is an area where it's hard to build detailed models, though, so I'd be keen to learn more about EA research on these kinds of questions.
Every time you post these each month, I end up thinking something like "these are so useful, I'm really grateful David does this". I thought this month I should actually tell you that, so thank you so much for posting these!
We are conducting psychology research based on the following assumptions:

1) psychology is an important area to understand if you want to improve the world

2) it is possible to make progress in understanding the human mind

3) the current field of psychology lags behind its potential

4) part of the reason psychology is lagging behind its potential is that it has not completed the relevant early stage science steps

5) during Leverage 1.0, we developed some useful tools that could be used by academics in the field to make progress in psychology

Assumptions 2) and 5) are based on our experience in conducting psychology research as part of Leverage 1.0. The next step will be to test these assumptions by seeing whether we can train academics on a couple of the introspection tools we developed and have them use those tools to conduct academic research.

Assumptions 3) and 4) are something we have come to believe from our investigations so far into existing psychology research and early stage science. We are currently very uncertain about this, so further study on our part is warranted.

What we are trying to accomplish is to further the field of psychology, initially by providing tools that others in the field can use to develop and test new theories. The hope is that we might make contributions to the field that help it advance. Contributing to significant progress in psychology is, of course, a very speculative bet but, given our views on the importance of understanding psychology, one that still seems worth making.
I hope that helps. Let me know if you have further questions.
Thanks for taking the time to check out the paper and for sending us your thoughts.
I really like the examples of building new instruments and figuring out how that works versus creating something that's a refinement of an existing instrument. These seem very illustrative of early stage science.
My guess is that the process you were using to work out how your forked brass works feels similar to how conducting early stage science might feel. One thing that stood out to me was that someone else trying to replicate the instrument found, if I understood correctly, that they could only do so with much longer tubes. That person then theorised that perhaps the mouth and diaphragm of the person playing the instrument have an effect. This is reminiscent of the problems with Galileo's telescope and the differences in people's eyesight.
Another thought this example prompted is how video can play a big part in today's early stage science, in the same way that demonstrations did in the past. It's much easier to demonstrate to a wide audience that you really can make the sounds you claim with the instrument you're describing if they can watch a video of it. If all people had was a description of what you had built, but they couldn't create the same sound on a replica instrument, they might have been more sceptical. Being able to replicate the experiment matters more in areas where the claims made are further outside people's current expectations. "I can play these notes with this instrument" is probably less unexpected than "Jupiter has satellites we hadn't seen before and I can see them with this new contraption". This is outside the scope of our research; it's just a thought prompted by the example.
I've asked my colleagues to answer your questions about how controversial the claim is that early stage science works differently, and whether it seems likely that there would still be early stage science today. I believe Mindy will add a comment about that soon. We'll also amend the typo, thanks for pointing that out!
Perfect, thank you. I've edited it and added a footnote.
Thanks JP and Edoarad! 😄
Thanks Jeff :-) I hope it’s helpful.
Yeah, this makes sense, thanks for asking for clarification. The communication section is meant to be a mixture of i) and ii). I think in many cases it was the right decision for Leverage not to prioritise publishing a lot of their research where doing so wouldn't have been particularly useful. However, we think it was a mistake to do some public communication and then remove it, and not to figure out how to communicate about more of our work.
I’m not sure what the best post etiquette is here, should I just edit the post to put in your suggestion and note that the post was edited based on comments?
(totally unrelated to the actual post but how did you include an emoticon JP?)
(Haha, I did wonder about having so many headings, but it just felt so organised that way, you know 😉)
With regards to removing content we published online, I think we hit an obvious failure mode that I expect a lot of new researchers and writers run into: we underestimated how time-consuming, and also how stressful, posting publicly and then replying to all the questions can be. To be honest, I suspect early and unexpected negative experiences with public engagement led Leverage to become overly sceptical of its usefulness and nudged them away from prioritising communicating their ideas.
From what I understand, some of the key things we ended up removing were:
1) content on Connection Theory (CT)
2) a long-term plan document
3) a version of our website that was very focused on “world-saving.”
With the CT content, I don’t think we made sufficiently clear that we thought of CT as a Kuhnian paradigm worth investigating rather than a fully-fledged, literally true-about-the-world claim.
Speaking to Geoff, it sounds like he assumed people would naturally be thinking in terms of paradigms for this kind of research, often discussed CT under that assumption and then was surprised when people mistook claims about CT to be literal truth claims. To clarify, members of Leverage 1.0 typically didn’t think about CT as being literally true as stated, and the same is true of today’s Leverage 2.0 staff. I can understand why people got this impression from some of their earlier writing though.
This confusion meant people critiqued CT as having insufficient evidence to believe it upfront (which we agree with). While the critiques were understandable, this wasn't a reason to believe that the research path wasn't worth following, and we struggled to get people to engage with CT as a potential paradigm. I think the cause of the disagreement wasn't as clear to us at the time, which made our approach challenging to convey and discuss.
With the long-term planning documents, people misinterpreted the materials in ways that we didn't expect and hadn't intended (e.g. as a claim about what we'd already achieved, or as a sign that we were intending something sinister). It seems as though people read the plan as a series of predictions about the future and fixed steps that we were confident we would achieve alone. Instead, we were thinking of it as a way to orient on the scale of the problems we were trying to tackle. We think it's worth trying to think through your very long-term goals in order to see the assumptions baked into your current thinking and world model. We expect solving any problem on a large scale to take a great deal of coordinated effort, and plans to change a lot as you learn more.
We also found that a) these kinds of things got a lot more focus than any of our other work, which distorted people's perceptions of what we were doing, and b) people would frequently find old versions online and then react badly to them (e.g. becoming upset, confused or concerned) in ways we found difficult to manage.
In the end, I think Leverage concluded that the easiest way to solve this was just to remove everything. I think this was a mistake (especially as it only intensified everyone's curiosity), and it would have been better to post something explaining the problem at the time, but I can see why it might have seemed like simply removing the content would solve it.