This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, I haven’t checked everything, and it's unfinished. I was explicitly encouraged to post something like this!
Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.
I am becoming increasingly concerned that EA is neglecting experts when it comes to research. I’m not saying that EA organisations don’t produce high quality research, but I have a feeling that the research could be of an even higher quality if we were to embrace experts more.
Epistemic status: not that confident that what I’m saying is valid. Maybe experts are utilised more than I realise. Maybe the people I mention below can reasonably be considered experts. I also haven’t done an in-depth exploration of all relevant research to judge how widespread the problem might be (if it is indeed a problem).
Research examples I’m NOT concerned by
Let me start with some good examples (there are certainly more than I am listing here!).
In 2021 Open Phil commissioned a report from David Humbird on the potential for cultured meat production to scale up to the point where it would be sufficiently available and affordable to replace a substantial portion of global meat consumption. Humbird has a PhD in chemical engineering and has extensive career experience in process engineering and techno-economic analysis, including the provision of consultancy services. In short, he seems like a great choice to carry out this research.
Another example I am pleased by is Will MacAskill as author of What We Owe the Future. I cannot think of a better author of this book. Will is a respected philosopher, and a central figure in the EA movement. This book outlines the philosophical argument for longtermism, a key school of thought within EA. Boy am I happy that Will wrote this book.
Other examples I was planning to write up:
- Modeling the Human Trajectory - David Roodman
- Roodman seems qualified to deliver this research.
- Wild Animal Initiative research such as this
- I like that they collaborated with Samniqueka J. Halsey who is an assistant professor
Some examples I’m concerned by
Open Phil’s research on AI
In 2020 Ajeya Cotra, a Senior Research Analyst at Open Phil, wrote a report on timelines to transformative AI. I have no doubt that the report is high-quality and that Ajeya is very intelligent. However, this is a very technical subject and, beyond having a bachelor’s degree in Electrical Engineering and Computer Science, I don’t see why Ajeya would be the first choice to write this report. Why wouldn’t Open Phil have commissioned an expert in AI development / computational neuroscience etc. to write this report, similar to what they did with David Humbird (see above)? Ajeya’s report had Paul Christiano and Dario Amodei as advisors, which is good, but advisors generally have limited input. Wouldn’t it have been better to have an expert as first author?
All the above applies to another Open Phil AI report, this time written by Joe Carlsmith. Joe is a philosopher by training, and whilst that isn’t completely irrelevant, it once again seems to me that a better choice could have been found. Personally I’d prefer that Joe do more philosophy-related work, similar to what Will MacAskill is doing (see above).
Climate Change research
(removed mention of Founder's Pledge as per jackva's comment)
- Climate Change and Longtermism - John Halstead
- John Halstead doesn't seem to have any formal training in climate science. This may not be an issue if he has got up to speed in his own time, but I wonder whether a professor of climate science would have been a better choice to write this book.
I agree with this post in spirit, but disagree with your concrete examples. I mostly just don't think that "expert" is actually a coherent category for these kinds of projects.
Taking your examples in turn: I think that WWOTF probably did a great job re moral philosophy, but I think it majorly underrates x-risk, especially from AI. But this is neither an expert consensus nor Will's area of expertise. It also gives a bunch of takes on abolition, which is very much a history question, etc.
I think that Joseph's report was pretty great, and very much the work that should be done by a philosopher. It was mostly disentangling, clarifying and distilling arguments that previous people (sometimes highly technical) had mostly made from fuzzy intuitions. I do not think that working in AI trains these skills. I think it gives a lot of intuitions about the capabilities of current systems, and some intuitions about future systems, but experts are often pretty bad at forecasting! E.g. I'm not sure I can think of anyone who could have qualitatively predicted what GPT-3 can do.
Ditto, I think that Ajeya's work was an excellent, ambitious and interdisciplinary piece of work. I can't think of many experts whom I would expect to have done a better job (not that I don't think the report can be improved, just that I don't think the parts I would want improved are that dependent on specific expertise).
You might be right. It is indeed harder to identify experts to lead on research projects that are very interdisciplinary in nature.