This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, I haven’t checked everything, and it's unfinished. I was explicitly encouraged to post something like this!
Commenting and feedback guidelines: I’m going with the default: please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.
I am becoming increasingly concerned that EA is neglecting experts when it comes to research. I’m not saying that EA organisations don’t produce high quality research, but I have a feeling that the research could be of an even higher quality if we were to embrace experts more.
Epistemic status: not that confident that what I’m saying is valid. Maybe experts are utilised more than I realise. Maybe the people I mention below can reasonably be considered experts. I also haven’t done an in-depth exploration of all relevant research to judge how widespread the problem might be (if it is indeed a problem).
Research examples I’m NOT concerned by
Let me start with some good examples (there are certainly more than I am listing here!).
In 2021 Open Phil commissioned a report from David Humbird on the potential for cultured meat production to scale up to the point where it would be sufficiently available and affordable to replace a substantial portion of global meat consumption. Humbird has a PhD in chemical engineering and has extensive career experience in process engineering and techno-economic analysis, including the provision of consultancy services. In short, he seems like a great choice to carry out this research.
Another example I am pleased by is Will MacAskill as author of What We Owe the Future. I cannot think of a better author of this book. Will is a respected philosopher, and a central figure in the EA movement. This book outlines the philosophical argument for longtermism, a key school of thought within EA. Boy am I happy that Will wrote this book.
Other examples I was planning to write up:
- Modeling the Human Trajectory - David Roodman
- Roodman seems qualified to deliver this research.
- Wild Animal Initiative research such as this
- I like that they collaborated with Samniqueka J. Halsey who is an assistant professor
Some examples I’m concerned by
Open Phil’s research on AI
In 2020 Ajeya Cotra, a Senior Research Analyst at Open Phil, wrote a report on timelines to transformative AI. I have no doubt that the report is high-quality and that Ajeya is very intelligent. However, this is a very technical subject and, beyond having a bachelor’s degree in Electrical Engineering and Computer Science, I don’t see why Ajeya would be the first choice to write this report. Why wouldn’t Open Phil have commissioned an expert in AI development / computational neuroscience etc. to write this report, similar to what they did with David Humbird (see above)? Ajeya’s report had Paul Christiano and Dario Amodei as advisors, which is good, but advisors generally have limited input. Wouldn’t it have been better to have an expert as first author?
All the above applies to another Open Phil AI report, this time written by Joe Carlsmith. Joe is a philosopher by training, and whilst that isn’t completely irrelevant, it once again seems to me that a better choice could have been found. Personally I’d prefer that Joe do more philosophy-related work, similar to what Will MacAskill is doing (see above).
Climate Change research
(removed mention of Founder's Pledge as per jackva's comment)
- Climate Change and Longtermism - John Halstead
- John Halstead doesn't seem to have any formal training in climate science. This may not be an issue if he has got up to speed in his own time, but I wonder whether a professor of climate science would have been a better choice to write this book.
Hey Larks. I just want to reiterate first that this was a Draft Amnesty Day draft, which is mostly why I didn't go to the level of detail of concrete examples. I didn't finish the draft because I was generally quite uncertain about the conclusions. Also, I don't doubt Ajeya did a great job; I was really just musing about whether, ex ante, Ajeya should have been chosen to write the report, rather than whether, ex post, we're happy she did. Finally, I have very little technical AI knowledge myself.
In hindsight, I'm unsure if such a critical draft was the best choice for an amnesty day draft! Bear in mind I'm far from the best person to ask this question, but my gut feeling is that it should be someone (or a group of people) with formal academic training in machine learning, computational neuroscience, etc. EA has money, so we could get the cream of the crop to do research if we really wanted to.
Maybe (these are taken from the google scholar page for AI...):
I think I'll stop there because you probably get the picture of who I'm thinking about. The above might be terrible options for all I know, but my general point is that there are people who live and breathe AI/ML and who are renowned in their field. Should we have tried to make more use of them?
EDIT: It's certainly possible I underestimated how interdisciplinary Ajeya's research is (as per Neel Nanda's comment), which I agree would reduce the usefulness of the AI experts.