This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, I haven’t checked everything, and it's unfinished. I was explicitly encouraged to post something like this!
Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated. 

I am becoming increasingly concerned that EA is neglecting experts when it comes to research. I’m not saying that EA organisations don’t produce high quality research, but I have a feeling that the research could be of an even higher quality if we were to embrace experts more.

Epistemic status: not that confident that what I’m saying is valid. Maybe experts are utilised more than I realise. Maybe the people I mention below can reasonably be considered experts. I also haven’t done an in-depth exploration of all relevant research to judge how widespread the problem might be (if it is indeed a problem).

Research examples I’m NOT concerned by

Let me start with some good examples (there are certainly more than I am listing here!).

In 2021 Open Phil commissioned a report from David Humbird on the potential for cultured meat production to scale up to the point where it would be sufficiently available and affordable to replace a substantial portion of global meat consumption. Humbird has a PhD in chemical engineering and has extensive career experience in process engineering and techno-economic analysis, including the provision of consultancy services. In short, he seems like a great choice to carry out this research.

Another example I am pleased by is Will MacAskill as author of What We Owe the Future. I cannot think of a better author of this book. Will is a respected philosopher, and a central figure in the EA movement. This book outlines the philosophical argument for longtermism, a key school of thought within EA. Boy am I happy that Will wrote this book.

Other examples I was planning to write up:

Some examples I’m concerned by

Open Phil’s research on AI

In 2020 Ajeya Cotra, a Senior Research Analyst at Open Phil, wrote a report on timelines to transformative AI. I have no doubt that the report is high-quality and that Ajeya is very intelligent. However, this is a very technical subject and, beyond having a bachelor’s degree in Electrical Engineering and Computer Science, I don’t see why Ajeya would be the first choice to write this report. Why wouldn’t Open Phil have commissioned an expert in AI development / computational neuroscience etc. to write this report, similar to what they did with David Humbird (see above)? Ajeya’s report had Paul Christiano and Dario Amodei as advisors, which is good, but advisors generally have limited input. Wouldn’t it have been better to have an expert as first author?

All the above applies to another Open Phil AI report, this time written by Joe Carlsmith. Joe is a philosopher by training, and whilst that isn’t completely irrelevant, it once again seems to me that a better choice could have been found. Personally I’d prefer that Joe do more philosophy-related work, similar to what Will MacAskill is doing (see above).

Climate Change research

(removed mention of Founders Pledge as per jackva's comment)

Comments

Could you give some concrete examples of people who would have been better than Ajeya to write the timelines report? Ex post I think she did a good job.

Hey Larks. I just want to reiterate first that this was a Draft Amnesty Day draft, which is mostly why I didn't go into the level of detail of concrete examples. I didn't finish the draft because I was generally quite uncertain about the conclusions. Also, I don't doubt that Ajeya did a great job; I was really just musing about whether, ex ante, Ajeya should have been chosen to write the report, rather than whether, ex post, we're happy that she did. Finally, I have very little technical AI knowledge myself.

In hindsight, I'm unsure if such a critical draft was the best choice for an amnesty day draft! Bear in mind I'm far from the best person to ask this question, but my gut feeling is that it would be someone (or a group of people) with formal academic training in machine learning, computational neuroscience, etc. EA has money, so we could get the cream of the crop to do research if we really wanted to.

Maybe (these are taken from the Google Scholar page for AI...):

  • Geoffrey Hinton - the most-cited researcher under AI on Google Scholar. Expertise in ML, as well as cognitive science and computer science. Received the Turing Award. Maybe he's too busy, but again, money talks.
  • Terrence Sejnowski - expertise in computational neuroscience and AI. Would this have been a good combo of expertise to do this research?

I think I'll stop there because you probably get the picture of who I'm thinking about. The above might be terrible options for all I know, but my general point is that there are people who live and breathe AI/ML and who are renowned in their field. Should we have tried to make more use of them?

EDIT: It's certainly possible I underestimated how interdisciplinary Ajeya's research is (as per Neel Nanda's comment), which I agree would reduce the usefulness of the AI experts.

Geoffrey Hinton

According to Wikipedia, "Regarding existential risk from artificial intelligence, Hinton typically declines to make predictions more than five years into the future". So it seems plausible that he is not really interested in AI timelines and forecasting. If this is the case, then I think having Ajeya Cotra write the report is preferable to Geoffrey Hinton.

As a more general point, it is not clear whether good experts in AI are good experts in AGI forecasting. The field of AI differs here from climate change, in that climate science inherently deals much more with forecasting.

Does having good intuitions about which neural network architectures make it easier to tell traffic lights apart from dogs in pictures help you assess whether the orthogonality thesis is true or relevant? Does inventing Boltzmann machines help you decide whether it is plausible that an AGI built in 2042 will exhibit mesa-optimization? Probably at least a little bit, but it's not clear how far this goes.

John's report has nothing to do with Founders Pledge, which he left in 2020.

Thanks, I added a note.

I agree with this post in spirit, but disagree with your concrete examples. I mostly just don't think that "expert" is actually a coherent category for these kinds of projects.

Respectively: I think that WWOTF probably did a great job on the moral philosophy, but I think it majorly underrates x-risk, especially AI risk. But this is neither an expert consensus nor Will's area of expertise. It also gives a bunch of takes about abolition, which is very much a history question, etc.

I think that Joseph's report was pretty great, and very much the kind of work that should be done by a philosopher. It was mostly disentangling, clarifying and distilling arguments that previous people (sometimes highly technical) had mostly made from fuzzy intuitions. I do not think that working in AI trains these skills. I think it gives a lot of intuitions about the capabilities of current systems, and some intuitions about future systems, but experts are often pretty bad at forecasting! E.g. I'm not sure I can think of anyone who could have qualitatively predicted what GPT-3 can do.

Ditto, I think that Ajeya's report was an excellent, ambitious and interdisciplinary piece of work. I can't think of many experts whom I would have expected to do a better job (not that I don't think you can improve on the report, just that I don't think the parts I would want improved are that dependent on specific expertise).

You might be right. It is indeed harder to identify experts to lead on research projects that are very interdisciplinary in nature.

[This comment isn't meant to signal any opinion about the rest of your post.]

Carlsmith's report in particular is highly interdisciplinary, drawing on technical AI, economics, and philosophy, yet it doesn't make many technical AI or economics claims. It's not really clear who would be most qualified to write this, but in general a philosopher doesn't seem like such a bad choice. In fact, I'd think the average philosopher with strong quantitative skills would be better at this than the average economist, and certainly better than the average AI researcher.

Whether a more experienced philosopher should have done it is another question, but I'd imagine that even with money Open Phil cannot summon very experienced experts to write reports for them at the drop of a hat.

The flip side here is that What We Owe the Future isn't really a philosophy book, or at the very least it reads pretty differently to me than other analytical philosophy books. 

And indeed Will consulted many experts extensively.

My argument here is that Will probably has one of the best, if not the best, understandings of longtermism and EA at a 'theoretical' level of anyone in the world. This made him incredibly well placed to essentially 'set the direction' of the research and identify what to focus on in WWOTF. He was then able to engage with individual experts to write the individual chapters. He has demonstrated an ability to write compelling, engaging books (Doing Good Better), so he should be able to tie expert research together into a readable book. Overall he seems like an incredibly good choice to write WWOTF.

I'd imagine that even with money Open Phil cannot summon very experienced experts to write reports for them at the drop of a hat.

Maybe. Maybe not. This makes me think of the Stern Review, which incidentally wasn't really written by a world-renowned expert but was led by one:

On 19 July 2005 the Chancellor of the Exchequer, Gordon Brown announced that he had asked Sir Nicholas Stern to lead a major review of the economics of climate change, to understand more comprehensively the nature of the economic challenges and how they can be met, in the UK and globally.[13] The Stern Review was prepared by a team of economists at HM Treasury; independent academics were involved as consultants only. The scientific content of the Review was reviewed by experts from the Walker Institute.[14]

Maybe this would be a good model for research for EA organisations?

Related to what others (e.g., harfe) have already commented, it seems a sad truth that many domain experts reason poorly as soon as you go slightly outside the prevalent framings of their domain. For instance, someone may have a good track record improving current-day ML systems but lack interest in forecasting anything that's several years in the future. Or they may not be thinking about questions like whether particular trends break around the time ML systems become situationally aware of being in "training" (because we're far away from this, it has never happened thus far). If domain experts had a burning desire to connect their expertise to "What's important for the course of the future of humanity?" and to get things right and get to the truth, they'd already be participating more in EA discourse. Which isn't to say that everyone with an interest in these topics would endorse the conclusions prevalent within EA – but at least they'd be familiar with those conclusions and the arguments for them. The fact that they're only domain experts, and not also existing contributors to EA discourse, is often evidence that they on some level lack interest in the questions at hand. (In practice, this often manifests as them saying stupid things when they get dragged into a discussion, but more fundamentally, the reason is that they massively underestimate the depth behind EA thinking, simply because it's outside their wheelhouse and because they lack a burning desire to think through big-picture questions.)

FWIW, I think Open Phil has often commissioned domain experts to review their reports. (They probably tried to select experts who are interested enough in EA thinking to engage with it carefully, so this creates a selection effect, which you could argue introduces a bias. But the counterargument is that it's no use commissioning experts who you expect will misrepresent your work when they review it. So you want to select experts who've previously engaged intelligently with shorter versions of the argument – and that sadly disqualifies a significant portion of narrow-domain experts.)

Thanks for writing this, this is also something that's been on my mind with some degree of uncertainty.

My confidence in a lot of these reports would increase if I could see the peer review comments and responses, or if the research (in parts or in full) were otherwise published in a peer-reviewed academic journal. I know a lot of Open Phil reports commission external peer review, and if I recall correctly the climate change report was also peer reviewed. At the same time, some of the comments in that thread implied reviewers had a lot of disagreements, and it's hard to say how much of the feedback was responded to. To be clear, you don't have to agree with every peer review comment, but seeing the responses would increase my confidence.

I'm still left with the impression that most work within EA isn't externally reviewed.

I wonder if some of the recent public prize contests, like Open Phil's cause prioritization one and GiveWell's "Change Our Mind" one, somewhat fit here. More so for Open Phil, but I never found or got a sense of how entries were assessed or ranked. What were the criteria? This makes me doubt how we assess and value expertise and research rigor.

I think Joe Carlsmith’s philosophy training added a lot to the work, in that he was able to more carefully specify the assumptions and arguments around risk from power-seeking AI.

This is a skill that philosophers are particularly well-trained for. I agree that AI experience is important too and I would love to see someone with this kind of relevant experience commissioned to write a follow-up report. But one reason why this would be much more likely to go well is that Joe Carlsmith has helped to make the terms of the discussion clearer.

Being able to communicate across fields is one of the most important skills one can possess. Some professional roles are essentially just this, e.g. product manager in software. I don’t believe the best academics in a field are necessarily the best people to write a report that is designed to be accessible. The best maths teacher I had at school was not even a mathematician.
