
Nathan_Barnard

687 karma · Joined Nov 2019

Bio

Blog at The Good Blog https://thegoodblog.substack.com/

Comments (69)

My views here just come from deferring to gender scholars I respect.

Yes, I agree with this - but if this is part of the theory of change, then Athena should probably privilege applicants with these different backgrounds, and I don't know whether they intend to do this.

I'm sceptical that gender diversity brings substantial benefits for generating AI safety research ideas. I haven't read the literature here, but my prior on these types of interventions is that the effect size is small.

Regardless, I think Athena is good for the same reasons Lewis put forward in his comment - the evidence that women are excluded from male-dominated work environments seems strong, and it's very important that we get as many talented researchers into AI safety as possible. This also seems like an especially acute problem in the AIS community, where anecdotal reports of difficulties from unwanted romantic/sexual advances are common.

I think the claimed intellectual benefits of gender diversity haven't been subjected to sufficient scrutiny because they're convenient to believe. For this kind of claim I would need to see high-quality causal inference research, and I haven't seen any; the linked article doesn't cite such research either. The linked NatGeo article doesn't seem to me to bring relevant evidence to bear on the question. I completely buy that having more women in the life sciences leads to better medical treatment for women, but the causal mechanism at work there doesn't seem like it would apply to AI safety research.

I think I just don't have sufficiently precise models to know whether it's more valuable for people to do implementation or strategy work on the current margin. 

Compared to a year ago, I think implementation work has gone up in value, because there appears to be an open policy window and so we want shovel-ready policies that we think are, all things considered, good. I also think we have a bit more strategic clarity than we had a year or so ago, thanks to the strategy writing that Holden, Ajeya, and Davidson have done.

On the other hand, I think there's still a lot of strategic ambiguity, and for many of the most important strategy questions there's something like one report, with massive uncertainty, that's been done. For instance, both bioanchors and Davidson's takeoff speeds report assume we could get TAI just by scaling up compute - a pretty big assumption. We have no idea what the scaling laws for robotics are. There are constant references to race dynamics, but only something like one non-empirical paper from 2013 that has modelled them at the firm level (although another is coming out). And I think the two recent Thorstad papers are a pretty strong challenge to any version of longtermism not grounded in digital minds being a big deal.

I think people, especially junior people, should be biased towards work with good feedback loops, but I think this is a different axis from strategy vs implementation. Lots of Epoch's work is strategy work but also has good feedback loops. The Legal Priorities Project and GPI both do pretty high-level work, but I think both are great because they're grounded in academic disciplines. Patient philanthropy is probably the best example of really high-level, purely conceptual work that is great.

In AI in particular, some high-level work that I think would be great includes: a book on what good post-TAI futures look like, forecasting the growth of the Chinese economy under different political setups, scaling laws for robotics, modelling the elasticity of the semiconductor supply chain, proposals for transferring ownership of capital to the population more broadly, and investigating different funding models for AI safety.

I think I mostly disagree with this post. 

I think Michael Webb would be an example of someone who did pretty abstract work (are ideas, in general, getting harder to find?) at a relatively junior level (PhD student), but then, because his work was impressive and rigorous, became very senior in the British government and at DeepMind.

I think Tamay Besiroglu's MPhil thesis on ideas getting harder to find in ML should be counted as strategy research by a junior person, and it has been important in feeding into various Epoch papers and the Davidson takeoff speeds model. I expect the Epoch papers and the takeoff model to be very impactful. Tamay is now deputy director of Epoch.

My guess is that it's easier to do bad strategy research than it is to get really good at a niche but important thing, but I think it's very plausible that strategy research is the better expected-value decision, provided you can make your strategy research legibly impressive and it is genuinely good research. It seems plausible that doing independent strategy research that one isn't aiming to publish in a journal is particularly bad, since it doesn't provide good career capital, there isn't good mentorship or feedback, and there's no clear path to impact.

I would guess that economists are unusually well-suited to strategy research: it can often be published in journals, which is legibly impressive and so good career capital, and the kind of strategy research economists do is either empirical, and so has good feedback loops, or model-based but drawn from economic theory, and so much more structured than typical theorising would be. I think this latter type of research can clearly be impactful - for instance, Racing to the Precipice is a pure game theory paper but informs much of the conversation about avoiding race dynamics. Economics is also generally respected within government, and economists are often hired as economists, which is unusual amongst the social sciences.

My training is as an economist, and it's plausible to me that work in political science, law, and political philosophy would also be influential, but I have less knowledge of these areas.

I don't want to overemphasise my disagreement - I think lots of people should become experts in very specific things - but I think this post is mostly an argument against doing bad strategy research that doesn't build career capital. I expect that doing strategy research at an organization experienced at producing good and/or legibly impressive research, e.g. in academia, mostly solves this problem.

A final point: I think this post underrates the long-run influence of ideas on government and policy. The neoliberals of the 60s and 70s are a well-known example, but so are Jane Jacobs's influence on US urban planning, the influence of legal and literary theory on the modern American left, and the importance of Silent Spring for the US environmental movement. Research in information economics has been important in designing healthcare systems, e.g. the mandate to buy health insurance under the ACA. The EU's cap-and-trade scheme is another idea that came quite directly from pretty abstract research.

This is a different kind of influence from proposing or implementing a specific policy in government - which is a very important kind of influence - but I suspect that over the long run it is more important (though I hold this with weak confidence, and I don't think it's especially cruxy).

I strongly disagree with the claim that the connection to EA and doing good is unclear. The EA community's beliefs about AI have been, and continue to be, strongly influenced by Eliezer. It's very pertinent if Eliezer is systematically wrong and overconfident, because, insofar as there's some level of deferral to Eliezer on AI questions within the EA community - which I think there clearly is - it implies that most EAs should reduce their credence in Eliezer's AI views.
