
Nathan_Barnard

628 karma · Joined Nov 2019

Bio

Blog at The Good Blog https://thegoodblog.substack.com/

Comments (62)

I think I just don't have sufficiently precise models to know whether it's more valuable for people to do implementation or strategy work on the current margin. 

Compared to a year ago, I think implementation work has gone up in value because there appears to be an open policy window, so we want to have shovel-ready policies that we think are, all things considered, good. I also think we've got a bit more strategic clarity than we had a year or so ago thanks to the strategy writing that Holden, Ajeya and Davidson have done.

On the other hand, I think there's still a lot of strategic ambiguity, and for many of the most important strategy questions there's something like one report, with massive uncertainty, that's been done. For instance, both Bio Anchors and Davidson's takeoff speeds report assume we could get TAI just by scaling up compute, which seems like a pretty big assumption. We have no idea what the scaling laws for robotics are, and there are constant references to race dynamics but only something like one non-empirical paper from 2013 that has modelled them at the firm level (although another is coming out). The two recent Thorstad papers are, I think, a pretty strong challenge to longtermism not grounded in digital minds being a big deal.

I think people, especially junior people, should be biased towards work with good feedback loops, but this is a different axis from strategy vs implementation. Lots of Epoch's work is strategy work but also has good feedback loops. The Legal Priorities Project and GPI both do pretty high-level work, but I think both are great because they're grounded in academic disciplines. Patient philanthropy is probably the best example of really high-level, purely conceptual work that is great.

In AI in particular, some high-level work that I think would be great: a book on what good post-TAI futures look like, forecasting the growth of the Chinese economy under different political setups, scaling laws for robotics, modelling the elasticity of the semiconductor supply chain, proposals for transferring ownership of capital to the population more broadly, and investigating different funding models for AI safety.

I think I mostly disagree with this post. 

I think Michael Webb would be an example of someone who did pretty abstract work (are ideas, in general, getting harder to find?) at a relatively junior level (PhD student), but then, because his work was impressive and rigorous, became very senior in the British government and DeepMind.

Tamay Besiroglu's MPhil thesis on ideas getting harder to find in ML should, I think, be counted as strategy research by a junior person, but it has been important in feeding into various Epoch papers and the Davidson takeoff speeds model. I expect the Epoch papers and the takeoff model to be very impactful. Tamay is now deputy director of Epoch.

My guess is that it's easier to do bad strategy research than it is to get really good at a niche but important thing, but I think it's very plausible that strategy research is the better expected-value decision, provided you can make it legibly impressive and it is good research. It seems plausible that doing independent strategy research that one isn't aiming to publish in a journal is particularly bad, since it doesn't provide good career capital, there isn't good mentorship or feedback, and there's no clear path to impact.

I would guess that economists are unusually well-suited to strategy research because it can often be published in journals, which is legibly impressive and so good career capital, and because the kind of economics strategy research one does is either empirical, and so has good feedback loops, or is model-based but drawn from economic theory, and so much more structured than typical theorising would be. This latter type of research can clearly be impactful - for instance, Racing to the Precipice is a pure game theory paper but informs much of the conversation about avoiding race dynamics. Economics is also generally respected within government, and economists are often hired as economists, which is unusual amongst the social sciences.

My training is as an economist and it's plausible to me that work in political science, law, and political philosophy would also be influential but I have less knowledge of these areas. 

I don't want to overemphasise my disagreement - I think lots of people should become experts in very specific things - but I think this post is mostly an argument against doing bad strategy research that doesn't gain career capital. I expect that doing strategy research at an organization experienced at doing good and/or legibly impressive research, e.g. in academia, mostly solves this problem.

A final point: I think this post underrates the long-run influence of ideas on government and policy. The neoliberals of the 60s and 70s are a well-known example of this, but so are Jane Jacobs's influence on US urban planning, the influence of legal and literary theory on the modern American left, and the importance of Silent Spring for the US environmental movement. Research in information economics has been important in designing healthcare systems, e.g. the mandate to buy health insurance under the ACA. The EU's cap-and-trade scheme is another idea that came quite directly from pretty abstract research.

This is a different kind of influence from proposing or implementing a specific policy in government - which is a very important kind of influence - but I suspect it is more important over the long run (though I hold this with weak confidence and don't think it's especially cruxy).

I strongly disagree with the claim that the connection to EA and doing good is unclear. The EA community's beliefs about AI have been, and continue to be, strongly influenced by Eliezer. It's very pertinent if Eliezer has been systematically wrong and overconfident because, insofar as there's some level of deferral to Eliezer on AI questions within the EA community - which I think there clearly is - it implies that most EAs should reduce their credence in Eliezer's AI views.

I agree ideally one would do gut stuff right both practically and epistemically. In my case, the tradeoff of productivity loss and loss in general reasoning ability in exchange for some epistemic gains wasn't worth it.

I think it's plausible that people in a similar situation to me - people who are good at making decisions based on analytic reasoning alone, and who have reason to think they might be vulnerable if they tried to believe things on a gut level as well as an analytic one - should consider not engaging with certain EA topics on a gut level. (I don't restrict this to AI safety - I know people who've had similar reactions thinking about nuclear risk, and I've personally made the decision not to think about s-risk or animal welfare on a gut level either.)

I do want to emphasise that there was a tradeoff here - I think I have somewhat better AI safety takes as a result of thinking about AI safety on a gut level. The benefit, though, was reasonably small and not worth the other costs from an impartial welfarist perspective.

To be clear, I'm not at all recommending changing one's beliefs here. My language of gut belief vs cognitive beliefs was probably too imprecise. I'm recommending that, for some people, particularly if one is able to act on beliefs one doesn't intuitively feel, it's better not to try to intuitively feel those beliefs. 

For some people, this may come at a cost to their ability to form true beliefs,  and this is a difficult tradeoff. For me, I think, all things considered, intuiting beliefs has made me worse at forming true beliefs. 

I did the summer fellowship last year and found it extremely useful in getting research experience,  having space to think about x-risk questions with others who were also interested in these questions, and making very valuable connections. I also found the fellowship very enjoyable. 

My experience with Atlas fellows (although there was substantial selection bias involved here) is that they're extremely  high calibre.

I also think there's quite a lot of friction in getting LTFF funding - the main one being that it takes quite a long time to come through. I think there are quite large benefits to being able to unilaterally decide to do some project and having the funding immediately available to do it.

Yeah this seems right.

I think I don't understand the point you're making with your last sentence. 
