HaydnBelfield

2875 karma · Joined Oct 2014

Bio

Haydn has been a Research Associate and Academic Project Manager at the University of Cambridge's Centre for the Study of Existential Risk since Jan 2017.

Comments (205)

I wanted to post to say I agree: EAGxLatinAmerica was wonderful - so exciting to meet such interesting people doing great work!

Also...

  • “I met Rob Wiblin at lunch and I didn't recognize him.”

Ha rekt

There's a related tension that's been recurrent in the UK Labour Party between two normative visions of its internal organisation. 

In the first, the Party is intended to win elections and then democratise power through e.g. devolution, reducing inequality and poverty, etc., and its internal organisation (probably quite leader-led) should be whatever helps that goal. The second vision is that the Party organisation should try to model the society it wants, and be more responsive to the wishes/votes of individual members. See e.g. the Campaign for Labour Party Democracy. Tensions between these views have erupted every 30 years or so (the 2010s & Corbyn, the 1980s & Benn/Foot, the 1950s & the Bevanites, the 1930s & Lansbury).

I believe there are similar tensions within many other political parties around the world (e.g. the Democrats and primaries) and within different social movements - over whether they should be run by members/votes or more by leaders who then redistribute resources/power/etc.

This is all to say, it's unsurprising that this recurrent tension in many progressive movements and parties is also present in the effective altruism community. I find that quite reassuring.

I appreciate this quick and clear statement from CEA.

We came up with our rankings separately, but when we compared them it turned out we agreed on the top 4 + honourable mention. We then worked on the texts together.

I think the crucial thing is funding levels. 

It was only by October 1941 (after substantial nudging from the British) that Roosevelt approved serious funding. As a reminder, I'm particularly interested in 'sprint' projects with substantial funding: for example, those in which peak-year funding reached 0.4% of GDP (Stine, 2009; see also Grace, 2015).

So to some extent they were in a race from 1939 to 1942, but I would suggest it wasn't particularly intense: it wasn't a sprint.

These tragic tradeoffs also worry me deeply. Existential wagering is, to me, one of the more worrying, but also one of the easier to avoid. However, the tradeoff between existential co-option and blackmail seems particularly hard to avoid for AI.

I think my point is more like "if anyone gets anywhere near advanced AI, governments will have something to say about it - they will be a central player in shaping its development and deployment." It seems very unlikely to me that governments would fail to notice or act on such a potentially transformative technology, or that a company could train and deploy an advanced AI system of the kind you're thinking about without governments regulating and directing it. On funding specifically, I would probably be >50% on governments getting involved in meaningful public-private collaboration if we get closer to substantial leaps in capabilities (though it seems unlikely to me that AI progress will get to that point by 2030).

On your regulation question, I'd note that the EU AI Act, likely to pass next year, already proposes the following requirements for companies providing (e.g. selling, licensing, or selling access to) 'general purpose AI systems' (e.g. large foundation models):

  • Risk Management System
  • Data and data governance
  • Technical documentation 
  • Record-keeping
  • Transparency and provision of information to users
  • Human oversight
  • Accuracy, robustness and cybersecurity

So they'll already have to do (post-training) safety testing before deployment. Regulating the training of these models is different and harder, but even that seems plausible to me at some point, if training runs become ever larger and potentially more consequential. Consider the analogy: we already regulate biological experiments.

Strongly agree, upvoted.

Just a minor point on the Putin quote, as it comes up so often: he was talking to a group of schoolkids, encouraging them to go into science and technology. He said similarly supportive things about a bunch of other technologies. I'm at >90% that he wasn't referring to AGI. He's not even that committed to AI leadership: he's taken few actions indicating serious interest in 'leading in AI'. Indeed, his invasion of Ukraine has cut off most of his chip supplies and led to a huge exodus of AI/CS talent. It was just an off-the-cuff rhetorical remark.

This is a really useful and interesting post that I'm glad you've written! I agree with a lot of it, but I'll mention one bit I'm less sure about.

I think we can have more nuance about governments "being in the race" or their "policy having strong effects". I agree that pre-2030, a large, centralised, government-run development programme like the Apollo Project is less likely (I assume this is the central thing you have in mind). However, there are other ways governments could be involved, including funding, regulating and 'directing' development and deployment.

I think cyber weapons and cyber defence are a useful comparison. Much of the development - and even the deployment - is led by the private sector: defence contractors in the US, criminals in some other states. Nevertheless, much of it is funded, regulated and directed by states. People didn't think this would happen in the late 1990s and 2000s - they thought it would be private-sector-led. But with cyber, we're now in a situation where the major states (e.g. those in the P5, with big economies, militaries and nuclear weapons) have the preponderance of cyber power - they have directed, and are responsible for, all the largest cyber attacks (Stuxnet, the 2016 espionage, NotPetya, WannaCry, etc.). It's a public-private partnership, but states are in the driving seat.

Something similar might happen with AI this side of 2030, without the situation resembling the Apollo Project.

For much more on this, Jade Leung's thesis is great: 'Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies'.
