Oliver Sourbut

PhD student (AI) @ University of Oxford
238 karma · Joined Sep 2020 · Pursuing a doctoral degree (e.g. PhD) · Working (6-15 years) · Oxford, UK
www.lesswrong.com/users/oliver-sourbut

Bio

Call me Oliver or Oly - I don't mind which.

I'm particularly interested in sustainable collaboration and the long-term future of value. I'd love to contribute to a safer and more prosperous future with AI! Always interested in discussions about axiology, x-risks, s-risks.

I'm currently (2022) embarking on a PhD in AI in Oxford, and also spend time in (or in easy reach of) London. Until recently I was working as a senior data scientist and software engineer, and doing occasional AI alignment research with SERI.

I enjoy meeting new perspectives and growing my understanding of the world and the people in it. I also love to read - let me know your suggestions! In no particular order, here are some I've enjoyed recently:

  • Ord - The Precipice
  • Pearl - The Book of Why
  • Bostrom - Superintelligence
  • McCall Smith - The No. 1 Ladies' Detective Agency (and series)
  • Melville - Moby-Dick
  • Abelson & Sussman - Structure and Interpretation of Computer Programs
  • Stross - Accelerando
  • Simsion - The Rosie Project (and trilogy)

Cooperative gaming is a relatively recent but fruitful interest for me. Here are some of my favourites:

  • Hanabi (can't recommend enough; try it out!)
  • Pandemic (ironic at time of writing...)
  • Dungeons and Dragons (I DM a bit and it keeps me on my creative toes)
  • Overcooked (my partner and I enjoy the foody themes and frantic realtime coordination playing this)

People who've got to know me only recently are sometimes surprised to learn that I'm a pretty handy trumpeter and hornist.

Comments

I think that the best work on AI alignment happens at the AGI labs

Based on your other discussion, e.g. about public pressure on labs, it seems like this might be a (minor?) load-bearing belief?

I appreciate that you qualify this further in a footnote

This is a controversial view, but I’d guess it’s a majority opinion amongst AI alignment researchers.

I just wanted to call out that I weakly hold the opposite position, and also the opposite best guess on majority opinion (based on safety researchers I know). Naturally there are sampling effects!

This is a marginal sentiment, and I certainly wouldn't trade all lab researchers for non-lab researchers or vice versa. Diversification of research settings seems quite precious, and the dialogue is important to preserve.

I also question

Reasons include: access to the best alignment talent,

because a lot of us are very reluctant to join AGI labs, for obvious reasons! I know folks inside and outside of AGI labs, and it seems to me that the most talented are among the outsiders (but this could also definitely be an artefact of sample sizes).

This is an exemplary and welcome response: concise, full-throated, actioned. Respect, thank you Aidan.

Sincerely, I hope my feedback was all-things-considered good from your perspective. As I noted in this post, I felt my initial email was slightly unkind at one point, but I am overall glad I shared it - hopefully you appreciate my getting exercised about this, even if only over a few paragraphs!

It’s important to discuss national AI policies which are often explicitly motivated by goals of competition without legitimizing or justifying zero-sum competitive mindsets which can undermine efforts to cooperate.

Yes, and I repeat that the CAIS newsletter strikes a good balance of nuance, correctness, helpfulness, and reach. Hopefully your example here sets the tone for conversations in this space!

(Prefaced with the understanding that your comment is to some extent devil's advocating and this response may be too)

both the US and Chinese governments have the potential to step in when corporations in their country get too powerful

What is 'step in'? I think when people are describing things in aggregated national terms without nuance, they're implicitly imagining govts either already directing, or soon/inevitably appropriating and directing (perhaps to aggressive national interest plays). But govts could just as readily regulate and provide guidance on underprovisioned dimensions (like safety and existential risk mitigation). Or they could in fact be powerless, or remain basically passive until too late, or... (all live possibilities to me).

In these alternative cases, the kind of language and thinking I'm highlighting in the post seems like a sort of nonsense to me - like it doesn't really parse unless you tacitly assume some foregone conclusions.

Thanks Ben!

Please don't take these as endorsements that this thinking is correct, just that it's what I see when I inspect my instincts about this

Appreciated.

These psychological (and real) factors seem very plausible to me for explaining why mistakes in thinking and communication are made.

maybe we can think of the US companies as simultaneously closer friends and closer enemies with each other?

Mhm, this seems less lossy as a hypothetical model. Even if they were only 'closer friends', though, I don't think it's at all clear-cut enough to make it appropriate to glom them together (and with the govt!) when thinking about strategy. And the more so when tempered by 'closer enemies'. As in, I expect anyone doing that to be systematically (harmfully) wrong in their thinking and writing.


I understand what you're gesturing at regarding anticipation that US actors might associate more with other US than with Chinese actors. I don't know what to think here but it seems far from set in stone.

Some personal anecdata. I worked in a growing internet company for some years. One of the big talking points was doing business in China, which involved making deals with Chinese entities. I wasn't directly involved but I want to say it was... somewhat hard but not prohibitive? We ended up with offices in Shanghai, some employees there, and some folks who travelled back and forth sometimes.[1] I tentatively think we did more business with China-based entities than with US-based market-competitors. I confidently know we did more business with non-US-based entities than with US-based market-competitors.

Meanwhile, and less anecdotally, the stories about smuggling and rules-lawyering sales under the US govt's limit are literally examples of US- and China-based actors colluding! It's beyond sloppy to summarise that by drawing boundaries around 'US' and 'China'.

I could of course find examples which reinforce the 'intra-bloc harmony' hypothesis. Point is that it seems far from settled, so resting on implicit assumptions here will predictably lead to errors.


  1. As a tongue-in-cheek aside, shockingly, Chinese colleagues I've had in industry and academia are not weird aliens with dangerous values (at least not more than usual). Anyone who reasons on bases like these has basically failed (in a very human and understandable way) to reason at all, as far as I'm concerned. Most of the weird aliens with dangerous values I've met have been Americans and Brits! (There is obviously an egregious sampling bias.) Reasoning on the basis that others will reason like this is entirely valid, unfortunately.

Just in case we're out of sync, let's briefly refocus on some object-level details

China has made several efforts to preserve their chip access, including smuggling, buying chips that are just under the legal limit of performance, and investing in their domestic chip industry.

Are you aware of the following?

  • the smuggling was done by... smugglers
  • the buying of chips under the limit was done by multiple suppliers in China
  • the selling of chips under the limit was done by Nvidia (and perhaps others)
  • the investment in China's chip industry was done by the CCP

If not, please digest those nuances (and perhaps I need to make them clearer in my OP!) and consider why I object to the phrasing.

You said,

If ground truth reality is you're in a race to the nuke, dressing up reality in language that denies this is counterproductive.

This is true only if you have sufficient justification to believe confidently in that particular 'ground truth reality', and if the cost of speaking with nuance outweighs the expected cost of inflaming tensions in worlds where you're wrong.

To be clear, I have wide uncertainty on 'ground truth' here. From that POV, '[People and organisations in] China [have] made several efforts...' is the 'clear and honest' version, while coarse and lossy speech like 'China has made several efforts...' is not. I further expect the cost of nuanced speech is low, while the cost of foregone-conclusion speech (if wrong) is high, which I admit is what gets me exercised about this particular lack of nuance and not so much about others (though also others).

What about you? (I note we're discussing possible geopolitical futures, right? I don't think humans can be justifiably very confident about questions like this. I object to the use of 'ground truth' here on that basis[1].)

I'm still interested in whether you think those questions I previously gestured at are cruxes, and whether my attempted ITT was about right. I don't think there is a 'MIRI's take' in this context.


Did you see my section in the OP about excludability of harms as follows?

Separately, a lack of reliable alignment techniques and performance guarantees makes AI-powered belligerent national interest plays look more like bioweapons than like nukes - i.e. minimally-excludable - and perhaps mutually-knowably so! This presently damps the incentive to go after them.

I wrote 'perhaps mutually-knowably so' anticipating this kind of 'ooh AI big stick' thing, though I remain uncertain. Do you think harm-excludability seems difficult for AGI? Do you think enough people currently/might agree that it's not like a nuke and more like a bioweapon?

Do you think humanity is sort of doing middling OK on bio? (i.e. not foregone-conclusion biowarfare/disasters?) What about climate? Nukes? Clearly we're doing quite badly, but I don't think the course of the future is set in stone[1] for any of these.


Overall it appears that you're very (I would say over) confident in this picture - so much so that you take issue with my asking for nuance (the kind that takes claims from 'false unless contorted with caveats' to 'basically true'). Perhaps on the basis that what we perceive now (lots of actors of various sizes competing and cooperating on various axes, including access to compute) is actually a shadow of what's unavoidably to come (all-out superpower strife in a race to AGI), and that in the latter world the finer distinctions don't matter?


  1. I don't care if you are a physical determinist; we're finite, tiny computers in a messy world. There might be some 'ground truth' about what the future holds, but from our POV it's stochastic.

Interesting. I'd love to know if you think the crux schema I outlined is indeed important? I mean this:

How quickly/totally/coherently could US gov/CCP capture AI talent/artefacts/compute within its jurisdiction and redirect them toward excludable destructive ends? Under what circumstances would they want/be able to do that?

Correct me at any point if I misinterpret: I read that, on the basis of answers to something a bit like these, you think an international competition/race is all but inevitable? Presumably that registers as terrifically dangerous for you, such that mitigating it would be a high priority if tractable? But you deny the tractability of mitigating it, so you consider concerns like mine regarding clear use of language to be... wasteful? Distracting? Counterproductive?


Your alternative history with fission is helpful and thought-provoking - and plausible. I don't think it's the inevitable way things would play out, though. For example, if concerns about atmospheric ignition, nuclear winter, and other risks had been raised in a climate of less international distrust, it's at least plausible to me that coordination to avoid those risks could have been achieved. (Of course, with the benefit of hindsight we know that atmospheric ignition was not a risk, but we still don't know about nuclear winter.)

Are we in a climate of less international distrust than they were? I think so, at least a little. Careless talk can inflame escalation, so this variable really matters not only as an input to our actions but also as an output.

You know nations used to be way smaller, right? Why do you think they are so large now?

I have a passable grasp of world history and prehistory (though I will probably always lament my lack of knowledge). Do you remember the international trading companies in the age of sail? The age of European empires? They're gone now. Possible counterpoints to part of the worldview you're espousing.

Thanks for this thoughtful response!

this tendency leads to analysis that assumes more coordination among governments, companies, and individuals in other countries than is warranted. When people talk about "the US" taking some action... more likely to be aware of the nuance this ignores... less likely to consider such nuances when people talk about "China" doing something

This seems exactly right and is what I'm frustrated by. Though it goes further than you give credit (or un-credit) for: I frequently come across writing or talk about "US success in AI", "US leading in AI", "China catching up to the US", etc., which are all almost nonsense as far as I'm concerned. What do those statements even mean? In good faith I hope for someone to describe what these sorts of claims mean in a way which clicks for me, but I have come to expect that there probably isn't one.

Do people actually think that Google+OpenAI+Anthropic (for the sake of argument) are the US? Do they think the US government/military can/will appropriate those staff/artefacts/resources at some point? Are they referring to the integration of contemporary ML/DS into the economy? The military? Or impacts on other indicators[1]? And what do people mean by "China" here: the CCP, Alibaba, Tencent, ...? If people mean these things, they should say those things, or otherwise say what they do mean. Otherwise I think people motte-and-bailey themselves (and others) into some really strange understandings. There's no linear scoreboard on which "US" and "China" score points, but people behave and talk as if they actually think in those terms.

your claim that governments don't influence AI development [via semiconductor progress] is too strong

Thanks, this would indeed be too strong :) but it's not what I mean. (Also thank you for the example bullets below that, for me and for other readers.)

I don't mean to imply they have no influence on AI development and deployment[2]. What I meant by 'not currently meaningful players in AI development and deployment' was that, to date, governments have had little to no say in the course or nature of AI development. Rather, they have been mostly passive or unwitting passengers, with recent interventions comprising coarse economy-level lever-pulls, like your examples of regulation on chip production and sales. Can you think of a better compression of this than what I wrote? 'Currently mainly passive except for coarse interventions at the economy level'?

early demand for semiconductors was driven by the US government's military and space program

The key difference between e.g. the space race or nuclear/ICBM programmes and AI is that in those cases, governments were appropriately thought of as somewhat-coherently instigating, steering, and directing, and could be described as key players in a real competition with each other. With AI, none of those things is (currently) true. So ideally we would use different language to describe the different situations (especially because the misleading use of language is inflammatory).

I get exercised about this overall issue because on one model, this sort of failure of imagination and the confusion it gives rise to is exactly what leads to escalation and conflict, which I sense you agree on. We do not want sloppy foregone-conclusion thinking leading to WWIII with AI and nukes.


  1. What indicators? Education, unemployment, privacy, health, productivity, democracy, inequality, ...?

  2. Ironically for a piece on bringing clarity through nuance, I evidently wasn't clear enough about where I was drawing the boundaries in my initial post...

Great read, and interesting analysis. I like encountering models for complex systems (like community dynamics)!

One factor I don't think was discussed (maybe the gesture at the model's possible inadequacy encompasses this) is the duration of scandal effects. E.g. imagine some group claiming to be the Spanish Inquisition, the Mongol Horde, or the Illuminati trying to get stuff done. I think (assuming they were taken seriously) they'd encounter lingering reputational damage more than one year after the original scandals! Not sure how this models out; I'm not planning to dive into it, but this stands out to me as the 'next marginal fidelity gain' for a model like this.

OpenAI as a whole, and individuals affiliated with or speaking for the org, appear to be largely behaving as if they are caught in an overdetermined race toward AGI.

What proportion of people at OpenAI believe this, and to what extent? What kind of observations, or actions or statements by others (and who?) would change their minds?

Load more