Oliver Sourbut

Oliver - or call me Oly: I don't mind which!

Currently based in London, I'm early in my career, working as a software engineer ('minoring' as a data scientist). I'm particularly interested in sustainable collaboration and the long-term future of value. I'd love to contribute to a safer and more prosperous future with AI! I'm always interested in discussions about axiology, x-risks, and s-risks.

I enjoy meeting new perspectives and growing my understanding of the world and the people in it. I also love to read - let me know your suggestions! Recently I've enjoyed

  • Ord - The Precipice
  • Pearl - The Book of Why
  • Bostrom - Superintelligence
  • McCall Smith - The No. 1 Ladies' Detective Agency
  • Abelson & Sussman - Structure and Interpretation of Computer Programs
  • Stross - Accelerando

Cooperative gaming is a relatively recent but fruitful interest for me. Here are some of my favourites:

  • Hanabi (can't recommend enough; try it out!)
  • Pandemic (ironic at time of writing...)
  • Dungeons and Dragons (I DM a bit and it keeps me on my creative toes)
  • Overcooked (my partner and I enjoy the foody themes and frantic realtime coordination playing this)

People who've got to know me only recently are sometimes surprised to learn that I'm a pretty handy trumpeter and hornist.


Comments

Prioritization Research for Advancing Wisdom and Intelligence

Yes yes, more strength to this where it's tractable and possible backfires are well understood and mitigated/avoided!

One adjacent category which I think is helpful to consider explicitly (I think you have it implicit here) is 'well-informedness', which I'd argue is distinct from 'intelligence' or 'wisdom'. One could be quite wise and intelligent but crippled or even misdirected if the information available/salient is limited or biased. Perhaps this is countered by an understanding of one's own intellectual and cognitive biases, leading to appropriate ('wise') choices of information-gathering behaviour to act against possible bias? But perhaps there are other levers to push which act effectively on this category.

To the extent that you think long-run trajectories will be influenced by few specific decision-making entities, it could be extremely valuable to identify, and improve the epistemics and general wisdom (and benevolence) of those entities. To the extent that you think long-run trajectories will be influenced by the interactions of many cooperating and competing decision-making entities, it could be more important to improve mechanisms for coordination, especially coordination against activities which destroy value. Well-informedness may be particularly relevant in the latter case.

Ben_Snodin's Shortform

It depends on what media type you're talking about (audio, video, display, ...) - $6m for 100m impressions is a $60 CPM ('cost per mille', i.e. cost per thousand impressions), which is certainly over the odds for similar 'premium video' advertising, but only by maybe 2-5x. For other media like audio and display the CPMs can be quite a bit lower, and if you're just looking to reach 'someone, somewhere' you can get a bargain via programmatic advertising.
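For concreteness, here's the arithmetic behind that figure as a minimal sketch (the $6m spend and 100m impressions are the numbers from the thread; the code itself is just illustrative):

```python
# CPM ('cost per mille') = cost per 1,000 impressions
budget_usd = 6_000_000       # $6m total spend
impressions = 100_000_000    # 100m impressions reached

cpm = budget_usd / (impressions / 1_000)
print(f"${cpm:.0f} CPM")     # -> $60 CPM
```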

I happen to work for a major demand-side platform in real-time ad buying, and I've been wondering if there might be a way to efficiently do good this way. The pricing can be quite nuanced. I haven't done any analysis at this point.

How impactful is free and open source software development?

Hey, let me know if you'd like another reviewer. I'm a medium-experienced senior software engineer whose professional work and side-projects use various proportions of open-source and proprietary software. And I enjoy reviewing/proof-reading :)

Beyond fire alarms: freeing the groupstruck

I appreciated your detailed analysis of the fire alarm situation along with evidence and introspection notes.

I'm not sure if it opens up any action-relevant new hypothesis space, but one feature of the fire alarm situation which I think you did not analyse is that people are commonly also concerned for the welfare of their fellows, especially those close by. This makes sense: if you find yourself in a group, even of strangers (and you've reached consensus that you're not fighting each other), it will usually pay off to look out for each other! So perhaps another social feature at play when people fail to leave a smoky room when the group shows no signs of doing so is that they aren't willing to unilaterally secure their own safety unless they know the others are also going to be safe. Though on this hypothesis you might expect to see more 'speaking up' and attempts to convince others to move.

Beyond fire alarms: freeing the groupstruck

This was a great read, thank you - I especially valued the multiple series of illustrating/motivating examples, and the several sections laying out various hypotheses along with evidence/opinion on them.

I sometimes wonder how evolution ended up creating humans who are sometimes nonconformist, when it seems socially costly, but I think a story related to what you've written here makes sense: at least one kind of nonconformity can sometimes shift a group consensus from a fatal misinterpretation to an appropriate and survivable group response (and furthermore, presumably in expectation gain some prestige for the nonconforming maverick(s) who started the shift). So there's maybe some kind of evolutionarily stable meta-strategy of 'probability of being conformist or not (maybe context-dependent)'.

EA Survey 2020: How People Get Involved in EA

Thanks for these very helpful insights! I thought the mosaic charts were particularly creative and visually insightful.

I have one minor statistical nit and one related question.

In cases where 'only one significant difference was found' (at the 95% level), it could be worth noting that you have around 20 categories... so, if the true differences are small, on average about one spurious significant difference is to be expected anyway!

Also, a question about how the significance test was carried out: for calling a difference significant at the 95% level, it matters whether you (a) check whether the two individual 95% confidence intervals overlap, or (b) check whether the confidence interval of the difference contains 0 (the usual approach). Which approach was used here? I ask because to my eye there might be a few more (weakly) significant results than were mentioned in the text.
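To illustrate why the distinction between (a) and (b) matters, here's a minimal sketch with made-up numbers (the proportions and sample sizes are hypothetical, not taken from the survey):

```python
import math

def prop_se(p, n):
    """Standard error of a sample proportion."""
    return math.sqrt(p * (1 - p) / n)

# Hypothetical subgroups, e.g. the share citing a given recruitment source
p1, n1 = 0.30, 400   # subgroup A: 30% of 400 respondents
p2, n2 = 0.38, 350   # subgroup B: 38% of 350 respondents

se1, se2 = prop_se(p1, n1), prop_se(p2, n2)
gap = abs(p2 - p1)

# (a) 'do the individual 95% CIs overlap?' -> needs gap > sum of the two margins
overlap_threshold = 1.96 * (se1 + se2)
# (b) usual test on the difference -> needs gap > pooled margin of the difference
diff_threshold = 1.96 * math.sqrt(se1**2 + se2**2)

print(f"gap = {gap:.3f}; (a) needs > {overlap_threshold:.3f}; (b) needs > {diff_threshold:.3f}")
# Here the gap (0.080) is significant under (b) but not under (a):
# sqrt(se1^2 + se2^2) <= se1 + se2, so (b) is always at least as permissive.
```

That asymmetry is why approach (a) can miss some (weakly) significant results that approach (b) would catch.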

What harm could AI safety do?

To the extent that you are concerned about intrinsically-multipolar negative outcomes (that is, failure modes which are limited to multipolar scenarios), AI safety which helps only to narrowly align individual automated services with their owners could help to accelerate such dangers.

Critch recently outlined this sort of concern well.

A classic which I personally consider to be related is Meditations on Moloch.

Careers Questions Open Thread

I really appreciate these data points! Actually, it's interesting you mention the networking aspect - one of the factors that would push me towards further higher education is the (real or imagined?) networking opportunities. Though I get on very well with most people I work or study with, I'm not an instinctive 'networker', and I think improving that could be a factor with relatively high marginal return for me.

As for learning practical skills... I'd hope to get some from a higher degree but if that were all I wanted I might indeed stick to Coursera and the like! It's the research aspect I'd really like to explore my fit for.

Trying to negotiate a break with the company had crossed my mind but sounds hard. Thanks for the nudge and anecdata about that possibility. It would be a big win if possible!

I'm really glad to hear that your path has been working out without regret. I hope that continues. :)

Careers Questions Open Thread

I welcome the reinforcement that a) it is indeed a tough call and b) I'm sane and they're good options! Thank you for the encouragement, and the advice.

I remain fuzzy on what shape 'impactful direct work' could take, and I'm not sure to what degree keeping my mind 'open' in that sense is rational (the better to capture emergent opportunities) vs merely comforting (specifying a path is scary and procrastinating is emotionally safer)! I acknowledge that my tentative principal goal besides donations, if I continue on the engineering growth path, is indeed working on safety. The MIRI job link is interesting. I'd be pleased and proud to work in that sort of role (though I'm likely to remain in the UK at least for the near future).

Thank you for the suggestion to talk to Richard or others. I've gathered a few accounts from friends I know well who have gone into further degrees in other disciplines, and I expect it would be useful to complement that with evidence from others to help better predict personal fit. I wouldn't know whom to talk to about impact on a long-term engineering track.

Careers Questions Open Thread

Having been a (software) engineer myself for a few years, I can encourage you that it is rewarding and challenging, and that in the right position you can have quite a bit of autonomy to drive decision-making and execute on your own vision. Depending on the role and organisation, it can be far from merely technical; the outline you give of the college project sounds exactly like engineering to me!

That said, there are few or no places where engineers are completely unconstrained. But there are routes from engineering into more 'overseeing'-type roles, e.g. architect, tech director, technical project manager. A lot of those people do much better if they have solid engineering experience of their own first.

Some different thoughts, on which I have much less or no experience but which seem relevant:

  • management consulting. Have you heard of that? I think they solve hard problems and have some room for vision.
  • entrepreneurs obviously have an opportunity to create and oversee a vision. I gather that a lot of the time it helps to have related experience in the relevant industry/field beforehand.