This is a special post for quick takes by John Bridge.

The Cannonball Problem:

Doing longtermist AI policy work feels a little like aiming heavy artillery with a blindfold on. We can’t see our target, we’ve no idea how hard to push the barrel in any one direction, we don't know how long the fuse is, we can’t stop the cannonball once it’s in motion, and we could do some serious damage if we get things wrong.

Longtermist legal work seems particularly susceptible to the Cannonball Problem, for a few reasons:

  • Changes to hard law are difficult to reverse - legislatures rarely consider an issue more than a couple of times every ten years, and the judiciary takes even longer.
  • At the same time, legal measures which once looked good can quickly become ineffectual due to shifts in underlying political, social or economic circumstances. 
  • Taken together, this means that bad laws have a long time to do a lot of harm, so we need to be careful when putting new rules on the books.
  • This is worsened by the fact that we don’t know what ideal longtermist governance looks like. In a world of transformative AI, it’s hard to tell if the rule of law will mean very much at all. If sovereign states aren’t powerful enough to act as leviathans, it’s hard to see why influential actors wouldn’t just revert to power politics.

Underlying all of this are huge, unanswered questions in political philosophy about where we want to end up. A lack of knowledge about our final destination makes it harder to come up with ways to get there.

I think this goes some way to explaining why longtermist lawyers only have a few concrete policy asks right now despite admirable efforts from LPP, GovAI and others.

I agree. It seems like a highly impactful thing with a high level of uncertainty. The normal way of reducing uncertainty is to run small trials - the business-world version of this concept, as I understand it, is 'Fire Bullets, Then Cannonballs'. But (as someone with zero technical competence in AI) I suspect that small trials might simply not be feasible.

Focusing more on data governance:

GovAI now has a full-time researcher working on compute governance. Chinchilla's Wild Implications suggests that access to data might also be a crucial leverage point for AI development. However, from what I can tell, there are no EAs working full time on how data protection regulations might help slow or direct AI progress. This seems like a pretty big gap in the field.

What's going on here? I can see two possible answers:

  • Folks have suggested that compute is relatively easy to govern (eg). Someone might have looked into this and decided data is just too hard to control, and we're better off putting our time into compute.
  • Someone might already be working on this and I just haven't heard of it.

If anyone has an answer to this I'd love to know!

NB: One reason this might be tractable is that lots of non-EA folks are working on data protection already, and we could leverage their expertise.

No Plans for Misaligned AI:

This talk by Jade Leung got me thinking - I've never seen a plan for what we do if AGI turns out to be misaligned.

The default assumption seems to be something like "well, there's no point planning for that, because we'll all be powerless and screwed". This seems mistaken to me. It's not clear that we'll be so powerless that we have absolutely no ability to encourage a trajectory change, particularly in a slow takeoff scenario. Given that most people weigh alleviating suffering more heavily than promoting pleasure, this is especially valuable work in expectation, as it might help us change outcomes from a 'very, very bad' world to a 'slightly negative' one. This also seems pretty tractable - I'd expect ~10 hours of thinking about this could produce a very barebones playbook.
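To make the expected-value intuition concrete, here's a minimal sketch with entirely made-up numbers - the utility scale, the 1% probability of a trajectory shift, and the conditioning are all illustrative assumptions, not claims from the talk or the post. The point is just that even a small chance of moving from a 'very, very bad' world to a 'slightly negative' one carries a large expected payoff relative to a few hours of planning work.

```python
# Illustrative only: toy expected-value comparison for "playbook" work
# in a misaligned-AGI scenario. All numbers are made-up assumptions.

# Utilities on an arbitrary scale where -5 = "slightly negative world"
# and -100 = "very, very bad world".
u_very_bad = -100
u_slightly_negative = -5

# Assumed probability that a barebones playbook shifts the outcome
# from the very bad world to the slightly negative one, conditional
# on AGI being misaligned and humanity retaining some influence.
p_shift = 0.01

# Expected gain in value from doing the playbook work.
expected_gain = p_shift * (u_slightly_negative - u_very_bad)
print(f"Expected gain (arbitrary units): {expected_gain}")  # 0.95
```

Under these toy assumptions the expected gain is large relative to the cost of ~10 hours of thinking, which is the shape of the argument in the paragraph above; the real numbers are of course deeply uncertain.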

Why isn't this being done? I think there are a few reasons:

  • Like suffering focused ethics, it's depressing.
  • It seems particularly speculative - most of the 'humanity becomes disempowered by AGI' scenarios look pretty sci-fi. So serious academics don't want to consider it.
  • People assume, mistakenly IMO, that we're just totally screwed if AI is misaligned.

Looking for an accountability buddy:

I’m working on some EA-relevant research right now, but I’m finding it hard to stay motivated, so I’m looking for an accountability buddy.

My thought is that we could set aside ~4 hours a week where we commit to get on a call and work on our respective projects, though I'm happy to be flexible on the amount of time.

If you’re interested, please reach out in the comments or DM me.
