
John Bridge

171 karma · Joined Oct 2021 · London, UK

Bio

Pronouns: he/him

Leave me anonymous feedback: https://docs.google.com/forms/d/e/1FAIpQLScB5R4UAnW_k6LiYnFWHHBncs4w1zsfpjgeRGGvNbm-266X4w/viewform

Contact me at: johnmichaelbridge[at]gmail[dot]com

Epistemic status: Uncertain and speculative. I try not to caveat my claims too much because it makes everything harder to read. If I've worded something too strongly, feel free to ask for clarification.

Sequences (1)

Towards a Worldwide, Watertight Windfall Clause

Comments (36)

One of the reasons I no longer donate to EA Funds so often is that I think their funds lack a clearly stated theory of change.

For example, with the Global Health and Development Fund, I’m confused why EA Funds hasn’t updated at all in favour of growth-promoting systemic change like liberal market reforms. There seems to be strong evidence that economic growth is a key driver of welfare, but the fund hasn’t explained publicly why it prefers one-shot health interventions like bednets. It may well have good reasons for this, but there is no public write-up explaining the fund’s position.

The LTFF has a similar problem, insofar as it largely funds researchers doing obscure AI Safety work. Nowhere does the fund openly state: “we believe one of the most effective ways to promote long term human flourishing is to support high quality academic research in the field of AI Safety, both for the purposes of sustainable field-building and in order to increase our knowledge of how to make sure increasingly advanced AI systems are safe and beneficial to humanity.” Instead, donors are basically left to infer this theory of change from the grants themselves.

I don’t think we can expect to drastically increase the take-up of funds without this sort of transparency. I’m sure the fund managers have thought about this privately, and that they have justifications for not making their thoughts public, but asking people to pour thousands of pounds/dollars a year into a black box is a very, very big ask.

For some reason the Forum isn't letting me update the post directly, so I want to highlight another core assumption which I didn't make explicit in the original post. Once the Forum starts working again I'll slot this into the post itself.

Core Assumption 2 - the Rule of Law holds together:

Later in the sequence, I'm planning to consider how a deterioration in the Rule of Law following the development of WGAI might impact the viability of the Clause. This could vary considerably by jurisdiction. For example, English constitutional law allows the legislature to break its own rules[1] if it wants to, giving Britain a unique ability amongst potential Developer host states to render the Clause inert simply by legislating that the Agreement was unlawful, or that the Developer's assets are property of the Crown. 

For the moment, however, I am assuming that the Rule of Law will be largely untouched by the development of WGAI. I am doing this because it is important to explore how things might play out in a best-case scenario, where all of the relevant actors decide to play by the book. The conclusions in my post can then inform a broader analysis of the viability of the Clause in scenarios where actors' behaviour is further from the ideal.

  1. CTRL+F 'parliament had the power to make any law except any law that bound its successors' to see Wikipedia's summary of this topic.

"~80% of the applications are speculative, from people outside the EA community and don’t even really understand what we do..."

 

Out of interest - do you folks tend to hire outside the EA community? And how much does involvement in EA affect your evaluation of applications?

I ask as I know some really smart and talented people working on development outside of EA who could be great founders, and I'd like to know if it's worth encouraging them to apply.

I should clarify - when I pocket something, it ends up in my automatic queue of things to read. That way, I don't really have to think about what to read next; the things I want to read just pop up anyway.

Not sure if you've already tried it, but I find Pocket and Audible really help with this. They mean I can just pop something on my headphones whenever I'm walking anywhere, without needing to sit down and decide to read it.

Cuts back on the activation energy, which in turn increases how much I 'read'.

Looking for an accountability buddy:

I’m working on some EA-relevant research right now, but I’m finding it hard to stay motivated, so I’m looking for an accountability buddy.

My thought is that we could set ~4hrs a week where we commit to call and work on our respective projects, though I’m happy to be flexible on the amount of time.

If you’re interested, please reach out in the comments or DM me.

NB: One reason this might be tractable is that lots of non-EA folks are working on data protection already, and we could leverage their expertise.

Focusing more on data governance:

GovAI now has a full-time researcher working on compute governance. Chinchilla's Wild Implications suggests that access to data might also be a crucial leverage point for AI development. However, from what I can tell, there are no EAs working full time on how data protection regulations might help slow or direct AI progress. This seems like a pretty big gap in the field.

What's going on here? I can see two possible answers:

  • Folks have suggested that compute is relatively easy to govern (eg). Someone might have looked into this and decided data is just too hard to control, and we're better off putting our time into compute.
  • Someone might already be working on this and I just haven't heard of it.

If anyone has an answer to this I'd love to know!

No Plans for Misaligned AI:

This talk by Jade Leung got me thinking - I've never seen a plan for what we do if AGI turns out to be misaligned.

The default assumption seems to be something like "well, there's no point planning for that, because we'll all be powerless and screwed". This seems mistaken to me. It's not clear that we'll be so powerless that we have absolutely no ability to encourage a trajectory change, particularly in a slow takeoff scenario. Given that most people weigh alleviating suffering more heavily than promoting pleasure, this is especially valuable work in expectation, as it might help us change outcomes from a 'very, very bad world' to a 'slightly negative world'. This also seems pretty tractable - I'd expect ~10hrs of thinking about this could get us a very barebones playbook.

Why isn't this being done? I think there are a few reasons:

  • Like suffering-focused ethics, it's depressing.
  • It seems particularly speculative - most of the 'humanity becomes disempowered by AGI' scenarios look pretty sci-fi, so serious academics don't want to consider them.
  • People assume, mistakenly IMO, that we're just totally screwed if AI is misaligned.

Longtermist legal work seems particularly susceptible to the Cannonball Problem, for a few reasons:

  • Changes to hard law are difficult to reverse - legislatures rarely consider an issue more than a couple of times a decade, and the judiciary takes even longer.
  • At the same time, legal measures which once looked good can quickly become ineffectual due to shifts in underlying political, social or economic circumstances. 
  • Taken together, this means that bad laws have a long time to do a lot of harm, so we need to be careful when putting new rules on the books.
  • This is worsened by the fact that we don’t know what ideal longtermist governance looks like. In a world of transformative AI, it’s hard to tell if the rule of law will mean very much at all. If sovereign states aren’t powerful enough to act as leviathans, it’s hard to see why influential actors wouldn’t just revert to power politics.

Underlying all of this are huge, unanswered questions in political philosophy about where we want to end up. A lack of knowledge about our final destination makes it harder to come up with ways to get there.

I think this goes some way to explaining why longtermist lawyers only have a few concrete policy asks right now despite admirable efforts from LPP, GovAI and others.
