Pronouns: he/him
Leave me anonymous feedback: https://docs.google.com/forms/d/e/1FAIpQLScB5R4UAnW_k6LiYnFWHHBncs4w1zsfpjgeRGGvNbm-266X4w/viewform
Contact me at: johnmichaelbridge[at]gmail[dot]com
Epistemic status: Uncertain and speculative. I try not to caveat my claims too much because it makes everything harder to read. If I've worded something too strongly, feel free to ask for clarification.
For some reason the Forum isn't letting me update the post directly, so I want to highlight another core assumption which I didn't make explicit in the original post. Once the Forum starts working again I'll slot this into the post itself.
Later in the sequence, I'm planning to consider how a deterioration in the Rule of Law following the development of WGAI might impact the viability of the Clause. This could vary considerably by jurisdiction. For example, English constitutional law allows the legislature to break its own rules[1] if it wants to, giving Britain a unique ability amongst potential Developer host states to render the Clause inert simply by legislating that the Agreement was unlawful, or that the Developer's assets are property of the Crown.
For the moment, however, I am assuming that the Rule of Law will be largely untouched by the development of WGAI. I am doing this because it is important to explore how things might play out in a best-case scenario, where all of the relevant actors decide to play by the rules. The conclusions in my post can then inform a broader analysis of the viability of the Clause in scenarios where actors' behaviour is further from the ideal.
[1] CTRL+F 'parliament had the power to make any law except any law that bound its successors' to see Wikipedia's summary of this topic.
~80% of the applications are speculative, from people outside the EA community who don't even really understand what we do...
Out of interest - do you folks tend to hire outside the EA community? And how much does EA involvement affect your evaluation of applications?
I ask as I know some really smart and talented people working on development outside of EA who could be great founders, and I'd like to know if it's worth encouraging them to apply.
One of the reasons I no longer donate to EA Funds so often is that I think their funds lack a clearly stated theory of change.
For example, with the Global Health and Development fund, I'm confused why EAF hasn't updated at all in favour of growth-promoting systemic change like liberal market reforms. It seems like there is strong evidence that economic growth is a key driver of welfare, but the fund hasn't explained publicly why it prefers one-shot health interventions like bednets. It may well have good reasons for this, but there is absolutely no literature explaining the fund's position.
The LTFF has a similar problem, insofar as it largely funds researchers doing obscure AI Safety work. Nowhere does the fund openly state: "we believe one of the most effective ways to promote long-term human flourishing is to support high-quality academic research in the field of AI Safety, both for the purposes of sustainable field-building and in order to increase our knowledge of how to make sure increasingly advanced AI systems are safe and beneficial to humanity." Instead, donors are basically left to infer this theory of change from the grants themselves.
I don't think we can expect to drastically increase the take-up of funds without this sort of transparency. I'm sure the fund managers have thought about this privately, and that they have justifications for not making their thoughts public, but asking people to pour thousands of pounds/dollars a year into a black box is a very, very big ask.