Working on a Ph.D. in Public Policy at Oxford. Previously director of strategic research and partnerships at CHAI at Berkeley, project manager and policy researcher at The Future Society in France, and UN youth delegate in climate negotiations.
Congrats on launching cFactual; it sounds great!
Exploring how you could help launch small projects or megaprojects could also be interesting. If we expect this decade or century to be "wild", the EA community will create many new organizations and projects to deal with new challenges. It would be great to help these projects have a solid theory of change (ToC), governance structure, etc., from the beginning. I understand that these projects may operate on a slightly longer timeline (e.g. "the first year of a new AI governance organization..."), but it could be great. I'd personally feel more confident about launching a new large project if I had cFactual to help!
(However, it is very difficult to get taxis to and from there; the trip often takes 30 minutes.) Edit: people can wait up to 1.5 hours for a taxi from Wytham, which isn't very practical.
I agree with Adam that it's better to host all attendees in one place during retreats.
However, I'm not sure how many bedrooms Wytham has. It could be that many attendees have to rent rooms outside Wytham anyway, which makes the deal worse.
Agreed that it would be very helpful to have a widely distributed survey about this, ideally with in-depth conversations. Quantitative and qualitative data seem to be lacking, while there seems to be a lot of anecdotal evidence. Wondering if CEA or RP could lead such work, or whether an independent organization should do it.
Very excited about this competition! Is it still happening?
In this case, it seems like a very good strategy for the world, too, in that it doesn't politicize one issue too much (the way climate change has become politicized in the US because it was tied to Democrats rather than to both sides of the aisle).
+1 for far more investigations and background checks for major donations, megaprojects, and associations with EA.
I agree that the tone was too tribalistic, but the content is correct.
(Seems a bit like a side-topic, but you can read more about Leverage on this EA Forum post and, even more importantly, in the comments. I hope that's useful for you! The comments definitely changed my views - negatively - about the utility of Leverage's outputs and some cultural issues.)
This post is beautiful, rational, and useful - thank you!
As the beginning of a reply to the question "What does a 'realistic best case transition to transformative AI' look like?", we could perhaps say that a worthwhile intermediate goal is reaching a Long Reflection, in which we can use safe (probably narrow) AIs to help us build a Utopia for the many years to come.