CaroJ

PhD researcher in Public Policy @ University of Oxford
Pursuing a doctoral degree (e.g. PhD)
921 · Oxford, UK · Joined Jan 2018

Bio

Working on a Ph.D. in Public Policy at Oxford. Previously director of strategic research and partnerships at CHAI at Berkeley, project manager and policy researcher at The Future Society in France, and UN youth delegate in climate negotiations.

Comments
95

Topic Contributions
1

This post is beautiful, rational, and useful - thank you!

As the beginning of a reply to the question "What does a “realistic best case transition to transformative AI” look like?", we could maybe say that a worthwhile intermediate goal is reaching a Long Reflection, where we can use safe (probably narrow) AIs to help us build a utopia for the many years to come.

Congrats on launching cFactual; it sounds great!

Exploring how you can help launch small or mega projects could also be interesting. If we expect this century or decade to be "wild", the EA community will create many new organizations and projects to deal with new challenges. It would be great to help these projects have a solid ToC, governance structure, etc., from the beginning. I understand that these projects may be on a slightly longer timeline (e.g. "the first year of the creation of a new AI governance organization...") but it could be valuable. I'd personally feel more confident about launching a new large project if I had cFactual to help!

(However, it is very difficult to hire taxis to and from there, which often takes 30 minutes.) Edit: people can wait up to 1.5 hours to get a taxi from Wytham, which isn't super practical.

I agree with Adam here that it's better to host all attendees in one place during retreats.

However, I am not sure how many bedrooms Wytham has. It could be that many attendees would have to rent rooms outside of Wytham anyway, which makes the deal worse.

Agreed that it would be very helpful to have a widely distributed survey about this, ideally with in-depth conversations. Quantitative and qualitative data seem to be lacking, while there seems to be a lot of anecdotal evidence. Wondering if CEA or RP could lead such work, or whether an independent organization should do it.

Very excited about this competition! Is it still happening?

In this case, it seems like a very good strategy for the world, too, in that it doesn't politicize one issue too much (like climate change has been in the US because it was tied to Democrats instead of both sides of the aisle).

Answer by CaroJ · Nov 14, 2022

More opportunities:

  • The AI Safety Microgrant Round: "We are offering microgrants up to $2,000 USD with the total size of this round being $6,000 USD"; "We believe there are projects and individuals in the AI Safety space who lack funding but have high agency and potential."; "Fill out the form at Microgrant.ai by December 1, 2022."
  • Nonlinear Emergency Funding: "Some of you counting on Future Fund grants are suddenly finding yourselves facing an existential financial crisis, so, inspired by the Covid Fast Grants program, we’re trying something similar for EA. If you are a Future Fund grantee and <$10,000 of bridge funding would be of substantial help to you, fill out this short form (<10 mins) and we’ll get back to you ASAP." 

+1 for way more investigations and background checks for major donations, megaprojects, and associations with EA.

I agree that the tone was too tribalistic, but the content is correct.

(Seems a bit like a side-topic, but you can read more about Leverage on this EA Forum post and, even more importantly, in the comments. I hope that's useful for you! The comments definitely changed my views - negatively - about the utility of Leverage's outputs and some cultural issues.)
