Working on a Ph.D. in Public Policy at Oxford. Previously director of strategic research and partnerships at CHAI (UC Berkeley), project manager and policy researcher at The Future Society in France, and UN youth delegate in climate negotiations.
Thanks so much for your work, Will! I think this is the right decision given the circumstances, and one that will help EV move in a good direction. I know some mistakes were made, but I still want to recognize your positive influence.
I'm eternally grateful to you for getting me to focus on the question "How do we do the most good with our limited resources?"
I remember how I first heard about EA.
An unassuming flyer taped to the philosophy building wall caught my eye: “How to do the most good with your career?”
It was October 2013, midterms week at Tufts University, and I was hustling between classes, focused on nothing but grades and graduation. But that disarmingly simple question gave me pause. It felt like an invitation to think bigger.
Curiosity drew me to the talk advertised on the flyer, given by some Oxford professor named Will MacAskill. I arrived to find just two other students in the room. None of us knew that Will would become so influential.
What followed was no ordinary lecture, but rather a life-changing conversation that has stayed with me for the past decade. Will challenged us to zoom out and consider how we could best use our limited time and talents to positively impact the world. With humility and nuance, he focused not on prescribing answers, but on asking the right questions.
Each of us left that classroom determined to orient our lives around doing the most good. His talk sent me on a winding career journey guided by this question. I dabbled in climate change policy before finding my path in AI safety thanks to 80K's coaching.
Ten years later, I’m still asking myself the question Will posed back in 2013: How can I use my career to do the most good? It shapes every decision I make. (I'm arguably a bit too obsessed with it!) I know countless others can say the same.
So thank you, Will, for inspiring generations of people with your catalytic question. The ripples from that day continue to spread. Excited for what you'll do next!
I've used the "Calm me" feature multiple times. I find it very easy to use during the day, since it takes just a few minutes. I don't have panic attacks, but I found it helpful to have a tool to reduce stress. It was especially helpful around the release of GPT-4, when I was dealing with lots of worries about the speed of AI progress. After a couple of exercises, I could return to my AI governance work with renewed focus and resolve.
I'm very supportive of MindEase's growth and its focus on panic attacks, but I honestly found it very useful as a general "relaxing and calming down" app.
My quick initial research:
The UK's influence on DeepMind, a subsidiary of US-based Alphabet Inc., is substantial despite its parent company's origin. This influence stems from DeepMind's location in the UK (the jurisdiction principle), which mandates its compliance with the country's stringent data protection laws, such as the UK GDPR. Additionally, the UK's Information Commissioner's Office (ICO) has shown it can enforce these regulations, as exemplified by a ruling on a collaboration between DeepMind and the Royal Free NHS Foundation Trust. The UK government's interest in AI regulation and DeepMind's work with sensitive healthcare data further subject the company to UK regulatory oversight.
However, the recent merger of DeepMind with Google Brain, an American entity, may reduce the UK's direct regulatory influence. Despite this, the UK can still affect DeepMind's operations via its general AI policy, procurement decisions, and data protection laws. Moreover, voices like Matt Clifford, the founder and CEO of Entrepreneur First, suggest a push for greater UK sovereign control over AI, which could influence future policy decisions affecting companies like DeepMind.
I'm looking for insights on the potential regulatory implications this could have, especially in relation to the UK's AI regulation policies.
This post is beautiful, rational, and useful. Thank you!
As the beginning of a reply to the question "What does a “realistic best case transition to transformative AI” look like?", we could maybe say that a worthwhile intermediary goal is getting to a Long Reflection, in which we can use safe (probably narrow) AIs to help us build a Utopia for the years to come.
Congrats on launching cFactual; it sounds great!
Exploring how you can help launch small or mega projects could also be interesting. If we expect this century or decade to be "wild", the EA community will create many new organizations and projects to deal with new challenges. It would be great to help these projects have a solid ToC, governance structure, etc., from the beginning. I understand that these projects may be on a slightly longer timeline (e.g. "the first year of the creation of a new AI governance organization...") but it could still be very valuable. I'd personally feel more confident about launching a new large project if I had cFactual to help!
(However, it is very difficult to get taxis to and from there, and the trip often takes 30 minutes.) Edit: people can wait up to an hour and a half for a taxi from Wytham, which isn't very practical.
I agree with Adam here that it's better to host all attendees in one place during retreats.
However, I am not sure how many bedrooms Wytham has. It could be that many attendees have to rent rooms outside of Wytham anyway, which would make the deal worse.
Agreed that it would be very helpful to have a widely distributed survey about this, ideally with in-depth conversations. Quantitative and qualitative data seem to be lacking, while there seems to be a lot of anecdotal evidence. Wondering if CEA or RP could lead such work, or whether an independent organization should do it.
I agree that these decisions are going in the right direction. I think their resignations should have come earlier, given the severity of the conflicts of interest with FTX, the questions about their judgment in these situations, and their own FTX-related legal problems.
(I still appreciate Nick and Will as individuals and immensely value their contributions to the field.)