I mentioned a few months ago that I was planning to resign from the board of EV
UK: I’ve now officially done so.
Since last November, I’ve been recused from the board on all matters associated
with FTX and related topics, which has ended up being a large proportion of
board business. (This is because the recusal affected not just decisions
directly related to the collapse of FTX, but also many other decisions for
which the collapse’s impact on EV UK was important context.) I know I initially
said that I’d wait until the board had more capacity, but trustee recruitment
has moved more slowly than I’d anticipated, and given the ongoing recusal I
didn’t expect to be able to add much capacity for the foreseeable future, so it
felt like a natural time to step down.
It’s been quite a ride over the last eleven years. Effective Ventures has grown
to a size far beyond what I expected, and I’ve felt privileged to help it on
that journey. I deeply respect the rest of the board, and the leadership teams
at EV, and I’m glad they’re at the helm.
Some people have asked me what I’m currently working on, and what my plans are.
My time this year has been spread across a number of different things,
including fundraising, helping out other EA-adjacent public figures, supporting
GPI, CEA, and 80,000 Hours, writing additions to What We Owe The Future, and
helping with the print textbook version of utilitarianism.net that’s coming out
next year.
It’s also personally been the toughest year of my life; my mental health has
been at its worst in over a decade, and I’ve been trying to deal with that, too.
At the moment, I’m doing three main things:
- Some public engagement, in particular around the WWOTF paperback and
foreign-language book launches, and at EAGxBerlin. This has been and will be
lower-key than the media push around WWOTF last year, and more focused on
in-person events; I’m also more focused on fundraising than I was before.
- Research into “trajectory changes”: in particular, ways of increasing the
wellbeing of future generations other than ‘standard’ existential risk
mitigation strategies, with a focus on issues that arise even if we solve AI
alignment, like digital sentience and the long reflection. I’m also doing some
learning to try to get to grips with how to update properly on the latest
developments in AI, in particular with respect to the probability of an
intelligence explosion in the next decade, and on how hard we should expect AI
alignment to be.
- Gathering information to decide what I should focus on next. In the medium
term, I still plan to be a public proponent of EA-as-an-idea, both because I
think that plays to my comparative advantage and because I’m worried about
people neglecting “EA qua EA”. If anything, all the crises faced by EA and by
the world in the last year have reminded me of just how deeply I believe in EA
as a project, and how the message of taking a thoughtful, humble, and
scientific approach to doing good is more important than ever. The precise
options I’m considering are still quite wide-ranging, including: a podcast
and/or YouTube show and/or Substack; a book
on effective giving; a book on evidence-based living; or deeper research into
the ethics and governance questions that arise even if we solve AI alignment. I
hope to decide on that by the end of the year.
(Clarification about my views in the context of the AI pause debate)
I'm finding it hard to communicate my views on AI risk. I feel like some people
are responding to the general vibe they think I'm giving off rather than the
actual content. Other times, it seems like people will focus on a narrow
snippet of my comments or posts and respond to it without recognizing the
context. For
example, one person interpreted me as saying that I'm against literally any AI
safety regulation. I'm not.
For full disclosure, my views on AI risk can be loosely summarized as follows:
* I think AI will probably be very beneficial for humanity.
* Nonetheless, I think that there are credible, foreseeable risks from AI that
could do vast harm, and we should invest heavily to ensure these outcomes
don't happen.
* I also don't think technology is uniformly harmless. Plenty of technologies
  have caused net harm. Factory farming is a giant net harm that might even
  have made our entire industrial civilization a mistake!
* I'm not blindly against regulation. I think all laws can and should be viewed
  as forms of regulation, and I don't think it's feasible for society to exist
  without laws.
* That said, I'm also not blindly in favor of regulation, even for AI risk. You
  have to show me that the benefits outweigh the harms.
* I am generally in favor of thoughtful, targeted AI regulations that align
incentives well, and reduce downside risks without completely stifling
innovation.
* I'm open to extreme regulations and policies if or when an AI catastrophe
seems imminent, but I don't think we're in such a world right now. I'm not
persuaded by the arguments that people have given for this thesis, such as
Eliezer Yudkowsky's AGI ruin post.