This is a special post for quick takes by Sam Clarke.
(Post 3/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Some hot takes on AI governance field-building strategy
More people should consciously upskill as ‘founders’, i.e. people who form and lead new teams/centres/etc. focused on making AI go well
A case for more founders: plausibly in crunch time there will be many more people/teams within labs/govs/think-tanks/etc. that will matter for how AI goes. Would be good if those teams were staffed with thoughtful and risk-conscious people.
What I think is required to be a successful founder:
Strong in strategy (to steer their team in useful directions), management (for obvious reasons), and whatever object-level work their team is doing
Especially for teams within existing institutions, starting a new team requires skill in stakeholder management and consensus building.
Concrete thing you might consider doing: if you think you might want to be a founder, and you agree with the above list of skills, think about how to close your skill gaps
More people should consciously upskill for the “AI endgame” (aka “acute risk period” aka “crunch time”). What might be different in the endgame and what does this imply about what people should do now?
Lots of ‘task force-style advising’ work
→ people should practise it now
Everyone will be very busy, especially senior people, so it won’t work as well to just defer
→ build your own models
More possible to mess things up real bad
→ start thinking harder about worst-case scenarios, red-teaming, etc. now, even if it seems a bit silly to e.g. spend time tightening up your personal infosec
The world may well be changing scarily fast
→ practice decision-making under pressure and uncertainty. Strategy might get even harder in the endgame
Being able to juggle 6 different kinds of things might be more valuable than being able to do one thing really well, because there might just be lots of different kinds of things to do (cf. ‘task force-style advising’)
→ specialise less? But specialisation tends to be pretty valuable, so I’m not sure this carries much weight overall
(Post 4/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Some exercises for developing good judgement
I’ve spent a bit of time over the last year trying to form better judgement. Dumping some notes here on things I tried or considered trying, for future reference.
Jump into the mindset of “the buck stops at me” for working out whether some project takes place, as if you were the grantmaker having to make the decision. Ask yourself: “wait, should this actually happen?”[1]
(Rather than “does anything jump out as incorrect” or “do I have any random comments/ideas”—which are often helpful mindsets to be in when giving feedback to people, but don’t really train the core skill of good judgement.)
I think forecasting trains a similar skill to this. I got some value from making some forecasts in the Metaculus Beginners’ Tournament.
Find Google Docs where people (whose judgement you respect) have left comments and an overall take on the promisingness of the idea. Hide their comments and form your own take. Compare. (To make this a faster process, pick a doc/idea where you have enough background knowledge to answer without looking up loads of things).
Ask people/orgs for things along the lines of [minimal trust investigations | grant reports | etc.] that they’ve written up. Do it yourself. Compare.
Do any of the above with a friend; write your timeboxed answers then compare reasoning.
[1] I think this framing of the exercise might have been mentioned to me by Michael Aird.
An effective mental health intervention, for me, is listening to a podcast which ideally (1) discusses the thing I'm struggling with and (2) has EA, Rationality or both in the background. I gain both in-the-moment relief, and new hypotheses to test or tools to try.
Especially since it would be scalable, this makes me think that creating an EA mental health podcast would be an intervention worth testing - I wonder if anyone is considering this?
In the meantime, I'm on the lookout for good mental health podcasts in general.
This does sound like an interesting idea. And my impression is that many people found the recent mental health related 80k episode very useful (or at least found that it "spoke to them").
Maybe many episodes of Clearer Thinking could also help fill this role?
Maybe one could promote specific podcast episodes of this type, see if people found them useful in that way, and if so then encourage those podcasts to have more such eps or a new such podcast to start?
Though starting a podcast is pretty low-cost, so it'd be quite reasonable to just try it without doing that sort of research first.
Incidentally, that 80k episode and some from Clearer Thinking are the exact examples I had in mind!
"Maybe one could promote specific podcast episodes of this type, see if people found them useful in that way, and if so then encourage those podcasts to have more such eps or a new such podcast to start?"
As a step towards this, and in case anyone else finds it independently useful, here are the episodes of Clearer Thinking that I recall finding helpful for my mental health (along with the issues they helped with).
#11 Comfort Languages and Nuanced Thinking (for thinking through what I need, and what loved ones need, in difficult times)
#21 Antagonistic Learning and Civilization (had some useful thoughts about how education has taught me that breaking rules makes me bad, whereas in reality, breaking rules is just a cost to include in my calculation of what the best action is)
#22 Self-Improvement and Research Ethics (getting more traction on why my attempts at self-improvement often don't work)
#25 Happiness and Hedonic Adaptation (hedonic adaptation seems like a very important concept for living a happier life, and this is the best discussion of it that I've heard)
#26 Past / Future Selves and Intrinsic Values (I recall something being useful about how I relate to past and future me)
#43 Online and IRL Relationships (relationships are a big part of my happiness and this had a very dense collection of insights about how to do relationships well - other dense insights have come from reading Nonviolent Communication and doing Circling with partners)
#54 Self-Improvement and Behavior Change (lots of stuff, most important was realising that many "negative" behaviour patterns are actually bringing you some benefit in a convoluted way, and until you identify a substitute for that benefit, they'll be very hard to change)
#60 Heaven and hell on earth (thinking about the value of "bad" mental states like anxiety and depression)
#65 Utopia on earth and morality without guilt (thinking through how I relate to my desire to do good, guilt vs bright desire; the handle of "clingy-ness" for a certain flavour of mental experiences)
#68 How to communicate better with the people in your life (getting more traction on why some social interactions leave me feeling disconnected/isolated)
I've been thinking about starting such an EA mental health podcast for a while now (each episode would feature a guest describing their history with EA and mental health struggles, similar to the 80k episode with Howie).
However, every EA whom I've asked to interview—only ~5 people so far, to be fair—was concerned that such an episode would be net negative for their career (by, e.g., becoming less attractive to future employers or collaborators). I think such concerns are not unreasonable though it seems easy to overestimate them.
Generally, there seems to be a tradeoff between how personal the episode is and how likely the episode is to backfire on the interviewee.
One could mitigate such concerns by making episodes anonymous (and perhaps anonymizing the voice as well). Unfortunately, my sense is that this would make such episodes considerably less valuable.
I'm not sure how to navigate this; perhaps there are solutions I don't see. I also wonder how Howie feels about having done the 80k episode. My guess is that he's happy that he did it; but if he regrets it that would make me even more hesitant to start such a podcast.
I thought about this a bunch before releasing the episode (including considering various levels of anonymity). Not sure that I have much to say that's novel but I'd be happy to chat with you about it if it would help you decide whether to do this.[1]
The short answer is:
Overall, I'm very glad we released my episode. It ended up getting more positive feedback than I expected and my current guess is that in expectation it'll be sufficiently beneficial to the careers of other people similar to me that any damage to my own career prospects will be clearly worth it.
It was obviously a bit stressful to put basically everything I've ever been ashamed of onto the internet :P, but overall releasing the episode has not been (to my knowledge) personally costly to me so far.
My guess is that the episode didn't do much harm to my career prospects within EA orgs (though this is in part because a lot of the stuff I talked about in the episode was already semi-public knowledge w/in EA and any future EA employer would have learned about them before deciding to hire me anyway).
My guess is that if I want to work outside of EA in the future, the episode will probably make some paths less accessible. For example, I'm less sure the episode would have been a good idea if it was very important to me to keep U.S. public policy careers on the table.
[1] Email me if you want to make that happen since the Forum isn't really integrated into my workflow.
Thanks, Howie! Sent you an email.
(Post 1/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Some key uncertainties in AI governance field-building
According to me, these are some of the key uncertainties in AI governance field-building—questions which, if we had better answers to them, might significantly influence decisions about how field-building should be done.
How best to find/upskill more people to do policy development work?
I think there are three main skillsets involved in policy development work:
Macrostrategy
“Traditional” policy development work (e.g. detailed understanding of how policymaking works within a given institution, in order to devise feasible policy actions)
Impact focus (i.e. working to improve lasting impacts of AI in a scope-sensitive way)
A more concrete question is: which of these skillsets to prioritise in talent search/selection vs upskilling? E.g. do you take people already skilled in X and Y and give them Z, or people skilled in X and Z and give them Y, etc.?
What are the most important profiles that aren’t currently being hired for, but nonetheless might matter?
Reasons why this seems important to get clarity on:
Focus on neglected aspects of the talent pipeline. People want to get hired, so will be trying to skill up for positions that are currently being hired for. Whereas for future positions—especially for positions that will never be “hired for” per se (e.g. leading a policy team that wouldn’t exist unless you pitched it), and “positions with a deadline”[1]—the standard career incentives to skill up for them aren’t as strong. Also, some people currently hiring are already trying to solve their own bottlenecks (e.g. designing efficient hiring processes to identify talent) whereas future people aren’t.[2]
Avoid myopically optimising the talent pipeline. The world will probably change a lot in the run up to advanced AI. This will affect the value of different talent pipeline interventions in three ways:
There will likely be more people interested in AI (governance). So, more people who want to do things, and hence more value in work that usefully leverages a large amount of labour.
The people who become interested may have different skills and inclinations, compared to the current talent pool. This will change the future comparative advantage of people we can currently find/upskill.
More concretely, you might think that people currently working on AI governance are disproportionately inclined towards macro/strategy, relative to the talent pool in, say, 5 years’ time. Optimising for ticking all the talent boxes by the end of the year might look like finding/upskilling people with deep knowledge in certain areas that we’re lacking (e.g. more lawyers). But if you instead think these people will just be drawn to the field once there are important questions they can answer, and that the community can usefully leverage their knowledge, this could suggest instead doubling down on building a community that’s excellent at strategy [I’m very uncertain about this particular line of reasoning, but think there might be some important thread to pull on here, more generally]
The nature of important work changes as we move into the AI endgame. E.g. probably less field-building, more public comms, more founding of new institutions, more policy development, etc.
To what extent should talent pipeline efforts treat AI governance as a (pre-)paradigmatic field?
More concrete versions of this question, that are all trying to get at the same thing:
On the current margin (now and in the future), how much of the most valuable work is crank-turn-y? By “crank-turn-y” I mean “work which can be delegated to sensible people even if they aren’t deeply integrated into the existing field”.
On the current margin (now and in the future), how high a premium should talent search/development efforts put on the macro/strategy aptitude?
On the current margin (now and in the future), how much of the most valuable work looks like contributing to an intellectual project that has been laid out (rather than doing the initial charting out of that intellectual project)?
Answers to these questions seem like they should affect how quickly the field scales up, and how much we try to attract people who are excellent at crank-turn-y work vs strategy work. I lightly hold the intuition that erring on this question is one of the main ways this field could mess up.
[1] Re: “positions with a deadline”: it seems plausible to me that there will be these windows of opportunity when important positions come up, and if you haven’t built the traits you need by that time, it’s too late. E.g. more talent very skilled at public comms would probably have been pretty useful in Q1-2 2023.
[2] Counterpoint: the strongest version of this consideration assumes a kind of “efficient market hypothesis” for people building up their own skills. If people aren’t building up their own skills efficiently, then there could still be significant gains from helping them to do so, even for positions that are currently being hired for. Still, I think this consideration carries some weight.
(Post 6/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Some heuristics for prioritising between talent pipeline interventions
Explicit backchaining is one way to do prioritisation. I sometimes forget that there are other useful heuristics, like:
Cheap to pilot
E.g. doesn't require new infrastructure or making a new hire
Cost is easier to estimate than benefit, so lower cost things tend to be more likely to actually happen
Visualise that some person or org has actually been convinced to trial the thing. Imagine the conversation with that decision-maker. What considerations actually matter to them?
Is there someone else who would do most of the heavy lifting?
(Post 2/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Misc things it seems useful to do/find out
To inform talent development activities: talk with relevant people who have skilled up. How did they do it? What could be replicated via talent pipeline infrastructure? Generally talk through their experience.
Kinds of people to prioritise: those who are doing exceptionally well; those who have grown quite recently (might have better memory of what they did)
To inform talent search activities: talk with relevant people—especially senior folks—about what got them involved. This could feed into earlier stage talent pipeline activities
Case studies of important AI governance ideas (e.g. model evals, importance of infosec) and/or pipeline wins. How did they come about? What could be replicated?
How much excess demand is there for fellowship programs? Look into the strength of applications over time. This would inform how much value there is in scaling fellowships.
Figure out whether there is a mentorship bottleneck.
More concretely: would it be overall better if some of the more established AI governance folk spent a few more hours per month on mentorship?
Thing to do: very short survey asking established AI governance people how many hours per month they spend on mentorship.
Benefits of mentorship:
For the mentee: fairly light touch involvement can go a long way towards bringing them up to speed and giving them encouragement.
For the mentor: learn about fit for mentorship/management. Can be helpful for making object-level progress on work.
These benefits are often illegible and delayed in time, so a priori likely to be undersupplied.
If there’s a mentorship bottleneck, it might be important to solve ~now. The number of AI governance jobs is likely going to rise dramatically over the coming years—so having more thoughtful, risk-conscious people who are better placed to land those roles is more urgent than you might think if only considering having enough people by the acute risk period.
If there is a mentorship bottleneck, how might one actually go about solving it? An obvious idea is to nudge potential mentors to consider:
Asking GovAI, who have been collecting expressions of interest for RAs, whether there’s anyone who might be a good fit as your mentee
Posting on the forum or otherwise broadcasting what kind of thing a potential mentee could do that might make you excited about mentoring them (e.g. write a review of report X, or write a memo on topic Y), and that (e.g.) if they do that they should send it to you, and you'll at least take a 30 minute call with them
Mentoring someone on a summer program (e.g. GovAI Fellowship, ERIs, SERI MAGS, HAIST/MAIA programs, AI Safety Camp, …)
Ways of framing EA that (extremely anecdotally*) make it seem less ick to newcomers. These are all obvious/boring; I'm mostly recording them here for my own consolidation
EA as a bet on a general way of approaching how to do good, that is almost certainly wrong in at least some ways—rather than a claim that we've "figured out" how to do the most good (like, probably no one claims the latter, but sometimes newcomers tend to get this vibe). Different people in the community have different degrees of belief in the bet, and (like all bets) it can make sense to take it even if you still have a lot of uncertainty.
EA as about doing good on the current margin. That is, we're not trying to work out the optimal allocation of altruistic resources in general, but rather: given how the rest of the world is spending its money and time to do good, which approaches could do with more attention? Corollary: you should expect to see EA behaviour changing over time (for this and other reasons). This is a feature not a bug.
EA as diverse in its ways of approaching how to do good. Some people work on global health and wellbeing. Others on animal welfare. Others on risks from climate change and advanced technology.
These frames can also apply to any specific cause area.
*like, I remember talking to a few people who became more sympathetic when I used these frames.
I like the thinking in some ways, but think there are also some risks. For instance, emphasising EA being diverse in its ways of doing good could make people expect it to be more so than it actually is, which could lead to disappointment. In some ways, it could be good to be upfront with some of the less intuitive aspects of EA.
Agreed, thanks for the pushback!
(Post 5/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Laundry list of talent pipeline interventions
More AI governance groups/programs at universities
Run workshops on the most marginally valuable aptitudes
E.g. a macrostrategy workshop could look like: people tell concrete stories about how things could go badly, then backchain to what we should do
Run bootcamps on particularly important topics, e.g. compute
Help bring more people up to speed in most important areas
Create better resources for teaching low-hanging fruit skills
E.g. resources for learning how to do reasoning transparency—currently this is pretty hard to learn, but seems like one of the more teachable skills
E.g. compile zero-shot tips that people occasionally just miss, like "don't read books from cover to cover"
Tag those resources onto existing programs
AI-governance-careers.com
"Ambitious talent search"
The best way of predicting happiness and success on the job is to actually do the job → hackathon-type stuff. “This is the problem. Here’s the internet. Here are some resources. Go nuts.”
Note to self: more detailed but less structured version of these notes here.