Hi! I'm Cullen. I've been a Research Scientist on the Policy team at OpenAI since August. I'm also a Research Affiliate at the Centre for the Governance of AI at the Future of Humanity Institute, where I interned in the summer of 2018.

I graduated from Harvard Law School cum laude in May 2019. There, I led the Harvard Law School and Harvard University Graduate Schools Effective Altruism groups. Prior to that, I was an undergraduate at the University of Michigan, where I majored in Philosophy and Ecology & Evolutionary Biology. I'm a member of Giving What We Can, One For The World, and Founder's Pledge.

Some things I've been thinking a lot about include:

  1. How to make sure AGI benefits everyone
  2. Law and AI development
  3. Law's relevance for AI policy
  4. Whether law school makes sense for EAs
  5. Social justice in relation to effective altruism

I'll be answering questions periodically this weekend! All answers come in my personal capacity, of course. As an enthusiastic member of the EA community, I'm excited to do this! :D

[Update: as the weekend ends, I will be slower replying but will still try to reply to all new comments for a while!]

  5. Social justice in relation to effective altruism

I've been thinking a lot about this recently too. Unfortunately I didn't see this AMA until now but hopefully it's not too late to chime in. My biggest worry about SJ in relation to EA is that the political correctness / cancel culture / censorship that seems endemic in SJ (i.e., there are certain beliefs that you have to signal complete certainty in, or face accusations of various "isms" or "phobias", or worse, get demoted/fired/deplatformed) will come to affect EA as well.

I can see at least two ways this could happen to EA:

  1. Whatever social dynamic is responsible for this happening within SJ applies to EA as well, and EA will become like SJ in this regard for purely internal reasons. (In this case EA will probably come to have a different set of politically correct beliefs from SJ that one must profess faith in.)
  2. SJ comes to control even more of the cultural/intellectual "high grounds" (journalism, academia, K-12 education, tech industry, departments within EA organizations, etc.) than it already does, and EA will be forced to play by SJ's rules. (See second link above for one specific scenario that worries me.)

From your answers so far it seems like you're not particularly worried about this. If you have good reasons to not worry about this, please share them so I can move on to other problems myself.

(I think SJ is already actively doing harm because it pursues actions/policies based on these politically correct beliefs, many of which are likely wrong but can't be argued about. But I'm more worried about EA potentially doing this in the future because EAs tend to pursue more consequential actions/policies that will be much more disastrous (in terms of benefits foregone if nothing else) if they are wrong.)

Thanks Wei! This is a very thoughtful comment.

I completely agree that we should be wary of those aspects of SJ as well. I'm not sure that I'm "less" worried about it than you; I do worry about it. However, I have not seen much of this behavior in the EA community so I am not immediately worried and have some reasons to be fairly optimistic in the long run:

  1. Founder effects and strong communal norms towards open discussion in the EA community to which I think most newcomers get pretty heavily inculcated.
  2. Cause prioritization and consequentialism are somewhat incongruous with these things, since many of the things that can get people to be unfairly "canceled" are quite small from an EA perspective.
  3. Heavy influence of and connection to philosophy selects for openness norms as well.
  4. Ability and motivation to selectively adopt the best SJ positions without adopting some of its most harmful practices.

To restate, I would definitely be pretty wary of any attempt to reform EA in a way that seriously endangered norms of civility, open debate, intellectual inquiry, etc. as they currently are practiced. I actually think we do a very good job as a movement of balancing these goals. This is part of why I currently spend more time in EA than SJ.

Founder effects and strong communal norms towards open discussion in the EA community to which I think most newcomers get pretty heavily inculcated.

This does not reassure me very much, because academia used to have strong openness norms but is quickly losing them or has already lost them almost everywhere, and it seems easy for founders to lose their influence (i.e., be pushed out or aside) these days, especially if they do not belong to one of the SJ-recognized marginalized/oppressed groups (and I think founders of EA mostly do not?).

Cause prioritization and consequentialism are somewhat incongruous with these things, since many of the things that can get people to be unfairly “canceled” are quite small from an EA perspective.

One could say that seeking knowledge and maximizing profits are somewhat incongruous with these things, but that hasn't stopped academia and corporations from adopting harmful SJ practices.

Heavy influence of and connection to philosophy selects for openness norms as well.

Again, it doesn't seem like openness norms offer enough protection against whatever social dynamic is operating.

Ability and motivation to selectively adopt the best SJ positions without adopting some of its most harmful practices.

Surely people in academia and business also had the motivation to avoid the most harmful practices, but perhaps didn't have the ability? Why do you think that EA has the ability? I don't see any evidence, at least from the perspective of someone not privy to private or internal discussions, that any EA person has a good understanding of the social dynamics driving adoption of the harmful practices, or (aside from you and a few others I know, who don't seem to be close to the centers of EA) is even thinking about this topic at all.

Example of institutions being taken over by cancel culture and driving out their founders:

Like Andrew Sullivan, who joined Substack after parting ways with New York magazine, and Glenn Greenwald, who joined Substack after resigning from The Intercept, which he co-founded, Yglesias felt that he could no longer speak his mind without riling his colleagues. His managers wanted him to maintain a “restrained, institutional, statesmanlike voice,” he told me in a phone interview, in part because he was a co-founder of Vox. But as a relative moderate at the publication, he felt at times that it was important to challenge what he called the “dominant sensibility” in the “young-college-graduate bubble” that now sets the tone at many digital-media organizations.

I think I agree that academic philosophy tends to have above-average openness norms--but note that academic philosophy has mostly lost them at this point, at least when it comes to topics related to SJ. I can provide examples of this on request; there are plenty to see on Daily Nous.

In a comment from October 2019, Ben Pace stated that there is currently no actionable policy advice the AI safety community could give to the President of the United States. I'm wondering to what extent you agree with this.

If the US President or an influential member of Congress was willing to talk one-on-one with you for a couple hours on the issue of AI safety policy, what advice would you give them?

Hm, I haven't thought about this particular issue a lot. I am more focused on research and industry advocacy right now than government work.

I suppose one nice thing would be to have an explicit area of antitrust leniency carved out for cooperation on AI safety.

(I really should ask you some questions about AI risk and policy/strategy/governance ("Policy" from now on). I was actually thinking a lot about that just before I got sidetracked by the SJ topic.)

  1. My understanding is that aside from formally publishing papers, Policy researchers usually communicate with each other via private Google Docs. Is that right? Would you find it useful to have a public or private forum for Policy discussion similar to the AI Alignment Forum? See also Where are people thinking and talking about global coordination for AI safety?
  2. In the absence of a Policy Forum, I've been posting Policy-relevant ideas to the Alignment Forum. Do you and other Policy researchers you know follow AF?
  3. In this comment I wrote, "Worryingly, it seems that there’s a disconnect between the kind of global coordination that AI governance researchers are thinking and talking about, and the kind that technical AI safety researchers often talk about nowadays as necessary to ensure safety." Would you agree with this?
  4. I'm interested in your thoughts on The Main Sources of AI Risk?, especially whether any of the sources/types of AI risk listed there are new to you, if you disagree with any of them, or if you can suggest any additional ones.

What is your high-level take on social justice in relation to EA?

(For relevant background, I spent ~all of my undergraduate career heavily involved in social justice before discovering EA in law school and then switching primarily to EA.)

A bunch of high-level thoughts:

  1. EA is overall the better ideology/movement due to higher-quality reasoning, prioritization, embrace of economics, and explicit welfarism.
  2. There is probably lots of potential for useful alliances between EAs and SJ people on poverty and animal welfare issues, but I think certain SJ beliefs and practices unfortunately frustrate these. Having EAs who can communicate EA ideas to SJ audiences to form these alliances is both valuable and, in my experience, possible.
  3. SJ has captured a huge number of well-educated people who want to do good in the world. From a strategic perspective, this is both a problem and an opportunity. It is a problem because, in my on-campus experience, there is fairly strong lock-in to an ideology after undergrad, after which point it is hard to "convert" people or persuade them to act outside their ideology. Thus, the prominence of SJ on college campuses frustrates post-college EA movement-building and growth. However, I think we have a much more compelling message, ideology, and community, and with sustained movement growth at colleges we could offer a plausible and attractive alternative worldview to undergrads who are interested in improving the world but also identify the same weaknesses in SJ that I did.
  4. SJ has a lot of approximately true insights into ways that social dynamics can cause harm, but many of them are not compelling EA causes.
  5. EA should probably do a better job of taking seriously leftist critiques of Western philanthropy in the Global South, and of having better responses to them than citing GiveWell cost-effectiveness analyses. (To be clear, I think many people do this; it should just be a salient talking point because it's the most common objection I heard.)
  6. Overall, I would recommend a soft embrace of SJ, which is nothing more than accepting the valid parts of the ideology while also retaining firm cause prioritization. We should also use SJ insights to build a larger, more inclusive movement. We should do all of this while also being careful not to alienate moderates and conservatives who are sympathetic to EA. Again, in my experience at Harvard, I had success communicating to both groups—I sold some very progressive friends on EA while also recruiting some very conservative donors. Cause prioritization is our strength in this sense, since the issues most likely to cause ideological conflicts are also probably not major causes by mainstream EA analysis.

I've been trying to figure out what I find a little uncomfortable about 1-3, as someone who also has links to both communities. I think it's that I personally find it more productive to think about both as frameworks/bodies of work + associated communities, more so than as movements, whereas here it feels like these are being described as tribes (one is presented as overall better than the other; they are presented as competing for talent; there should be alliances). I acknowledge, however, that in both EA and SJ there are definitely many who see these more in the movement/tribe sense.

Through my framing, I find it easier to imagine the kinds of constructive engagements I would personally like to see - e.g. people primarily thinking through lens A adopting valuable insights and methodologies from lens B (captured nicely in your point 4). But I think this comes back to the oft-debated question (in both EA and SJ) of whether EA/SJ is (a) a movement/tribe or (b) a set of ideas/frameworks/body of knowledge. I apologise if I'm misrepresenting any views, or presenting distinctions overly strongly; I'm trying to put my finger on what might be a somewhat subtle distinction, but one which I think is important in terms of how engagement happens.

On the whole I agree with the message that engaging constructively, embracing the most valuable and relevant insights, and creating a larger, more inclusive community is very desirable.



On a technical note (maybe outside the context of this conversation or comparing EA/SJ) and as a slight tangent, I think calling EA and SJ movements is useful and informative if your level of discourse is broad-level EA movement building. I know the word "movement" has a lot of connotations in common discourse, but I think at its core a movement is a group of people achieving goals through collective action.

These groups of people are often tribes, and tribal ties motivate movement membership, but the two are distinct things.

The value of this categorization: EA is a much more consolidated, controlled and purposeful movement than SJ. SJ is more diffuse, lacks centralization but is quite recognizable in terms of topics and kinds of discourse.

Given this diversity, appeals to the broad SJ movement tend to reduce the complexity of arguments (low-fidelity models) in a way that wouldn't work well with EA's high-fidelity models. So I think it's useful to know this in order to understand why engaging with SJ on a movement level is not the ideal situation. At that high level of thinking, it's probably useful to think in terms of movements to get a big picture of the situation.

Of course, SJ (and EA) are not only movements. They are also communities, bodies of knowledge, networks etc. as you mention. These aspects feed into the structure of the movement in important ways. If your level of discourse is different (i.e. thinking about specific cases for collaboration or comparing the frameworks used by the two movements) then thinking on this level is useful.

Thanks Vaidehi, these are very good points.

I agree that SJ is more diffuse and less centralised - I think this is one of the reasons why thinking of it in terms of a movement that one might ally with feels a little unnatural to me. I also agree that EA is more centralised and purposeful.

Your point about how the level of discourse suggests the kind of engagement is also a good one. I think this also links to the issue that (in my view) it's in the nature of EA that there's a 'thick' and a 'thin' version of EA in terms of the people involved. Here 'thick' EA is a movement of people who self-identify as EAs and see themselves as part of a strong social and intellectual community, and who are influenced by movement leaders and shapers.

Then there's a 'thin' version that includes people who might do one or multiple of the following (a) work in EA-endorsed cause areas with EA-compatible approaches (b) find EA frameworks and literature useful to draw on (among other frameworks) (c) are generally supportive of or friendly towards some or most of the goals of EA, without necessarily making EA a core part of their identity or seeing themselves as being part of a movement. With so many people who interact with EA working primarily in cause areas rather than 'central movement' EA per se, my sense is this 'thin' EA or EA-adjacent set of people is reasonably large.

It might make perfect sense for 'thick EA' leaders to think of EA vs SJ in terms of movements, alliances, and competition for talent. At the same time, this might be a less intuitive and more uncomfortable way for 'thin EA' folk to see the interaction being described and playing out. While I don't have answers, I think it's worth being mindful that there may be some tension there.


Thanks all! This is a good, useful discussion. I wanted to clarify slightly what I mean when I say EA is the "better" ideology. Mainly, I mean that EA is better at guiding my actions in a way that augments my ethical impact much more than SJ does. The two are primarily rivalrous only insofar as I can make a limited number of ethical deliberations per day, and EA considerations more strongly optimize for impact than SJ considerations.

"There are definitely many who see these more in the movement/tribe sense" - For modern social justice this tends to focus on who is a good or bad person, while for EA this tends to focus more on who to trust. (There's a less dominant strand of thought within social justice that says we shouldn't blame individuals for systematic issues, but it's relatively rare). EA makes some efforts towards being anti-tribal, while social justice is less worried about the downsides of being tribal.

Regarding point 3): I don't think EA necessarily has a more compelling ideology. One of the big differences I see between the two movements is that SJ is an extremely inclusive movement (basically by definition) when it comes to participation within the movement: who can be a part of it, make a difference, and contribute, even if the application of this principle may be flawed.

This seems pretty different from EA, and depending on your entry point to EA, it could put people off. I'm not quite sure how to reconcile that.

Do you see this as an issue (and on what scale)? Do you have any sense of how to reconcile this issue?

Yeah, EA is likely less compelling when this is defined as feeling motivating/interesting to the average person at the moment, although it is hard to judge since EA hasn't been around for anywhere near as long. Nonetheless, many of the issues EAs care about seem way too weird for the average person. Then again, if you look at feminism, a lot of the ideas were only ever present in an overly academic form; part of the reason why they are so influential now is that they have filtered down into the general population in a simpler form (such as "girl power" or "feeling good, rationality bad"). Plus, social justice is more likely to benefit the people supporting it in the here and now than EA, which focuses more on other countries, other species, and other times, which is always a tough sell.

SJ is an extremely inclusive movement (basically by definition)

I'm generally wary of arguments by definition. Indeed, SJ is very inclusive toward members of racial minorities or those who are LGBTI, but is very much not when it comes to ideological diversity. And some strands can be very unwelcoming to members of majority groups. So it's much more complex than that.

I phrased that poorly - I added the bit about the principle not being applied perfectly to cover what you mentioned, but I think the more accurate statement would be that one of SJ's big appeals is that it claims to be inclusive.

I do basically think that EA could learn a lot of things from SJ in terms of being an inclusive movement. I think it's possible that there's a lot of value to be had (in EA terms) in continuing to increase the inclusivity of EA.

I agree that part of the issue is who feels empowered to make a difference. Part of this is because SJ, in my view, often focuses on things that are not very marginally impactful, but to which many people can contribute. However, I am very excited about recent efforts within the EA community to support a variety of career paths and routes to impact beyond the main ones identified by main EA orgs.

Thanks for listing this as one of your five topics of interest and thanks to everyone for insightful comments.

I do basically think that EA could learn a lot of things from SJ in terms of being an inclusive movement.

I wholeheartedly agree.

Beyond movement building & inclusivity, I'd be curious to hear about other domains where you think that EA could learn from the social justice movement/philosophy? E.g., in terms of the methodologies and academic disciplines that the respective movements tend to rely on, epistemic norms, ethical frameworks, etc.

Beyond movement building & inclusivity, I think it makes sense for EA as a movement to keep their current approach because it's been working pretty well IMO.

I think the thing EAs as people (with a worldview that includes things beyond EA) might want to consider—and which SJ could inform—is the demands that historical injustices, e.g., colonialism and racism, make on us. I think those demands are plausibly quite large, and failure to satisfy them could constitute an ongoing moral catastrophe. Since they're not welfarist, they're outside the scope of EA as it currently exists. But for moral uncertainty reasons I think many people should think about them.

What do you actually do at work?

It’s useful to think of the OpenAI Policy Team’s work as falling into a few buckets:

  1. Internal advocacy to help OpenAI meet its policy-relevant goals.
  2. External-facing research on issues in AI policy (e.g., 1 2 3 4)
  3. Public and private advocacy on issues in AI policy with a variety of public and industry actors

Most of my work so far has been focused on 1 and 2, and less on 3. That work largely looks like what you would expect: a lot of time spent reviewing academic literature or primary sources; drafting papers; soliciting comments and feedback; and discussing with colleagues. These projects also involve meeting with other internal stakeholders, which is an especially exciting part of my job because the resulting outputs will be very technically informed.

I plan to do some more of 3 this year, which will generally be helping coordinate discussion on some issues in AI economics. Stay tuned for more info on that!

Does law school make sense for EAs?

I should also mention that I am generally excited to chat with EAs considering law school. PM me if interested, or join the Effective Altruism & Law Facebook Group.

The 80,000 Hours career review on UK commercial law finds that "while almost 10% of the Members of Parliament are lawyers, only around 0.6% have any background in high-end commercial law." I have been unable to find any similar analysis for the US. Do you know of any?

I dug up a few other places 80,000 Hours mentions law careers, but I couldn't find any article where they discuss US commercial law for earning-to-give. The other mentions I found include:

In their profile on US AI Policy, one of their recommended graduate programs is a "prestigious law JD from Yale or Harvard, or possibly another top 6 law school."

In this article for people with existing experience in a particular field, they write “If you have experience as a lawyer in the U.S. that’s great because it’s among the best ways to get positions in government & policy, which is one of our top priority areas.”

It's also mentioned in this article that Congress has a lot of HLS graduates.

TL;DR, I think EAs should probably use the following heuristics if they are interested in some career for which law school is a plausible path:

  1. If you can get into a T3 law school (Harvard, Yale, Stanford), have a fairly strong prior that it's worth going.
  2. If you can get into a T6 law school (Columbia, Chicago, NYU), probably take it.
  3. If you can get into a T14 law school, seriously consider it. But employment statistics at the bottom of the T14 are very different from those at the top.
  4. Be wary of things outside the T14.

In general, definitely carefully research employment prospects for the school you're considering.

Other notes:

  1. The 80K UK commercial law ETG article significantly underestimates how much US-trained lawyers can make. Starting salaries at commercial law firms in the US are $190K. It is pretty easy to get these jobs from T6 schools. Of course, American students will probably have a higher debt burden.
  2. Career dissatisfaction in biglaw firms is high. The hours can be very brutal. Attrition is high. Nevertheless, a solid majority of HLS lawyers (including BigLaw lawyers) are satisfied with their career and would recommend the same career to newer people. Of course, HLS lawyers are not representative of the profession as a whole.
  3. ROI on the mean law school is actually quite good, though it should be adjusted for risk since the downside (huge debt + underemployment) is severe.
  4. If you're going to ETG, try to get a bunch of admissions offers and heavily negotiate downwards to get a good scholarship offer within the T6 or ~T10. Chicago, NYU, and Columbia all offer merit scholarships; if you want to ETG, these seem like good bets.
  5. If you're going to ETG, probably work in Texas, where you get NY salaries at a much lower cost of living and tax burden.
  6. If you're going into law for policy or other non-ETG reasons, go to a law school with a really good debt forgiveness program (unless you get a good scholarship elsewhere). HLS's LIPP is quite good; Yale has an even better similar program.
  7. You should also account for the possibility of economic downturn; many law firms stopped hiring during the '08 crash.
  8. If you're an undergrad, you have high leverage over your potential law school choices. 80% of your law school admissions decision will be based on your GPA and LSAT score. Carefully research the quartiles for your target schools and aim for at least the median, ideally >75th percentile. The LSAT is very learnable with focused study. Take a formal logic course and LSAT courses if you can afford them. This will help tremendously in law school scholarship negotiations.

Thanks for this helpful comment!

Curious what you think of Thiel's experience with law school and BigLaw (a):


This is not what I set out to do when I began my career. When I was sitting where you are, back in 1989, I would’ve told you that I wanted to be a lawyer. I didn’t really know what lawyers do all day, but I knew they first had to go to law school, and school was familiar to me.
I had been competitively tracked from middle school to high school to college, and by going straight to law school I knew I would be competing at the same kinds of tests I’d been taking ever since I was a kid, but I could tell everyone that I was now doing it for the sake of becoming a professional adult.
I did well enough in law school to be hired by a big New York law firm, but it turned out to be a very strange place. From the outside, everybody wanted to get in; and from the inside, everybody wanted to get out.
When I left the firm, after seven months and three days, my coworkers were surprised. One of them told me that he hadn’t known it was possible to escape from Alcatraz. Now that might sound odd, because all you had to do to escape was walk through the front door and not come back. But people really did find it very hard to leave, because so much of their identity was wrapped up in having won the competitions to get there in the first place.
Just as I was leaving the law firm, I got an interview for a Supreme Court clerkship. This is sort of the top prize you can get as a lawyer. It was the absolute last stage of the competition. But I lost. At the time I was totally devastated. It seemed just like the end of the world.
About a decade later, I ran into an old friend. Someone who had helped me prepare for the Supreme Court interview, whom I hadn’t seen in years. His first words to me were not, you know, “Hi Peter” or “How are you doing?” But rather, “So, aren’t you glad you didn’t get that clerkship?” Because if I hadn’t lost that last competition, we both knew that I never would have left the track laid down since middle school, I wouldn’t have moved to California and co-founded a startup, I wouldn’t have done anything new.
Looking back at my ambition to become a lawyer, it looks less like a plan for the future and more like an alibi for the present. It was a way to explain to anyone who would ask – to my parents, to my peers, and most of all to myself – that there was no need to worry. I was perfectly on track. But it turned out in retrospect that my biggest problem was taking the track without thinking really hard about where it was going.

Yeah, I think it's definitely true that some lawyers feel trapped in their current career sometimes. Law is a pretty conservative profession and it's pretty hard to find advice for non-traditional legal jobs. I myself felt this: it was a pretty big career risk to do an internship at FHI the summer after 2L.

(For context, summer after 2L is when most people work at the firm that eventually hires them right after law school. So, I would have had a much harder time finding a BigLaw job if the whole AGI policy thing hadn't worked out. The fact that I worked in public interest both summers would have been a serious signal to firms that I was more likely than average to leave BigLaw ASAP.)

I think EAs can hedge against this if they invest in maintaining ties to the EA community, avoiding sunk-cost and status quo biases, and careful career planning.

This is a pretty obvious point, but I find it really hard to think about comparative advantage. Naively, if all someone knows about themselves is that they're admitted to a T14 but not a T3 school, it tells them that they have fewer opportunities in law. But it also tells them that they have fewer opportunities outside of law as well.

So it seems really hard to think about this, even in statistical aggregate.

I think it might be somewhat more complicated. As far as I know, the LSAT+GPA measure is actually a pretty strong predictor of law school performance as far as standardized tests go. But there's some controversy in the literature about how much law school grades matter for success. Good law schools also have much higher bar passage rates, though there could be confounding factors there too.

In general, the legal market seems somewhat weird to me. E.g., it's pretty easy for T3 students to get a BigLaw job, but often hard for students near the bottom of the T14. I do not understand why firms don't just hire such students and thereby lower wages, which are very high. My best guess is, Hansonianly, that there's a lot of signaling going on, where firms try to signal their quality by hiring only T6 law students. Also, I imagine the T6 credential is important for recruiting clients, which is very important to BigLaw success.

But query how much of this matters if you want to do a non-ETG law path.

[Didn't mean to comment this]

What impactful career paths do you think law school might prepare you particularly well for, besides ETG and AI Policy? If an EA goes to law school and discovers they don't want to do ETG or AI Policy, where should they look next?

Do these options mostly look like "find something idiosyncratic and individual that's impactful", or do you see any major EA pipelines that could use tons of lawyers?

Yeah, I think I'm more bullish on JDs than the average EA because they're very useful for a ton of careers. Like, a JD is an asset for pretty much any career in government, where you can work on a lot of EA problems, like:

(Of course, lawyers can usefully work on these outside of government as well.)

I think EA-relevant skills in economics might be particularly valuable in some fields, like governmental cost-benefit analysis.

Of course, people in government can have a lot of impact on problems that most EAs don't work on due to the amount of influence they have.

I also think that there might be opportunities for lawyers to help grow/structure/improve the EA movement, like:

  • Estate planning (e.g., help every EA who wants one get a will or trust that will give a lot of their estate to EA charities)
  • Setting up nonprofits and other organizations
  • Tax help
  • Immigration help for EA employers
  • Creating weird entities or financial instruments that help EAs achieve their goals (e.g., 1, 2, 3)

If I were not doing AI policy, I might write up a grant proposal to spend my time doing these things in CA.

What do you think are the most pressing "mainstream" ethical issues in AI? (fairness, interpretability, privacy, attention design, etc.)

How do you think the public interest tech movement (which encompasses tech-related public policy, doing software development or data science for social good, etc.) could be more effective?

I think there are a lot of issues that are relevant to both short- and long-term concerns, like:

Relatedly, this is why people who can't immediately work for an EA-aligned org on "long-term" AI issues can build both useful career capital and do useful work by working in more general AI policy.

For the second question, a pretty boring EA answer: I would like to see more people in near-term AI policy engage in explicit and quantifiable cause prioritization for their work. I think that, as EAs generally recognize, the impact of these issues probably varies quite a lot. That should guide which questions people work on.

That's a really good suggestion! Do you know of any attempts at cause prioritization in near-term AI policy? I think most AI policy wonks focus on near-term issues, so publishing a stack ranking could be really influential.

I don't! It would be interesting to see! From an EA perspective, though, flow-through effects on long-term stuff might dominate the considerations.

I'm not sure whom this ranking would be relevant for, though. If you're interested in basic research on AI ethics, you'd want to know whether doing research on fairness or privacy is more impactful on the margin. But engineers developing AI applications have to address all ethical issues simultaneously; for example, this paper on AI in global development discusses all of them. As an engineer deciding what project to work on, I'd have to know for which causes deploying AI would make the greatest difference.

I'd imagine there's an audience for it!

I think so too. I created a question here to solicit some preliminary thoughts, but it would be cool if someone could do more thorough research.

Are the best AI Policy opportunities concentrated in San Francisco? Or are there comparable opportunities in e.g. Washington DC, New York, Boston, Chicago, LA?

How much would your career suffer if you weren't willing to live in the Bay Area?

Lots of good stuff is in SF (OpenAI, PAI, Open Phil). However, there are also very good options for EAs in DC (CSET, US government stuff) and UK (FHI, DeepMind, CSER, CFI).

You can also build good career experience in general AI Policy work (i.e., not AGI- or LTF-focused) in a pretty large number of places, like Boston (Berkman-Klein, MIT, FLI) or NYC (AI Now). I don't know of AI-specific stuff in Chicago or LA, but of course they both have good universities where you could probably do AI policy research.

See also my replies to this comment. :-)

Which actors do you think one should try to influence to make sure that a potential transition to a world with AGI goes well (e.g. so that it leads to widely shared benefits)? For instance, do you think one should primarily focus on influencing private companies or governments? I'd be interested in learning more about the arguments for whatever conclusions you have. Thanks!

The boring answer is that there's a variety of relationships that need to be managed well in order for AGI deployment to go optimally. Comparative advantage and opportunity are probably good indicators of where the most fruitful work for any given individual is. That said, I think working with industry can be pretty highly leveraged since it's more nimble and easier to persuade than government IMO.

How do you think about the relationship between EA and electoral politics in e.g. the US and UK? Is engaging with it a good use of time (and what kinds of engagement), what research needs to be done, etc?

I actually don't think I have very good insights on this topic, despite spending a lot of my time on politics Twitter (against my best judgment). I don't have any particular experience in electoral politics and never really considered it as a career myself.

I guess one "take" would be that there's a lot of ways to improve the world via government that don't involve seeking elected office or getting heavily involved in politics, and so people should have a clear idea of why elected office is better than that.

All that said, my position is largely aligned with 80,000 Hours': from an expected value perspective it looks promising, but is obviously a low-probability route to impact.

I'd be interested to see more research into how constrained altruistic decision-makers actually are. There are some theoretical reasons to suspect that decision-makers are actually quite constrained, which, if true, would maybe suggest we're over-estimating how important it is to get altruistic decision-makers (or that we should change our identification of which offices are most worth seeking).

For the average law school grad, what specific knowledge is most important to develop for working in AI Policy?

How to implement ML? A conceptual understanding of the history of ML? Math, like linear algebra? Coding or computer science more generally? Considerations around AI forecasting & AI risk? Current work on AI policy or technical safety? Histories of revolutionary technologies?

ML knowledge is good and important; I generally wish I had more of it and use many of my Learning Days to improve it. That link also shows some of the other, non-law subjects I've been studying.

In law school, I studied a lot of different subjects that have been useful, like:

  • Administrative law
  • National Security law
  • Constitutional law
  • Corporate law
  • Compliance
  • Contract law
  • Property law
  • Patent law
  • International law
  • Negotiations
  • Antitrust law

I am pretty bullish on most of the specific stuff you mentioned. I think macrohistory, history of technology, general public policy, forecasting, and economics are pretty useful. Unfortunately, it's such a weird and idiosyncratic field that there's not really a one-size-fits-all curriculum for getting into it, though this also means there are many productive ways to spend one's time preparing for it.

How much ML/CS knowledge is too much? For someone working in AI Policy, do you see diminishing returns to become a real expert in ML/CS, such that you could work directly as a technical person? Or is that level of expertise very useful for policy work?

Hard to imagine it ever being too much TBH. I and most of my colleagues continue to invest in AI upskilling. However, lots of other skills are worth having too. Basically, I view it as a process of continual improvement: I will probably never have "enough" ML skill because the field moves faster than I can keep up with it, and there are approximately linear returns on it (and a bunch of other skills that I've mentioned in these comments).

How useful is general CS knowledge vs ML knowledge specifically?

I would lean pretty heavily towards ML. Taking an intro CS class is good background, but beyond that, specialize. Some adjacent areas, like cybersecurity, are good too.

(You could help AI development without specializing in AI, but this is specifically for AI Policy careers.)

If you go to a top law school, how difficult is it to work in AI Policy afterwards? Can virtually anyone with a top law degree get into the relevant roles? Or do you also need to distinguish yourself in other ways - technical understanding of AI, past policy research, etc?

Limiting the discussion to the most impactful jobs from an EA perspective, I think it can be pretty hard for reasons I lay out here. I got lucky in many many ways, including that I was accepted to 80K coaching, turned out to be good at this line of work (which I easily could not have), and was in law school during the time when FHI was just spinning up its GovAI internship program.

My guess is that general credentials are probably insufficient without accompanying work that shows your ability to address the unique issues of AGI policy well. So opportunities to try your hand at that are pretty valuable if you can find them.

That said, opportunities to show general AI policy capabilities—even on "short-term" issues—are good signals and can lead to a good career in this area!


How much is a track record of conducting relevant research valued for getting a policy job at OpenAI? How does this compare to having (additional) prestigious qualifications and prestigious job experience?

I also wanted to pass along this explanation from our team manager, Jack Clark:

OpenAI’s policy team looks for candidates that display an ‘idiosyncratic specialism’ along with verifiable interest and intuitions regarding technical aspects of AI technology; members of OpenAI’s team currently have specialisms ranging from long-term TAI-oriented ethics, to geopolitics of compute, to issues of representation in generative models, to ‘red teaming’ technical systems from a security perspective, and so on. OpenAI hires people with a mixture of qualifications, and is equally happy hiring someone with no degrees and verifiable industry experience, as well as someone with a PHD. At OpenAI, technical familiarity is a prerequisite for successful policy work, as our policy team does a lot of work that involves embedding alongside technical teams on projects (see: our work throughout 2019 on GPT2).

I’m not involved in hiring at OpenAI, so I’m going to answer more in the spirit of “advice I would give for people interested in pursuing a career in EA AI policy generally.”

In short, I think actually trying your hand at the research is probably more valuable on the margin, especially if it yields high-quality research. (And if you discover it’s not a good fit, that’s valuable information as well.) This is basically what happened to me during my FHI internship: I found out that I was a good fit for this work, so I continued on in this path. There are a lot of very credentialed EAs, but (for better or worse), many EA AI policy careers take a combination of hard-to-describe and hard-to-measure skills that are best measured by actually trying to do it. Furthermore, there is unfortunately a managerial bottleneck in this space: there are far more people interested in entering it than people that can supervise potential entrants. I think it can be a frustrating space to enter; I got very lucky in many ways during my path here.

So, if you can’t actually try the research in a supervised setting, cultivating general skills or doing adjacent research (e.g., general AI policy) is a good step too. There are always skills I wish I had (and which I am fortunate to get to cultivate at OpenAI during Learning Day). Some of the stuff I studied during Learning Day, which might guide your own skill cultivation, includes:

What are your high-level goals for improving AI law and policy? And how do you think your work at OpenAI contributes to those goals?

My approach is generally to identify relevant bodies of law that will affect the relationships between AI developers and other relevant entities/actors, like:

  1. other AI developers
  2. governments
  3. AI itself
  4. consumers

Much of this is governed by well-developed areas of law, but the relevant cases are very unusual (and often hypothetical). At OpenAI, I look for edge cases in these areas. Specifically, I collaborate with technical experts who are working on the cutting edge of AI R&D to identify these issues more clearly. OpenAI empowers me and the Policy team so that we can guide the org to proactively address these issues.

You mentioned in the answer to another question that you made the transition from being heavily involved with social justice in undergrad to being more involved with EA in law school. This makes me kind of curious -- what's your EA "origin story"? (How did you find out about effective altruism, how did you first become involved, etc.)

My EA origin story is pretty boring! I was a research assistant for a Philosophy professor who included a unit on EA in her Environmental Ethics course. That was my first exposure to the ideas of EA (although obviously I had exposure to Peter Singer previously). As a result, I added Doing Good Better to my reading list, and I read it in December 2016 (halfway through my first year of law school). I was pretty immediately convinced of its core ideas.

I then joined the Harvard Law School EA group, which was a really cool group at the time. In fact, it's somewhat weird that a school of HLS's size (ca. 1600 students) was able to sustain such a group, so I was very fortunate in that way.

That wasn't so boring.


Is there a meaningful distinction between policy practitioners and researchers at OpenAI?

I'm not actually sure what difference you're referring to. Could you please elaborate? :-)


Sorry, this was a little unclear! I was thinking of the distinction made here: https://80000hours.org/articles/ai-policy-guide/#ai-policy-researcher

Practitioner: implementing specific policies
Researcher: working out which policies are desirable

Thanks! See if this post helps answer that! If not, feel free to follow-up!
