
Brief note about this post: I am a graduate student working near the area of quantum computing hardware. Recently, I have been trying to figure out what to do with my career, and came across this 80,000 Hours post that mentioned AI hardware. I figured I might be able to work in this area, so I’ve spent a little time (~100 hours) looking into this topic. This post is a summary of my initial takeaways from exploring this, as well as an open invitation to comment/critique/collaborate on my personal career plans.

Many thanks to Changyan Wang for feedback on parts of this post, and to Malte Hendrickx and Eric Herboso for helpful edits. All remaining mistakes are my own.

0. Introduction

I first came across the idea of working on AI hardware in the 80,000 Hours (80k) post “Some promising career ideas beyond 80,000 Hours' priority paths”, where they offer a few reasons to go into AI hardware:

“Some ways hardware experts may be able to help positively shape the development of AI include:

  • More accurately forecasting progress in the capabilities of AI systems, for which hardware is a key and relatively quantifiable input.
  • Advising policymakers on hardware issues, such as export, import, and manufacturing policies for specialized chips. (Read a relevant issue brief from CSET.)
  • Helping AI projects in making credible commitments by allowing them to verifiably demonstrate the computational resources they’re using.
  • Helping advise and fulfill the hardware needs for safety-oriented AI labs.”

In sections 1–4, I will try to explain my understanding of each of these ideas in a little more depth and speculate on what sort of career path may lead in that direction (one section corresponding to each of the four points above). In section 5, I will then try to summarize the types of careers that could work on these problems. In section 6, I will discuss some small tests one could perform to try out these different careers. I will then finish up in section 7 with my current thinking about my own career plans in light of this.

Note also the below advice from the 80k post (emphasis my own):

“If you do take this path, we encourage you to think carefully through the implications of your plans, ideally in collaboration with strategy and policy experts also focused on creating safe and beneficial AI.”

I have not done this careful thinking, and would love to collaborate with strategy and policy experts.

1. Forecasting

Discussion of topic

A classic example of a forecast in the space of computer hardware is Moore’s Law, which predicted that the number of transistors on a computer chip would double every two years. One reason EAs might be interested in hardware trends like this is for forecasting AI timelines. I think the most comprehensive forecasting in this space is being done by Ajeya Cotra at the Open Philanthropy Project. Her report is the culmination of a number of detailed forecasts, including how the price of computing power will change over time. A forecast of the cost of computing power, in turn, requires a forecast of the cost and capabilities of AI hardware. As described in section 4, there is increasing investment in innovative technologies for AI hardware, so the most detailed forecasts in AI hardware might require more than an extrapolation of Moore’s Law. (Also, for discussion of the forecasting being done at OpenAI, see, for instance, Danny Hernandez’s podcast with 80k and the links in the show notes.)
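As a toy illustration of how sensitive this kind of extrapolation is to the assumed growth rate, here is a minimal sketch (my own, with made-up placeholder numbers; the base value and doubling times are not figures from Ajeya’s report or any other source):

```python
# Toy extrapolation of hardware price-performance (FLOP per dollar)
# under an assumed constant doubling time, Moore's-Law style.
# The base value and doubling times below are hypothetical placeholders.

def flops_per_dollar(year, base_year=2020, base_value=1e10, doubling_time_years=2.5):
    """Project FLOP/$ in a given year assuming a constant exponential trend."""
    return base_value * 2 ** ((year - base_year) / doubling_time_years)

for doubling_time in (2.0, 2.5, 4.0):  # faster vs. slower assumed hardware progress
    projection = flops_per_dollar(2040, doubling_time_years=doubling_time)
    print(f"Doubling every {doubling_time} yr -> {projection:.2e} FLOP/$ in 2040")
```

Even in this toy version, the 2040 projection differs by more than a factor of 30 between a two-year and a four-year doubling time, which is part of why judgment calls from hardware experts can matter a lot for these forecasts.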

At first glance, it seemed to me that the existence of Ajeya’s report demonstrates that the EA community already has enough people with sufficient knowledge and access to expert opinion that, on the margin, adding one expert in hardware to the EA community wouldn’t improve these forecasts much. I think an argument against this initial reaction is that subject matter experts probably have a better understanding of blind spots and an intuition about unknown unknowns. Indeed, in his 80k podcast, Danny Hernandez says, “the kind of person who I’d be most interested in trying to make good forecasts about Moore’s law and other trends, is somebody who has been building chips for a while or has worked in building chips for a while. I think there aren’t that many of those people.”

Career paths

Some examples of career paths in computer hardware that would work toward forecasting:

  • Working on broad forecasts (like Ajeya’s) as a superforecaster. It seems that at least Open Philanthropy Project and OpenAI employ people working on this, and I think some of the policy-focused organizations (discussed in the next section) are interested in this type of work. I think there are several paths that lead here, though some prerequisites may be (1) having enough experience in the fields of hardware and AI to avoid blind spots and know who the subject matter experts are, and (2) experience as a forecaster.
  • Being a subject matter expert on narrow topics in AI hardware trends. Ideally, this would be someone on the very cutting edge of AI hardware, which I think would include professors and more senior staff at companies like NVIDIA.

2. Policy

Discussion of topic

This topic was also touched on in the podcast with Danny Hernandez, where he spoke about how experts in hardware could influence the safe development of AI, stating:

“Trying to work with governments or other sorts of bodies that might be able to regulate AI hardware or perhaps create the kinds of incentives that would make an advance at the right times and the right places… it’d be reasonable to try starting now with that kind of thing in mind. But that’s pretty speculative. I know less about that than the forecasting type thing.”

An example of the interplay between AI hardware and policy is the brief from the Center for Security and Emerging Technology (CSET) referenced in the 80k post from section 0. This brief builds the case for why AI hardware has unique instrumental value in the AI policy space and how to use it. Unlike software, which is decentralized and hard to regulate, the equipment needed to make the most advanced computer chips is much more centralized. Therefore, carefully crafted policy can regulate the distribution of AI hardware, providing a leverage point for regulating the development of AI more generally. The brief draws on a relatively deep understanding of the state of the art in AI hardware, identifying exactly which companies would need to be involved and making recommendations on what class of equipment to target.

A series of Future Perfect newsletters (including Nov 13, Nov 20, and especially Dec 04, 2020 [Edit Jun 2023: Removing links; feel free to DM me for copies of the newsletters]) outlines a case that there is some low-hanging fruit in enacting effective policy in Washington, DC. So, I am cautiously optimistic that people interested in hardware policy can do a lot of good in this space (for further discussion of this, see section 5).

Career paths

  • 80k has a lot of advice on AI policy in their AI Policy priority path writeup, where they mention career paths including working at top AI labs, joining a think tank, working for the US government, working in academia, or working in party politics.
    • My understanding is that one way to slice the space of careers in policy is between government roles and non-government roles. Government roles are closer to where the decisions are being made, but are also better suited to certain backgrounds than others.
  • Danny gives one picture of a career path in this area. He discussed how at some places, like OpenAI, you can reach out to a current employee, build a relationship with them as your informal mentor, and eventually convert that relationship into a job offer. Since this may not be feasible in some policy roles, one path he described was: “So you just apply to the informal places first and you walk up the chain. Sometimes there’s a way to get some minimum credential. I think like a public policy masters or something is kind of one way where people get a credential quite quickly that makes them seem reasonable. So it’s like you could be somebody that has one of those and has a background in hardware and then all of a sudden you’re like one of the most credentialed people there is. It could happen pretty quickly.”

3. Hardware Security and Increased Coordination

Discussion of topic

These ideas were also discussed in the 80k podcast with Danny Hernandez. I think the reason these two topics are lumped together is that, if you want to improve coordination, one likely necessary condition is being able to trust each other, and security guarantees are one way to build trust. There is a paper Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims that fleshes out 10 mechanisms to implement toward this end (very brief summary from OpenAI here). The three mechanisms they report under the heading of hardware (and their proposed actions) are:

  • Secure hardware for machine learning
    • Proposed action: Industry and academia should work together to develop hardware security features for AI accelerators or otherwise establish best practices for the use of secure hardware (including secure enclaves on commodity hardware) in machine learning contexts.
  • High-precision compute measurement
    • Proposed action: One or more AI labs should estimate the computing power involved in a single project in great detail and report on lessons learned regarding the potential for wider adoption of such methods.
  • Computing power support for academia
    • Proposed action: Government funding bodies should substantially increase funding for computing power resources for researchers in academia, in order to improve the ability of those researchers to verify claims made by industry.

I don’t know much about security myself, but the topic of at least software security is covered in the 80k podcast with Bruce Schneier and this forum post.

The other type of increased coordination brought up in the podcast with Danny is trying to get big companies to sign up for the Windfall Clause. The motivation behind the Windfall Clause is to “address several potential problems with AI-driven economic growth. The distribution of profits could compensate those rendered faultlessly unemployed due to advances in technology, mitigate potential increases in inequality, and smooth the economic transition for the most vulnerable.” The proposed solution brought up in the FHI document is “an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits.” One example of this type of commitment is the OpenAI LP. It also seems there could be a winner-takes-all competition among the companies making the chips that enable transformative AI, so AI hardware companies are likely an effective place to target this type of policy.

Career paths

  • For career paths in hardware security, I think there is some mainstream research being done in this area, but I’m not sure if it is the type of research that would address the mechanisms in the above paper. I would love to learn more about the state of this field from someone with more experience.
  • Regarding careers that would give you influence over Windfall Clause-type coordination, in the 80k podcast Danny suggests that the way to have this kind of influence is to be a founder or early employee at a startup, or at least to have a close relationship with an executive.

4. Advising and Fulfilling Hardware Needs

Discussion of topic

I’m not sure if 80k has expanded on this point anywhere else, but I think one reasonable interpretation is that this would be something like working in industry and being a contact point for the EA community and organizations like OpenAI: if anyone did want to contact an expert, people in the EA community would generally know your name and could direct others toward you.

I think someone in this role could also be proactive about keeping EA organizations up to date on the state of the art. Further, as we gain a better understanding of how EAs can most effectively influence the development of AI, it seems reasonable that there will be increasing value in having EAs working directly on AI hardware.

There is a large list of established companies and startups in the AI hardware space on James Wang’s Twitter account. Note that some of the companies on this list are working on technologies quite unlike those used in mainstream computer architecture. Some of the new types of hardware that I’ve heard about are:

  • Photonics: It’s hard to get an exact idea of the state of the technology in industry in this field since there are a lot of secretive start-ups, but there’s the sales pitch in this video. I have compiled a list of the companies I know of working on photonic chips for AI here [Edit June 2023: removing the list because it's quite out of date].
  • Quantum computing: There is a significant academic research effort in this area as well as a growing list of companies [Edit June 2023: Changing link to Wikipedia page] working toward commercial devices. Note that there are some reasons one may be skeptical of the contribution of quantum computing to AI Alignment in general; see this post from Jaime Sevilla and the links within. (I don’t know enough to have a strong opinion personally.)
  • Other technologies beyond the current industry standards that are cataloged in the IEEE IRDS (though I don’t know which of the many technologies listed are as relevant to AI hardware as the two listed above)

Note, as discussed in the podcast with Danny, there could also be risks associated with working directly on AI hardware; for instance, it could simply accelerate AI timelines without making anything safer.

Career paths

  • I think this would involve working at one of the companies like those listed above while maintaining involvement in the EA community. I think the clearest path to working on AI hardware would be gaining experience in computer architecture, but many of the different technologies could be approached from different directions.

5. Some Example Career Paths

Given the problems in AI hardware listed above, here are some career paths I think one could take to work on them. When possible, I’ll try to highlight a person who has actually been in the role; I would love to hear more examples of possible role models.

  • University professor doing research at the cutting edge of AI hardware. I think some possible research topics could be: anything in section 3, computer architecture focusing on AI hardware, or research in any of the alternative technologies listed in section 4. I would love to learn about what other areas are important and who the leaders are in all these areas.
  • Academic research on AI Policy and Strategy. 80k has a lot of resources on this career path here; also see the 80k podcast with Allan Dafoe.
  • Government research in places like IARPA. Jason Matheny was the head of IARPA and explains some of the roles in this space in this talk (note that some of these roles can be a <5-year “tour of duty” within a completely different career and still have a really high impact).
  • Think tanks like CSET: My impression is that this is mostly policy focused and has less of a technical focus compared to IARPA. See, e.g., the 80k podcast with Helen Toner. My impression is that many roles in think tanks are not designed to be career-long roles, but jumping off points to careers in other roles in government.
  • Office of Science and Technology Policy (OSTP): I think of two different approaches here: 
    • As described in the 80k podcast with Tom Kalil, one way to get into a highly influential organization like this is to work as an adviser to politicians.
    • Another example of this type of career is that of the physicist Jake Taylor. My understanding is that he took a sort of “tour of duty” role at the OSTP, where he was a big factor in the White House’s increased interest in quantum computing. This resulted in the billion-dollar National Quantum Initiative (NQI). While the direct analogue of this for AI was already passed in the FY21 NDAA, I think this still highlights the amount of impact one can have pushing an idea forward.
  • Industry: See section 4 for a list of possible companies to work at.
    • Startups may be especially interesting because of the coordination aspects discussed in section 3. Anyone interested in going the startup route may want to pick a grad/undergrad program in a department that has especially good resources for entrepreneurship.
  • Forecasting at an organization like Open Philanthropy Project or OpenAI to influence funding and policy (similar to Ajeya Cotra and Danny Hernandez, as described in section 1).

6. Some Small Tests in this Area

Some ways to make small tests on the technical side of things:

Some ways to make small tests on the non-technical side of things include:

7. My Career Plans

Given this information, here’s how I have been thinking about my career plans. Critical comments are especially welcome on this; I’m also open to DMs.

First, I plan to do more exploration before I graduate (planned for spring 2022) by:

  • Gaining experience in photonics from the edX course mentioned above in spring 2021
  • Expanding my experience with real AI hardware doing an internship in summer 2021
  • Gaining experience in tech policy by joining a reading group
  • Applying for the AAAS STPF during June–November 2021 (for the positions starting in September 2022) and, if accepted, doing that after graduation.

These experiences will probably update my thoughts on my career significantly. In particular, I think my experience with the STPF (including whether or not I am accepted) would update me significantly about my comparative advantage for policy. However, following the 80k career guide, here are my plans A/B/Z given my current experience:

  • Plan A: Try for a “Jake Taylor” type career, staying involved with technical research but taking “tour of duty” roles in government. I think one possible path would be to gain experience in industry after grad school in either photonics or quantum computing and then, after, say, five years, apply to be a program manager at DARPA or IARPA.
  • Plan B: If straddling tech and policy is untenable, stick to the government/policy side and try for a “Jason Matheny” type career (he is the former director of IARPA and the current founding director of CSET).
  • Plan Z: Apply to industry/national lab jobs in quantum computing, and re-evaluate how I will have my impact guided by section 1, 3, and/or 4.

I would also be interested if anyone has opinions about whether academic roles might be more impactful than roles in industry or government. As I see it, the main reason to go into academia is an argument from comparative advantage, but it seems to me that it may offer no more opportunities to do good than a role in industry or government.

Comments

Hi-- great post! I was pointed to this because I've been working on a variety of hardware-related projects at FHI and AI Impacts, including generating better hardware forecasts. (I wrote a lot here, but would also be excited to talk to you directly and have even more to say-- I contacted you through Facebook.)

 At first glance, it seemed to me that the existence of Ajeya’s report demonstrates that the EA community already has enough people with sufficient knowledge and access to expert opinion that, on the margin, adding one expert in hardware to the EA community wouldn’t improve these forecasts much.

I think this isn't true.

For one, I think while the forecasts in that report are the best publicly available thing we have, there's significant room to do better, e.g.

  • The forecasts rely on data for the sale price of hardware along with their reported FLOPS performance. But the sale price is only one component of the costs to run hardware and doesn't include power, data center costs, storage and networking etc. Arguably, we also care about the price performance for large hardware producers (e.g. Google) more than hardware consumers, and the sale price won't necessarily be reflective of that since it includes a significant mark-up over the cost of manufacture.
  • The forecasts don't consider existing forecasts from e.g. the IRDS that you mention, which are actually very pessimistic about the scaling of energy costs for CMOS chips over the next 15 years. (Of course, this doesn't preclude better scaling through switching to other technology).
  • If I recall correctly, the report partially justifies its estimate by guessing that even if chip design improvements bottom out, improvements in manufacturing cost and chip lifetime might still create a relatively steady rate of progress. I think this requires some assumptions about the cost model that may not be true, though I haven't done enough investigation yet to be sure.

(This isn't to disparage the report -- I think it's an awesome report and the current estimate is a great starting point, and Ajeya very explicitly disclaims that these are the forecasts most likely to be knowably mistaken.)

As a side note, I think EAs tend to misuse and misunderstand Moore's Law in general. As you say, Moore's Law says that the number of transistors on a chip doubles every two years. This has remained true historically, but is only dubiously correlated with 'price performance Moore's Law'-- a doubling of price performance every two years. As I note above, I think the data publicly collected on price performance is poor, partially because the 'price' and 'performance' of hardware is trickier to define than it looks. But e.g. this recent paper estimates that the price performance of at least universal processors has slowed considerably in recent years (the paper estimates 8% improvement in performance-per-dollar annually from 2008 - 2013, see section 4.3.2 'Current state of performance-per-dollar of universal processors'). Even if price performance Moore's Law ever held true, it's really not clear that it holds now.
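To put those two rates side by side (just my back-of-the-envelope arithmetic on the figures above, not numbers from the paper): doubling every two years corresponds to growth of about 2^(1/2) − 1 ≈ 41% per year, while a steady 8% per year corresponds to a doubling time of ln(2)/ln(1.08) ≈ 9 years.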

For two, I think it's not the case that we have access to enough people with sufficient knowledge and expert opinion. I've been really interested in talking to hardware experts, and I think I would selfishly benefit substantially from experts who had thought more about "the big picture" or more speculative hardware possibilities (most people I talk to have domain expertise in something very specific and near-term). I've also found it difficult to get a lot of people's time, and would selfishly benefit from having access to more hardware experts that were explicitly longtermist-aligned and excited to give me more of it. :) Basically, I'd be very in favor of having more people in industry available as advisors, as you suggest.

You also touch on this some, but I will say that I do think now is actually a particularly impactful time to influence policy at the company level (in addition to in government, which seems to be implementing a slew of new semiconductor legislation and seems increasingly interested in regulating hardware companies). A recent report estimates that ASICs are poised to take over 50% of the hardware market in the coming years, and most ASIC companies now are small start-ups-- I think there's a case that influencing the policy and ethics of these small companies is much more tractable than for their larger counterparts, and it would be worth someone thinking carefully about how to do that. Working as an early employee seems like a good potential way.

Lastly, I will say that I think there might be valuable work to be done at the intersection of hardware and economics-- for an example, see again this paper. I think things like understanding models of hardware costs, the overall hardware market, cloud computing, etc. are not well-encapsulated by the kind of understanding technical experts tend to have and are valuable for the longtermist community to have access to. (This is also some of what I've been working on locally.)

Toph

Thank you so much for the detailed comment! I would be very excited to chat offline, but I'll put a few questions here that are directly related to the comment:

For one, I think while the forecasts in that report are the best publicly available thing we have, there's significant room to do better

These are all super interesting points! One thing that strikes me as a similarity between many of them is that it's not straightforward what metric (# transistors per chip, price performance, etc.) is the most useful to forecast AI timelines. Do you think price performance for certain applications could be one of the better ones to use on its own? Or is it perhaps better practice to keep an index of some number of trends?

I think it's not the case that we have access to enough people with sufficient knowledge and expert opinion. I've been really interested in talking to hardware experts, and I think I would selfishly benefit substantially from experts who had thought more about "the big picture" or more speculative hardware possibilities

Are there any specific speculative hardware areas you think may be neglected? I mentioned photonics and quantum computing in the post because these are the only ones I've spent more than an hour thinking about. I vaguely plan to read up on the other technologies in the IRDS, but if there are some that might be worth looking more into than others (or some that didn't make their list at all!) that would help focus this plan significantly.

A recent report estimates that ASICs are poised to take over 50% of the hardware market in the coming years

Thank you for pointing this out! I talked with someone working in hardware who gave me the opposite impression, and I hadn't thought to actually look into this myself. (In retrospect, this may have been their sales pitch to differentiate themselves from their competitors. FWIW, I think their argument was that AI moves fast enough that an ASIC start-up will be irrelevant before their product hits the market.) I look forward to updating this impression!

I think things like understanding models of hardware costs, the overall hardware market, cloud computing, etc. are not well-encapsulated by the kind of understanding technical experts tend to have.

I would naively think this would be another point in favor of working at start-ups compared to more established companies. My impression is that start-ups have to spend more time thinking carefully about what their market is in order to attract funding (and their small size means technical people are more involved with this thinking). Does that seem reasonable?

Do you think price performance for certain applications could be one of the better ones to use on its own? Or is it perhaps better practice to keep an index of some number of trends?

 

I think price performance, measured in something like "operations / $", is by far the most important metric, with the caveats that by itself it doesn't differentiate between the one-time costs of design and purchase and the ongoing costs to run hardware, and that it doesn't account for limitations in memory, networking, and software for parallelization that constrain performance as the number of chips is scaled up.
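For a very rough sense of how those pieces could fit together, here is a sketch of the kind of cost model I have in mind (just an illustrative formulation, not a formula taken from the report or the paper):

amortized $/operation ≈ (purchase price / service lifetime + annual power and datacenter costs) / (peak operations per second × average utilization × seconds per year)

where the average utilization term is where the memory, networking, and parallelization constraints end up biting.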

Are there any specific speculative hardware areas you think may be neglected? I mentioned photonics and quantum computing in the post because these are the only ones I've spent more than an hour thinking about. I vaguely plan to read up on the other technologies in the IRDS, but if there are some that might be worth looking more into than others (or some that didn't make their list at all!) that would help focus this plan significantly.

There has been a lot of recent work in optical chips / photonics, so I've been following them closely-- I have my own notes on publicly available info here. I think quantum computing is likely further from viability but good to pay attention to. I also think it's worth understanding the likelihood and implications of 3D CMOS chips, just because at least IRDS predictions suggest that might be the way forward in the next decade (I think these are much less speculative than the two above). I haven't looked into this as much as I'd like, though-- I actually also have on my todo list to read through the IRDS list and identify the things that are most likely and have the highest upside. Maybe we can compare notes. :)

I would naively think this would be another point in favor of working at start-ups compared to more established companies. My impression is that start-ups have to spend more time thinking carefully about what their market is in order to attract funding (and their small size means technical people are more involved with this thinking). Does that seem reasonable?

I suspect in most roles in either a start-up or a large company you'll be quite focused on the tech and not very focused on the market or the cost model-- I don't think this strongly favors working for a start-up.

Just a little thing, but my impression is that CPUs and GPUs and FPGAs and analog chips and neuromorphic chips and photonic chips all overlap with each other quite a bit in the technologies involved (e.g. cleanroom photolithography), as compared to quantum computing which is way off in its own universe of design and build and test and simulation tools (well, several universes, depending on the approach). I could be wrong, and you would probably know better than me. (I'm a bit hazy on everything that goes into a "real" large-scale quantum computer, as opposed to 2-qubit lab demos.) But if that's right, it would argue against investing your time in quantum computing, other things equal. For my part, I would put like <10% chance that the quantum computing universe is the one that will create AGI hardware and >90% that the CPU/GPU/neuromorphic/photonic/analog/etc universe will. But who knows, I guess.

Thank you so much for taking the time to publicly write this up! I'd love it if more people who are doing some research for their career planning would post their results.

I'm a physicist at a US defense contractor, I've worked on various photonic chip projects and neuromorphic chip projects and quantum projects and projects involving custom ASICs among many other things, and I blog about safe & beneficial AGI as a hobby ... I'm happy to chat if you think that might help, you can DM me :-)

So cool to see such a thoughtful and clear writeup of your investigation! It's also nice for me, since I was involved in creating them, to see that 80k's post and podcast seemed to be helpful.

I think [advising on hardware] would involve working at one of the industries like those listed above and maintaining involvement in the EA community.

What I know about this topic is mostly exhausted by the resources you've seen, but for what it's worth I think this could also be directed at making sure that AI companies that are really heavily prioritising safety are able to meet their hardware needs. In other words, depending on the companies it could make sense to advise industry in addition to the EA community.

University professor doing research at the cutting edge of AI hardware. I think some possible research topics could be: anything in section 3, computer architecture focusing on AI hardware, or research in any of the alternative technologies listed in section 4. Industry: See section 4 for a list of possible companies to work at.

For these two career ideas I'd just add -- what is implicit here I think but maybe worth making explicit -- that it'd be important to be highly selective and pretty canny about what research topics/companies you work with in order to specifically help AI be safer and more beneficial.

These experiences will probably update my thoughts on my career significantly.

Seems right - and if you were to write an update at that point I'd be interested to read it!

Thanks for this article! So helpful.

I’m a junior electrical engineering major who just learned about advanced AI risks. I plan to do two senior years and a master’s degree so I have time to change direction if I need to.

Do you think it would help the world more to go into one of the career paths in this article, or another career path one could do with an electrical engineering major that is less related to AI?

What are some of the career paths I could take with an electrical engineering major that have less risk of contributing to AI happening faster than we are able to handle it? Of those career paths, are there any that stand out as particularly helpful to society? I’m not interested in careers that require a PhD.

Thanks, so glad to see another engineer here! I'll put down some rough ideas here, but if you're interested in chatting sometime I'd be very happy to go into more detail; please feel free to reach out via DM!

I'm pretty uncertain about which of the paths listed has the potential to be the most effective (or whether the most effective path is even on the list!). I would think that comparative advantage would play an important role here. I think Arden's point is a very good one, that it's important to be selective about what to work on. My (very inexperienced) intuition is that, to be very selective in many jobs related to computer hardware, one needs to really be a standout candidate, and to get there it probably has to be a topic that one finds really motivating.

If I had to make a bet on just one path, independent of comparative advantage, I'd lean toward hardware security for AI. Part of this is that it touches many other paths (it seems like the type of area that's forward-looking in AI hardware, and quite relevant to policy). Another part is the point you brought up: this seems less likely to speed up timelines without increasing safety. I'm not really sure how having a masters vs. PhD would change any of this.

Thinking about other career paths less related to AI: if you're interested more in the bio/materials side of EE, I've looked into atomically precise manufacturing a little bit (which was mentioned in this other post from 80,000 Hours on the forum). It seems like a very interesting topic, but my impression was that (1) it's not clear exactly what an EA should want to do in this space (though people are actively thinking about this!), and (2) if you want to go into this as an engineer you'd need to put a lot of work into building it up as a field.

Thanks so much! So helpful. I just connected with you on LinkedIn, but it wouldn’t let me include an introduction message for some reason, so here’s a reminder of how we met.
