I'm a high-school student considering a research career. Academic research is an important path in most EA cause areas, but there are some obstacles to becoming a researcher or professor. You may not be able to work at an EA-related organization, so you will need to work with non-EAs, and the topics EA wants to research are probably not the topics companies or governments want you to research (much as basic research in biology is more neglected than clinical research). EA research may seem less attractive to non-EAs. I have two worries about this:

1. It may be harder for EAs to get a professorship in academia, and harder to get promoted. If you don't have enough money, that is a lot of pressure.
2. It may be hard to win enough funding for EA topics such as animal welfare, space governance, or AI safety/sentience, because people may not want to invest in them.

How do we overcome these obstacles?

Actually, I think there are plenty of ways for academic researchers to find funding & support for investigating EA ideas!  Check out the 80,000 Hours job board filtered for entry-level and junior-level "research" careers.  We've got lots of options:

  • Working directly for an EA organization, like "Rethink Priorities" or the "Alignment Research Center".
  • Joining a think tank in Washington, DC, like the RAND Corporation, the "Center for Strategic and International Studies", or the "American Enterprise Institute"
  • Working directly for the U.S., E.U., or U.K. government, as part of the "Congressional Research Service", the Centers for Disease Control and Prevention, or lots of other roles.
  • Being an academic in a university, like at the University of Chicago's "Development Innovation Lab" or Georgetown's "O'Neill Institute for National and Global Health Law"
  • Helping with research projects at a corporate lab, like the AI safety departments at Google DeepMind or Microsoft.
  • Working for large general-purpose charitable foundations, like the Rockefeller Foundation, the Clinton Health Access Initiative, or the Bill & Melinda Gates Foundation.

It's true that (sadly!) most of these organizations aren't laser-focused on what EA considers to be the most important problems.  But these organizations often have a broad agenda to do lots of research within general areas (like "improving public health") that include highly-impactful research questions (like "What are the best ways to defend against a future pandemic?").  So it's often possible for academic researchers like you to choose your detailed focus for maximum impact, and still get plenty of support and funding from big traditional organizations like governments, universities, and think tanks.

Thank you very much for your response; it helps me a lot. I'm not very familiar with academia. You said we can address pandemic risks by working in public health. Which non-EA academic departments would give AI safety or space governance researchers a chance to work in them? And will there be problems when researching unpopular or long-term topics, such as not getting enough funding, papers not being cited much by non-EAs, or finding it hard to be promoted to professor?

Jackson Wagner · 1y
Just look at that link to the 80,000 Hours job board for more examples!  Some opportunities in AI:

  • Technical Safety Analyst, OpenAI, San Francisco Bay Area: As a technical safety analyst, you will be responsible for discovering and mitigating new types of misuse, and scaling our detection techniques and processes. Platform Abuse is an especially exciting area since we believe most of the ways our technologies will be abused haven’t even been invented yet.  Salary: $135,000 - $220,000
  • Research or Senior Fellow, CyberAI, Georgetown University, Center for Security and Emerging Technology, Washington, DC: You will use a blend of analytic and technical skills to help lead and coordinate our research efforts at the intersection of cyber and AI. Tasks include shaping research priorities, conducting original research, overseeing the execution of the research and production of reports, and leading other team members. Specifically, this role will work on one or more of the following areas: assessing how AI techniques might improve cybersecurity; assessing how AI could alter future cyber operations and create new threats; the potential failure modes of related AI techniques...
  • Postdoc / Research Scientist, Language Model Alignment, New York University, Alignment Research Group: Your role in this position will be to lead or oversee collaborative research projects that are compatible with the goals of the Alignment Research Group. This will generally involve work on scalable oversight and related problems surrounding the reliable, safe, responsible use of future broadly-capable AI systems, with a focus on work with language models. You will have the flexibility to develop your own research agenda in this area with the help of more junior student and staff researchers.  Salary: $90,000

Maybe you are thinking that "unpopular" research means "controversial, despised, taboo, likely to get you cancelled or fired" -- like doing research into heated political topics??  But