Aidan O'Gara

Comments

My Career Decision-Making Process

Thanks, that makes sense. Freelancing in software development and tech seems to me like a reasonable path to a well-paid part-time gig for many people. I wonder what other industries or backgrounds lend themselves to these kinds of jobs.

While this is fascinating, I’d be most interested in your views on AI for Good, healthcare, and the intersection between the two, as potential EA cause areas.

Your views, as I understand them (and please correct me where I’m wrong): You see opportunity for impact in applying AI and ML techniques to solve real-world problems. Examples include forecasting floods and earthquakes, or analyzing digital data on health outcomes. You’re concerned that there might already be enough talented people working on the most impactful projects, thereby reducing your counterfactual impact, but you see opportunities for outsize impact when working on a particularly important problem or making a large counterfactual contribution as an entrepreneur.

Without having done a fraction of the research you clearly have, I’m hopeful that you’re right about health. Anti-aging research and pandemic preparedness seem to be driving EA interest into healthcare and medicine more broadly, and I’m wondering if more mainstream careers in medical research and public health might be quite impactful, if only from a near-term perspective. Would be interested in your thoughts on which problems are high impact, how to identify impactful opportunities when you see them, and perhaps the overall potential of the field for EA — as well as anything anyone else has written on these topics.

AI for Good seems like a robustly good career path in many ways, especially for someone interested in AI Safety (which, as you note, you are not). Your direct impact could be anywhere from “providing a good product to paying customers” to “solving the world’s most pressing problems with ML.” You can make a good amount of money and donate a fraction of it. You’ll meet an ambitious network of people, learn the soft skills of business, and receive a widely respected credential — valuable capital for any career. Crucially, from my perspective, you’d learn how to develop and deploy AI in the real world, which I think could be very helpful when transitioning to a career in AI technical safety research or AI policy. (AI Safety people, agree or disagree that this experience would be useful?)

Do you have further thoughts about how to have an impactful career doing AI for Good? Where are the highest impact positions? How do you enter the field, and what qualifications and skills do you need? How can someone judge for themselves the opportunity for impact in a particular role?

Thank you! It’s inspiring and informative to see someone doing such thorough and independent cause prioritization research for their own career.

My Career Decision-Making Process

Really great post, thank you! You discuss the possibility of "part-time earning to give while simultaneously running side projects" and note that you've chosen to work part-time on a PhD in Computational Healthcare while also working a separate part-time job for earning to give. 

Part-time earning to give seems like an interesting possibility I hadn't considered before, mainly because I assumed there are very few part-time jobs that pay well. What has been your experience here? Do you have a unique opportunity that allows you to earn a lot part-time? Perhaps you've worked as a consultant or independent contractor who sets their own hours? What jobs have you considered here? More broadly, would you expect most college-educated people to be able to find part-time work that pays proportionally as well as what they'd earn working full-time? (Not looking for any definitive conclusion on the topic, just your off-the-cuff impressions.)

Thanks again, and good luck with your new career plans!

Aidan O'Gara's Shortform

Three Scenarios for AI Progress

How will AI develop over the next few centuries? Three scenarios seem particularly likely to me: 

  • "Solving Intelligence": Within the next 50 years, a top AI lab like DeepMind or OpenAI builds a superintelligent AI system using massive compute within our current ML paradigm.
  • "Comprehensive AI Systems": Over the next century or few, computers keep getting better at a bunch of different domains. No one AI system is incredible at everything; each new job requires fine-tuning, domain knowledge, and human-in-the-loop supervision, but soon enough we hit annual GDP growth of 25%.
  • "No takeoff": Looks qualitatively similar to the above, except growth remains steady around 2% for at least several centuries. We remain in the economic paradigm of the Industrial Revolution, and AI makes an economic contribution similar to that of electricity or oil without launching us into a new period of human history. Progress continues as usual.

To clarify my beliefs about AI timelines, I found it helpful to flesh out these concrete "scenarios" by answering a set of closely related questions about how transformative AI might develop:

  • When do we achieve TAI? AGI? Superintelligence? How fast is takeoff? Who builds it? How much compute does it require? How much does that cost? Agent or Tool? Is machine learning the paradigm, or do we have another fundamental shift in research direction? What are the key AI Safety challenges? Who is best positioned to contribute?

The potentially useful insight here is that answering one of these questions helps you answer the others. If massive compute is necessary, then TAI will be built by a few powerful governments or corporations, not by a diverse ecosystem of small startups. If TAI isn't achieved for another century, that affects which research agendas are most important today. Follow this exercise for a while, and you might end up with a handful of distinct scenarios, and then you can judge the relative likelihood and timelines of each.

Here's my rough sketch of what each of these means. [Dumping a lot of rough notes here, which is why I'm posting as a shortform.]

  • Solving Intelligence: Within the next 20-50 years, a top AI lab like DeepMind or OpenAI builds a superintelligent AI system.
    • Machine learning is the paradigm that brings us to superintelligence. Most progress is driven by compute. Our algorithms are similar to the human brain, and therefore require similar amounts of compute.
    • It becomes a compute war. You're taking the same fundamental algorithms and spending a hundred billion dollars on compute, and it works. (Informed by Ajeya's report, IMO the most important upshot of which is that spending a truly massive amount of money can cover a sizeable portion of the difference between our current compute and the compute of the human brain. If human brain-level compute is an important threshold, then the few actors who can spend $100B+ have an advantage of decades over actors who can only spend millions; see the rough sketch after this list. I'd like to discuss this further.)
    • This is most definitely not CAIS. There would be one or two or ten superintelligent AI systems, but not two million.
    • Very few people can contribute effectively to AI Safety, because to contribute effectively you have to be at one of only a handful of organizations in the world. You need to be in "the room where it happens", whether that's the AI lab developing the superintelligence or the government attempting to monitor the project. The handful of people who can contribute are incredibly valuable.
    • What AI safety stuff matters?
      • Technical AI safety research. The people right now who are building AI that scales safely. It turns out you can do effective research now because our current methods are the methods that bring us to superintelligence, and whether or not our current research is good enough determines whether or not we survive.
      • Highest levels of government, for their ability to regulate AI labs. A project like this could be nationalized, or carried out under strict oversight from government regulators. Realistically I'd expect the opposite, that governments would be too slow to see the risks and rewards in such a technical domain.
      • People who imagine long-term policies for governing AI. I don't know how much useful work exists here, but I have to imagine there's some good stuff about how to run the world under superintelligence. What's the game theory of multipolar scenarios? What are the points of decisive strategic advantage?
  • Comprehensive AI Systems: Over the next century or few, computers keep getting better at a bunch of different domains. No one AI system is incredible at everything; each new job requires fine-tuning, domain knowledge, and human-in-the-loop supervision, but soon enough we hit annual GDP growth of 25%.
    • Governments go about international relations the same as usual, just with better weapons. There are some strategic effects of this that Henry Kissinger and Justin Ding understand quite well, but there's no instant collapse into one world government or anything. There are a few outside risks here that would be terrible (a new WMD, or missile defense systems that threaten MAD), but basically we just get killer robots, which will probably be fine.
      • Killer robots are a key AI safety training ground. If they're inevitable, we should be integrated within enemy lines in order to deploy safely.
    • We have lots of warning shots.
    • What are the existential risks? Nuclear war. Autonomous weapons accidents, which I suppose could turn out to be existential?? Long-term misalignment: over the next 300 years, we hand off the fate of the universe to the robots, and it's not quite the right trajectory.
    • What AI Safety work is most valuable?
      • Run-of-the-mill AI Policy work. Accomplishing normal government objectives often unrelated to existential risk specifically, by driving forward AI progress in a technically-literate and altruistically-thoughtful way.
      • Driving forward AI progress. It's a valuable technology that will help lots of people, and accelerating its arrival is a good thing.
        • With particular attention to safety. Building a CS culture, a Silicon Valley, a regulatory environment, and international cooperation that will sustain the three hundred year transition.
      • Working on military AI systems. They're the killer robots most likely to run amok and kill some people (or 7 billion). Malfunctioning AI can also cause nuclear war by setting off geopolitical conflict. Also, new WMDs would be terrible.
  • No takeoff: Looks qualitatively similar to the above, except growth remains steady around 2% for at least several centuries. We remain in the economic paradigm of the Industrial Revolution, and AI makes an economic contribution similar to that of electricity or oil without launching us into a new period of human history.
    • This seems entirely possible, maybe even the most likely outcome. I've been surrounded by people talking about short timelines from a pretty young age so I never really thought about this possibility, but "takeoff" is not guaranteed. The world in 500 years could resemble the world today; in fact, I'd guess most thoughtful people don't think much about transformative AI and would assume that this is the default scenario.
    • Part of why I think this is entirely plausible is because I don't see many independently strong arguments for short AI timelines:
      • IMO the strongest argument for short timelines is that, within the next few decades, we'll cross the threshold for using more compute than the human brain. If this turns out to be a significant threshold and a fair milestone to anchor against, then we could hit an inflection point and rapidly see Bostrom Superintelligence-type scenarios.
        • I see this belief as closely associated with the entire first scenario described above: the idea, held by OpenAI/DeepMind, that we will "solve intelligence" with an agenty AI running a simple fundamental algorithm with massive compute and effectively generalizing across many domains.
      • IIRC, the most prominent early argument for short AI timelines, as discussed by Bostrom, Yudkowsky, and others, was recursive self-improvement. The AI will build smarter AIs, meaning we'll eventually hit an inflection point of runaway improvement positively feeding into itself and rapidly escalating from near-human to lightyears-beyond-human intelligence. This argument seems less popular in recent years, though I couldn't say exactly why. My only opinion would be that this seems more like an argument for "fast takeoff" (once we have near-human level AI systems for building AI systems, we'll quickly achieve superhuman performance in that area), but does not tell you when that takeoff will occur. For all we know, this fast takeoff could happen in hundreds of years. (Or I could be misunderstanding the argument here; I'd like to think more about it.)
      • Surveys asking AI researchers when they expect superhuman AI have received lots of popular coverage and might be driving widespread acceptance of short timelines. My very subjective and underinformed intuition puts little weight on these surveys compared to the object-level arguments. The fact that people trying to build superintelligence believe it's possible within their lifetime certainly makes me take that possibility seriously, but it doesn't provide much of an upper bound on how long it might take. If the current consensus of AI researchers proves to be wrong about progress over the next century, I wouldn't expect their beliefs about the next five or ten centuries to hold up; the worldview assumptions might just be entirely off-base.
      • These are the only three arguments for short timelines I've ever heard and remembered. Interested if I'm forgetting anything big here.
      • Compare this to the simple prior that history will continue with slow and steady single-digit growth as it has since the Industrial Revolution, and I see a significant chance that we don't see AI takeoff for centuries, if ever. (That's before considering object level arguments for longer timelines, which admittedly I don't see many of, and therefore I don't put much weight on.)
    • I haven't fully thought through all of this, but would love to hear others' thoughts on the probability of "no takeoff".
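
As a rough illustration of the "compute war" point in the first scenario above, here's a minimal back-of-the-envelope sketch. The budgets and the hardware price-performance doubling time below are illustrative assumptions of mine, not figures taken from Ajeya's report; the point is just that a roughly 10^5 gap in spending corresponds to decades of equivalent hardware progress under plausible-looking numbers.

```python
import math

# Back-of-the-envelope: how many years of hardware progress does a spending
# gap buy? All numbers are illustrative assumptions, not figures from
# Ajeya's report.

big_budget = 100e9      # assumed well-resourced actor: $100B
small_budget = 1e6      # assumed small actor: $1M
doubling_years = 2.5    # assumed price-performance doubling time (a guess)

spending_gap_ooms = math.log10(big_budget / small_budget)  # orders of magnitude
equivalent_doublings = spending_gap_ooms * math.log2(10)   # ~3.32 doublings per OOM
years_of_advantage = equivalent_doublings * doubling_years

print(f"Spending gap: {spending_gap_ooms:.1f} orders of magnitude")
print(f"Equivalent hardware-progress advantage: ~{years_of_advantage:.0f} years")
```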

This is pretty rough around the edges, but these three scenarios seem like the key possibilities for the next few centuries that I can see at this point. For the hell of it, I'll give some very weak credences: 10% that we solve superintelligence within decades, 25% that CAIS brings double-digit growth within a century or so, maybe 50% that human progress continues as usual for at least a few centuries, and (at least) 15% that what ends up happening looks nothing like any of these scenarios. 

Very interested in hearing any critiques or reactions to these scenarios or the specific arguments within.

Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain

This is really persuasive to me, thanks for posting. Previously I’d heard arguments anchoring AGI timelines to the amount of compute used by the human brain, but I didn’t see much reason at all for our algorithms to use the same amount of compute as the brain. But you point to the example of flight, where all the tricky issues of how to get something to fly were quickly solved almost as soon as we built engines as powerful as birds. Now I’m wondering if this is a pattern we’ve seen many times — if so, I’d be much more open to anchoring AI timelines on the amount of compute used by the human brain (which would mean significantly shorter timelines than I’d currently expect).

So my question going forward would be: What other machines have humans built to mimic the functionality of living organisms? In these cases, do we see a single factor driving most progress, like engine power or computing power? If so, do machines perform as well as living organisms with similar levels of this key variable? Or, does the human breakthrough to performing on-par with evolution come at a more random point, driven primarily by one-off insights or by a bunch of non-obvious variables?

Within AI, you could examine how much compute it took to mimic certain functions of organic brains. How much compute does it take to build human-level speech recognition or image classification, and how does that compare to the compute used in the corresponding areas of the human brain? (Joseph Carlsmith’s OpenPhil investigation of human level compute covered similar territory and might be helpful here, but I haven’t gone through it in enough detail to know.)

Does transportation offer other examples? Analogues between boats and fish? Land travel and fast mammals?

I’m having trouble thinking of good analogues, but I’m guessing they have to exist. AI Impacts’ discontinuities investigation feels like a similar type of question about examples of historical technological progress, and it seems to have proven tractable to research and useful once answered. I’d be very interested in further research in this vein — anchoring AGI timelines to human compute estimates seems to me like the best argument (even the only good argument?) for short timelines, and this post alone makes those arguments much more convincing to me.

AMA: Elizabeth Edwards-Appell, former State Representative

What impact do you think you were able to have as a State Rep? Are there any specific projects or policies you’re particularly proud of?

2020 AI Alignment Literature Review and Charity Comparison

Yes, looks like LTFF is also looking for funding. Edited, thanks.

2020 AI Alignment Literature Review and Charity Comparison

Fascinating that very few top AI Safety organizations are looking for more funding. By my count, only 4 of these 17 organizations are even publicly requesting donations this year: three independent research groups (GCRI, CLR, and AI Impacts) and an operations org (BERI). Across the board, it doesn't seem like AI Safety is very funding constrained.

Based on this report, I think the best donation opportunity among these orgs is BERI, the Berkeley Existential Risk Initiative. Larks says that BERI "provides support to existential risk groups at top universities to facilitate activities (like hiring engineers and assistants) that would be hard within the university context." According to BERI's blog post requesting donations, this support includes:

  • $250k to hire contracted researchers and research assistants for university and independent research groups.
  • $170k for additional support: productivity coaches, software engineers, copy editors, graphic designers, and other specialty services.
  • Continued employment of two machine learning research engineers to work alongside researchers at CHAI.
  • Hiring Robert Trager and Joslyn Barnhart as Visiting Senior Research Fellows with GovAI, along with a small team of supporting research personnel.
  • Support for research on European AI strategy and policy in association with CSER.
  • Combining immediate COVID-19 assistance with long-term benefits.

BERI is also supporting new existential risk research groups at other top universities, including: 

  • The Autonomous Learning Laboratory at UMass Amherst, led by Phil Thomas
  • Meir Friedenberg and Joe Halpern at Cornell
  • InterACT at UC Berkeley, led by Anca Dragan
  • The Stanford Existential Risks Initiative
  • Yale Effective Altruism, to support x-risk discussion groups
  • Baobao Zhang and Sarah Kreps at Cornell

Donating to BERI seems to me like the only way to give more money to AI Safety researchers at top universities. FHI, CHAI, and CSER aren't publicly seeking donations, seemingly because anything you directly donate might end up either (a) replacing funding they would've received from their university or other donors, or (b) being limited in terms of what they're allowed to spend it on. If that's true, then the only way to counterfactually increase funding at these groups is through BERI.

If you would like, click here to donate to BERI.

Founders Pledge Climate & Lifestyle Report

Thank you for sharing this, really love the Main Conclusions here. As usual with comments, most of what you’re saying makes sense to me, but I’d like to focus on one quibble about the presentation of your conclusions.

I think Figure 2 in the report could easily be misinterpreted as strong evidence for a conclusion you later disavow: that by far the most important lifestyle choice for reducing your CO2 emissions is whether you have another child. The Key Takeaways section begins with this striking chart where the first bar is taller than all the rest added up, but the body paragraphs give context and caveats before finishing on a more sober conclusion. The conclusion makes perfect sense to me, but it’s the opposite of what I would’ve guessed looking at the first chart in the section. If you’re most confident in the estimates that account for government policy, you could make them alone your first chart, and only discuss the other (potentially misleading) estimates later.

I probably only noticed this because you’re discussing such a hot button issue. Footnotes work for dry academic questions, but when the question is having fewer kids to reduce carbon emissions, I start thinking about how Twitter and CNN would read this.

Anyways, hope that’s helpful, feel free to disagree, and thanks for the great research!

Make a Public Commitment to Writing EA Forum Posts

Your blog is awesome, looking forward to reading these posts and anything else you put on the Forum!

The emerging school of patient longtermism

I really like this kind of post from 80,000 Hours: a quick update on their general worldview. Patient philanthropy isn’t something I know much about, but this article makes me take it seriously and I’ll probably read what they recommend.

Another benefit of shorter updates might be sounding less conclusive and more thinking-out-loud. Comprehensive, thesis-driven articles might give readers the false impression that 80K is extremely confident in a particular belief, even when the article tries to accurately state the level of confidence. It’s hard to predict how messages will spread organically over time, but frequently releasing smaller updates might highlight that 80K’s thinking is uncertain and always changing. (Of course, the opposite could be true.)
