Aidan O'Gara

Comments

Should I have my full name as my username?

I use my full name as my username and frankly regret it. People who aren’t familiar with EA have googled me and found this, and it’s a lot to have my detailed opinions available publicly. Another username can easily be proven to be yours or linked to your real identity, but won’t be so easily googled. Would probably recommend alternate usernames to most people.

EA views on the AUKUS security pact?

Hey, this is a great topic and it's really awesome that you're writing a brief for such an influential audience. I haven't seen and wouldn't expect to see much EA-specific discussion of foreign policy, but I think this is a great place to have those discussions. I'm not an expert by any means, I've just been following the news on this and related topics, so here are a few off-the-cuff impressions.

I’ve been somewhat convinced that the US foreign policy aim and perhaps the best EA policy aim in relations with the current Chinese government is as stated by the hawkish American Senator Tom Cotton: “The ultimate objective of that strategy should be, to quote the document that launched this country’s ultimately successful strategy against the Soviet Union, the “breakup or the gradual mellowing” of the Chinese Communist Party’s (CCP) power.”

This line of thinking runs strongly against my humanitarian and cooperative instincts, instead drawing on philosophies of realpolitik and international conflict to argue that the West and the current Chinese regime have fundamentally different interests and cannot cooperate in the long run. China wants to be the world's #1 power and quite possibly has the capacity to get there within the coming decades or century. If the current incarnation of the CCP continues to lead China during this time, we might continue to see human rights abuses, mass surveillance, the persecution of dissent, and travesties like the Uighur concentration camps for the duration of the regime. Since the 1990s, Western foreign policy has attempted to cooperate with China by bringing them into our economic sphere and hoping political change would follow. Tom Cotton's "Beat China" paper articulates an emerging counter-consensus arguing that this Chinese regime will not be co-opted and must instead be defeated in the traditional sense. He says:

“The challenges of Nazi Germany, Imperial Japan, and the Soviet Union all ended with total American victory; the Cold War was even won without direct military conflict. Once again, America confronts a powerful totalitarian adversary that seeks to dominate Eurasia and remake the world order, albeit with its own unique and subtle approach.”

This to me is a good candidate for the best EA position on the CCP and China relations. If EA had been around during the Cold War, I hope we would’ve been anti-Soviet (though perhaps not to Red Scare levels). If the Chinese regime is similarly totalitarian, we should aim for their replacement as well.

Here is the link: https://www.cotton.senate.gov/imo/media/doc/210216_1700_China Report_FINAL.pdf

That’s only tangentially related to AUKUS, so to give some more direct thoughts:

I’m not sure why France was left out. One possibility is that nuclear subs were the crux to the deal, and that we could not form the deal without replacing France’s deal. But this seems unlikely, because the first nuclear subs under AUKUS will not be delivered until at least 2040(!!!). [1] Why didn’t we let France in on the deal?

One possibility is simply diplomatic incompetence. The US State Department has been gutted by hiring freezes and other legacies of the Trump administration; maybe we just forgot to take care of a crucial ally. But even then, there probably has to be some positive argument in favor of leaving out France.

One possibility that I have not seen discussed is that France has not contributed significantly to the NATO/Western military effort, having spent less than 2% of GDP on military for many years running [2]. If France will not contribute substantially, then perhaps the US and UK are finished giving France a free ride on their national defense and international prestige. The stakes are higher than they’ve been in 30 years; maybe it’s time to pay up or sit down. Of course, that’s all my baseless speculation.

[1]https://www.google.com/amp/s/amp.theaustralian.com.au/inquirer/aukus-alliance-nuclearpowered-subs-will-arrive-too-late-to-help-us-in-conflict/news-story/8e7e2160542136db9cdadf05362a548d

[2] https://tradingeconomics.com/france/military-expenditure-percent-of-gdp-wb-data.html

What problems in society are both mathematically challenging, verifiable, and competitively rewarding?

More of a skill set than a problem, but data science / machine learning would be my nomination. It's one of the hottest fields for hiring right now, with computer science more generally being a top-earning college major compared to lower earnings for fields like economics, mathematics, statistics, and physics. (See figures here: https://www.wsj.com/amp/articles/which-college-graduates-make-the-most-11574267424.) It's very mathematically challenging, especially at the highest levels of ML. It doesn't necessarily have the same gamesmanship aspect as trading stocks, where profits depend on winning or losing against another human being, but you are optimizing models and being rewarded for predictive accuracy. (You could also try Kaggle if you're really looking for competition.)

Most importantly from an EA perspective, it's good training for contributing to AI Safety, but it also offers great impact opportunities for the right person even if they never work on AI Safety. This post and comment describe some of the opportunities for having impact with AI beyond working on AI safety, including biomedical research and public health research (Post: https://forum.effectivealtruism.org/posts/LHZBcqyCkYqmZLzij/?commentId=iea66e3TxsnFWbHoS#comments ).

Personally I studied economics and statistics before getting some work experience and realizing that CS and ML would be more useful across a broad range of roles. Maybe that’s my bias, but if you have math/STEM inclinations, I’d say you could do worse than learning some Python or majoring in CS.

Should you do a PhD in science?

Mental health effects are the reason I stopped considering doing a PhD. This might be specific to economics, but here’s one study that surveyed Econ PhD students:

“We find that 18% of graduate students experience moderate or severe symptoms of depression and anxiety — more than three times the population average — and 11% report suicidal ideation in a two-week period. The average PhD student reports greater feelings of loneliness than does the average retired American. Only 26% of Economics students report feeling that their work is useful always or most of the time, compared with 70% of Economics faculty and 63% of the working age population.”

https://scholar.harvard.edu/bolotnyy/publications/graduate-student-mental-health-lessons-american-economics-departments

Then again, a one-in-three chance of tenure after graduation isn’t so bad! (I did an estimate of my own using sources I’ve long since forgotten and came to a similar conclusion for economics — my guess was that 33% to 50% of graduates at top 30 schools get tenure track roles, and most of those end up getting tenure.)
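As a rough sanity check on that one-in-three figure, here is the arithmetic spelled out. The 33% to 50% tenure-track range is my half-remembered estimate, and the tenure-given-tenure-track rate is an assumption I'm adding for illustration, so treat the specific numbers as placeholders:

```python
# Back-of-the-envelope check on the tenure odds above.
# The 33-50% tenure-track range is my half-remembered estimate for top-30
# econ programs; the 70-85% tenure-given-tenure-track rate is an assumption.
tenure_track_rate = (0.33, 0.50)
tenure_given_track = (0.70, 0.85)

low = tenure_track_rate[0] * tenure_given_track[0]
high = tenure_track_rate[1] * tenure_given_track[1]
print(f"Implied overall tenure probability: {low:.0%} to {high:.0%}")
# -> somewhere in the 20-45% range, consistent with "about one in three"
```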

EA Debate Championship & Lecture Series

Yeah, it’s kinda hilarious. Speaking so fast that your opponents can’t follow your arguments and therefore lose the round is common practice in some forms of competitive debate. But in other debate categories, using this tactic would immediately lose you the round. In my own personal experience of high school debate, the quality of competitive debate depends very heavily on the particular category of debate.

The video above is Policy Debate, the oldest form of debate which degenerated decades ago into unintelligible speed reading and arguments that every policy would result in worldwide nuclear annihilation. In the 1980s, the National Speech and Debate Association instituted a new form of debate called Lincoln Douglas that attempted to reground debate in commonsense questions about moral values; but LD has also fallen victim to speed reading and even galaxy-brained “kritiks” arguing that the structure of debate itself is racist or sexist and therefore that the round should be abandoned.

Public Forum debate, invented in 2002 as an antidote to Lincoln Douglas, is IMO a very healthy and educational form of debate. Here is the final round of the 2018 national championship (starting at 4:05) on the resolution, “On balance, the benefits of United States Participation in the North American Free Trade Agreement outweigh the consequences.” https://m.youtube.com/watch?v=MUnyLbeu7qU&feature=youtu.be

British Parliamentary debate is another form of debate that, in my experience, is more civil and less "game-able" than other forms of debate (though Harrison D disagrees below, with specifics about its pitfalls). One key difference is that, while Public Forum allows and encourages debaters to spend weeks researching and debating a single specific resolution, Parliamentary debate typically involves generalized preparation on a subject or theme, with the specific resolution revealed only a few minutes before the round begins. Because of this, I think Public Forum is more educational for debaters, but Parliamentary is probably easier for a one-off tournament because debaters won't be expected to have done as much preparation.

Extemporaneous Speaking is another category involving less preparation, where participants are asked a question about current affairs or politics and have 30 minutes to prepare a seven-minute off-the-cuff speech. There is no "opponent" in Extemp, perhaps limiting the level of discourse, but it might be easy to introduce EA-related topics because participants are expected to be conversant in a wide range of topic areas.

On the whole, I’m very glad to see this EA debate tournament being run, and would be very excited to see further work bringing EA topics into debate. I can understand why many people might find some debate tactics toxic and counterproductive, particularly in categories like Policy and LD, but I do think this is the failure of specific categories and tactics and not an indictment of all adversarial debate. Learning the best arguments for both sides of a resolution certainly teaches a bit of an “arguments as soldiers” approach, but I believe the greater effect is to lead debaters to real truths about which arguments are stronger and improve their personal understanding of the issues. In future EA debate events, I would only suggest that organizers be very conscious of these standards and norms when choosing a specific category of debate.

Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain

Two good examples mentioned by Ajeya on the 80,000 Hours podcast: eyes vs cameras, and leaves vs solar panels.

My Career Decision-Making Process

Thanks, that makes sense. Freelancing in software development and tech seems to me like a reasonable path to a well-paid part-time gig for many people. I wonder what other industries or backgrounds lend themselves towards these kinds of jobs.

While this is fascinating, I’d be most interested in your views on AI for Good, healthcare, and the intersection between the two, as potential EA cause areas.

Your views, as I understand them (and please correct me where I’m wrong): You see opportunity for impact in applying AI and ML techniques to solve real-world problems. Examples include forecasting floods and earthquakes, or analyzing digital data on health outcomes. You’re concerned that there might already be enough talented people working on the most impactful projects, thereby reducing your counterfactual impact, but you see opportunities for outsize impact when working on a particularly important problem or making a large counterfactual contribution as an entrepreneur.

Without having done a fraction of the research you clearly have, I’m hopeful that you’re right about health. Anti-aging research and pandemic preparedness seem to be driving EA interest into healthcare and medicine more broadly, and I’m wondering if more mainstream careers in medical research and public health might be potentially quite impactful, if only from a near-term perspective. Would be interested in your thoughts on which problems are high impact, how to identify impactful opportunities when you see them, and perhaps the overall potential of the field for EA — as well as anything anyone else has written on these topics.

AI for Good seems like a robustly good career path in many ways, especially for someone interested in AI Safety (which, as you note, you are not). Your direct impact could be anywhere from "providing a good product to paying customers" to "solving the world's most pressing problems with ML." You can make a good amount of money and donate a fraction of it. You'll meet an ambitious network of people, learn the soft skills of business, and receive a widely respected credential, all valuable capital for any career. Crucially, from my perspective, you'd learn how to develop and deploy AI in the real world, which I think could be very helpful when transitioning to a career in AI technical safety research or AI policy. (AI Safety people, agree or disagree that this experience would be useful?)

Do you have further thoughts about how to have an impactful career doing AI for Good? Where are the highest-impact positions? How do you enter the field, and what qualifications and skills do you need? How can someone judge for themselves the opportunity for impact in a particular role?

Thank you! It’s inspiring and informative to see someone doing such thorough and independent cause prioritization research for their own career.

My Career Decision-Making Process

Really great post, thank you! You discuss the possibility of "part-time earning to give while simultaneously running side projects" and note that you've chosen to work part-time on a PhD in Computational Healthcare while also working a separate part-time job for earning to give. 

Part-time earning to give seems like an interesting possibility I hadn't considered before, mainly because I assumed there are very few part-time jobs that pay well. What has been your experience here? Do you have a unique opportunity that allows you to earn a lot part-time? Perhaps you've worked as a consultant or independent contractor who sets their own hours? What jobs have you considered here? More broadly, would you expect most college-educated people to be able to find part-time work that pays proportionally as well as what they'd earn working full-time? (Not looking for any definitive conclusion on the topic, just your off-the-cuff impressions.)

Thanks again, and good luck with your new career plans!

Aidan O'Gara's Shortform

Three Scenarios for AI Progress

How will AI develop over the next few centuries? Three scenarios seem particularly likely to me: 

  • "Solving Intelligence": Within the next 50 years, a top AI lab like Deepmind or OpenAI builds a superintelligent AI system, by using massive compute within our current ML paradigm.
  • "Comprehensive AI Systems": Over the next century or few, computers keep getting better at a bunch of different domains. No one AI system is incredible at everything, each new job requires fine-tuning and domain knowledge and human-in-the-loop supervision, but soon enough we hit annual GDP growth of 25%.
  • "No takeoff": Looks qualitatively similar to the above, except growth remains steady around 2% for at least several centuries. We remain in the economic paradigm of the Industrial Revolution, and AI makes an economic contribution similar to that of electricity or oil without launching us into a new period of human history. Progress continues as usual.

To clarify my beliefs about AI timelines, I found it helpful to flesh out these concrete "scenarios" by answering a set of closely related questions about how transformative AI might develop:

  • When do we achieve TAI? AGI? Superintelligence? How fast is takeoff? Who builds it? How much compute does it require? How much does that cost? Agent or Tool? Is machine learning the paradigm, or do we have another fundamental shift in research direction? What are the key AI Safety challenges? Who is best positioned to contribute?

The potentially useful insight here is that answering one of these questions helps you answer the others. If massive compute is necessary, then TAI will be built by a few powerful governments or corporations, not by a diverse ecosystem of small startups. If TAI isn't achieved for another century, that affects which research agendas are most important today. Follow this exercise for a while, and you might end up with a handful of distinct scenarios, and then you can judge the relative likelihood and timelines of each.
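As a purely illustrative way of organizing that exercise (the field names and the example instance below are my own hypothetical sketch, not anything from an existing framework), each scenario can be treated as one bundle of correlated answers to the same questions:

```python
from dataclasses import dataclass

# Illustrative structure for the exercise: each scenario is a bundle of
# correlated answers to the same set of questions.
@dataclass
class Scenario:
    name: str
    timeline: str              # when do we achieve TAI / AGI / superintelligence?
    takeoff_speed: str
    who_builds_it: str
    compute_required: str
    paradigm: str              # is ML the paradigm, or does the research direction shift?
    key_safety_challenges: str

solving_intelligence = Scenario(
    name="Solving Intelligence",
    timeline="superintelligence within ~20-50 years",
    takeoff_speed="fast",
    who_builds_it="a top lab like DeepMind or OpenAI, possibly under government oversight",
    compute_required="massive, roughly human-brain scale",
    paradigm="current ML plus far more compute",
    key_safety_challenges="technical alignment of one or a few very powerful systems",
)
```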

Here's my rough sketch of what each of these mean. [Dumping a lot of rough notes here, which is why I'm posting as a shortform.]

  • Solving Intelligence: Within the next 20-50 years, a top AI lab like Deepmind or OpenAI builds a superintelligent AI system.
    • Machine learning is the paradigm that brings us to superintelligence. Most progress is driven by compute. Our algorithms are similar to the human brain, and therefore require similar amounts of compute.
    • It becomes a compute war. You're taking the same fundamental algorithms and spending a hundred billion dollars on compute, and it works. (Informed by Ajeya's report, IMO the most important upshot of which is that spending a truly massive amount of money can cover a sizeable portion of the difference between our current compute and the compute of the human brain. If human brain-level compute is an important threshold, then the few actors who could spend $100B+ have an advantage of decades over actors who can only spend millions. Would like to discuss this further; I put some illustrative numbers on this in the sketch just after this list.)
    • This is most definitely not CAIS. There would be one or two or ten superintelligent AI systems, but not two million.
    • Very few people can contribute effectively to AI Safety, because to contribute effectively you have to be at one of only a handful of organizations in the world. You need to be in "the room where it happens", whether that's the AI lab developing the superintelligence or the government attempting to monitor the project. The handful of people who can contribute are incredibly valuable.
    • What AI safety stuff matters?
      • Technical AI safety research. The people right now who are building AI that scales safely. It turns out you can do effective research now because our current methods are the methods that bring us to superintelligence, and whether or not our current research is good enough determines whether or not we survive.
      • Highest levels of government, for their ability to regulate AI labs. A project like this could be nationalized, or carried out under strict oversight from government regulators. Realistically I'd expect the opposite, that governments would be too slow to see the risks and rewards in such a technical domain.
      • People who imagine long-term policies for governing AI. I don't know how much useful work exists here, but I have to imagine there's some good stuff about how to run the world under superintelligence. What's the game theory of multipolar scenarios? What are the points of decisive strategic advantage?
  • Comprehensive AI Systems: Over the next century or few, computers keep getting better at a bunch of different domains. No one AI system is incredible at everything, each new job requires fine-tuning and domain knowledge and human-in-the-loop supervision, but soon enough we hit annual GDP growth of 25%.
    • Governments go about international relations the same as usual, just with better weapons. There are some strategic effects of this that Henry Kissinger and Justin Ding understand quite well, but there's no instant collapse into one world government or anything. There are a few outside risks here that would be terrible (a new WMD, or missile defense systems that threaten MAD), but basically we just get killer robots, which will probably be fine.
      • Killer robots are a key AI safety training ground. If they're inevitable, we should be integrated within enemy lines in order to deploy safely.
    • We have lots of warning shots.
    • What are the existential risks? Nuclear war. Autonomous weapons accidents, which I suppose could turn out to be existential?? Long-term misalignment: over the next 300 years, we hand off the fate of the universe to the robots, and it's not quite the right trajectory.
    • What AI Safety work is most valuable?
      • Run-of-the-mill AI Policy work. Accomplishing normal government objectives often unrelated to existential risk specifically, by driving forward AI progress in a technically-literate and altruistically-thoughtful way.
      • Driving forward AI progress. It's a valuable technology that will help lots of people, and accelerating its arrival is a good thing.
        • With particular attention to safety. Building a CS culture, a Silicon Valley, a regulatory environment, and international cooperation that will sustain the three hundred year transition.
      • Working on military AI systems. They're the killer robots most likely to run amok and kill some people (or 7 billion). Malfunctioning AI can also cause nuclear war by setting off geopolitical conflict. Also, new WMDs would be terrible.
  • No takeoff: Looks qualitatively similar to the above, except growth remains steady around 2% for at least several centuries. We remain in the economic paradigm of the Industrial Revolution, and AI makes an economic contribution similar to that of electricity or oil without launching us into a new period of human history.
    • This seems entirely possible, maybe even the most likely outcome. I've been surrounded by people talking about short timelines from a pretty young age so I never really thought about this possibility, but "takeoff" is not guaranteed. The world in 500 years could resemble the world today; in fact, I'd guess most thoughtful people don't think much about transformative AI and would assume that this is the default scenario.
    • Part of why I think this is entirely plausible is because I don't see many independently strong arguments for short AI timelines:
      • IMO the strongest argument for short timelines is that, within the next few decades, we'll cross the threshold for using more compute than the human brain. If this turns out to be a significant threshold and a fair milestone to anchor against, then we could hit an inflection point and rapidly see Bostrom Superintelligence-type scenarios.
        • I see this belief as closely associated with the entire first scenario described above: Held by OpenAI/DeepMind, the idea that we will "solve intelligence" with an agenty AI running a simple fundamental algorithm with massive compute and effectively generalizing across many domains.
      • IIRC, the most prominent early argument for short AI timelines, as discussed by Bostrom, Yudkowsky, and others, was recursive self-improvement. The AI will build smarter AIs, meaning we'll eventually hit an inflection point of runaway improvement positively feeding into itself and rapidly escalating from near-human to lightyears-beyond-human intelligence. This argument seems less popular in recent years, though I couldn't say exactly why. My only opinion would be that this seems more like an argument for "fast takeoff" (once we have near-human level AI systems for building AI systems, we'll quickly achieve superhuman performance in that area), but does not tell you when that takeoff will occur. For all we know, this fast takeoff could happen in hundreds of years. (Or I could be misunderstanding the argument here, I'd like to think more about it.)
      • Surveys asking AI researchers when they expect superhuman AI have received lots of popular coverage and might be driving widespread acceptance of short timelines. My very subjective and underinformed intuition puts little weight on these surveys compared to the object-level arguments. The fact that people trying to build superintelligence believe it's possible within their lifetime certainly makes me take that possibility seriously, but it doesn't provide much of an upper bound on how long it might take. If the current consensus of AI researchers proves to be wrong about progress over the next century, I wouldn't expect their beliefs about the next five or ten centuries to hold up; the worldview assumptions might just be entirely off-base.
      • These are the only three arguments for short timelines I've ever heard and remembered. Interested if I'm forgetting anything big here.
      • Compare this to the simple prior that history will continue with slow and steady single-digit growth as it has since the Industrial Revolution, and I see a significant chance that we don't see AI takeoff for centuries, if ever. (That's before considering object level arguments for longer timelines, which admittedly I don't see many of, and therefore I don't put much weight on.)
    • I haven't fully thought through all of this, but would love to hear others' thoughts on the probability of "no takeoff".
    • Maybe the future of AI looks like this guy on the internet’s business slide deck: https://static1.squarespace.com/static/50363cf324ac8e905e7df861/t/5e45cbd35750af6b4e60ab0f/1581632599540/2017+Benedict+Evans+Ten+Year+Futures.pdf 
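
To put some illustrative numbers on the "compute war" point from the first scenario above: a minimal order-of-magnitude sketch, where every figure (the cost and size of a current large training run, the brain-anchored compute target, the hardware improvement factor) is an assumption chosen for illustration rather than a number taken from Ajeya's report.

```python
import math

# Order-of-magnitude sketch of the "willingness to spend" point above.
# Every number here is an illustrative assumption, not a figure from Ajeya's report.
current_run_flop = 1e23     # assumed compute of a recent large training run
current_run_cost = 1e7      # assumed cost of that run, in dollars (~$10M)
target_flop = 1e30          # assumed brain-anchored training requirement

flop_per_dollar = current_run_flop / current_run_cost

def max_run_flop(budget_dollars: float, hardware_improvement: float = 1.0) -> float:
    """Largest run affordable at a given budget, with an optional
    improvement factor in FLOP per dollar relative to today."""
    return budget_dollars * flop_per_dollar * hardware_improvement

gap_orders = math.log10(target_flop / current_run_flop)    # ~7 orders of magnitude
spend_orders = math.log10(1e11 / current_run_cost)         # $100B vs ~$10M -> ~4 orders
print(f"Gap to assumed target: ~{gap_orders:.0f} orders of magnitude")
print(f"Covered just by scaling spend to $100B: ~{spend_orders:.0f} orders")
print(f"Affordable at $100B with 100x cheaper compute: {max_run_flop(1e11, 100):.0e} FLOP")
```

On assumptions like these, scaling willingness-to-spend from millions to $100B covers several of the orders of magnitude between today's runs and a brain-anchored target, which is the sense in which the handful of actors who can spend at that level get a decades-long head start.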

This is pretty rough around the edges, but these three scenarios seem like the key possibilities for the next few centuries that I can see at this point. For the hell of it, I'll give some very weak credences: 10% that we solve superintelligence within decades, 25% that CAIS brings double-digit growth within a century or so, maybe 50% that human progress continues as usual for at least a few centuries, and (at least) 15% that what ends up happening looks nothing like any of these scenarios. 

Very interested in hearing any critiques or reactions to these scenarios or the specific arguments within.

Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain

This is really persuasive to me, thanks for posting. Previously I’d heard arguments anchoring AGI timelines to the amount of compute used by the human brain, but I didn’t see much reason at all for our algorithms to use the same amount of compute as the brain. But you point to the example of flight, where all the tricky issues of how to get something to fly were quickly solved almost as soon as we built engines as powerful as birds. Now I’m wondering if this is a pattern we’ve seen many times — if so, I’d be much more open to anchoring AI timelines on the amount of compute used by the human brain (which would mean significantly shorter timelines than I’d currently expect).

So my question going forward would be: What other machines have humans built to mimic the functionality of living organisms? In these cases, do we see a single factor driving most progress, like engine power or computing power? If so, do machines perform as well as living organisms with similar levels of this key variable? Or, does the human breakthrough to performing on-par with evolution come at a more random point, driven primarily by one-off insights or by a bunch of non-obvious variables?

Within AI, you could examine how much compute it took to mimic certain functions of organic brains. How much compute does it take to build human-level speech recognition or image classification, and how does that compare to the compute used in the corresponding areas of the human brain? (Joseph Carlsmith’s OpenPhil investigation of human level compute covered similar territory and might be helpful here, but I haven’t gone through it in enough detail to know.)

Does transportation offer other examples? Analogues between boats and fish? Land travel and fast mammals?

I’m having trouble thinking of good analogues, but I’m guessing they have to exist. AI Impacts’ discontinuities investigation feels like a similar type of question about examples of historical technological progress, and it seems to have proven tractable to research and useful once answered. I’d be very interested in further research in this vein — anchoring AGI timelines to human compute estimates seems to me like the best argument (even the only good argument?) for short timelines, and this post alone makes those arguments much more convincing to me.
