I use my full name as my username and frankly regret it. People who aren’t familiar with EA have googled me and found this, and it’s a lot to have my detailed opinions available publicly. An alternate username can still be proven to be yours or linked to your real identity if you ever need that, but it won’t be so easily googled. I’d probably recommend alternate usernames to most people.
Hey, this is a great topic and it’s really awesome that you’re writing a brief for such an influential audience. I haven’t seen and wouldn’t expect to see much EA-specific discussion of foreign policy, but I think this is a great place to have those discussions. I’m not an expert by any means, just someone who has been following the news on this and related topics, so here are a few off-the-cuff impressions.
I’ve been somewhat convinced that the right US foreign policy aim, and perhaps the best EA policy aim, in relations with the current Chinese government is the one stated by the hawkish American Senator Tom Cotton: “The ultimate objective of that strategy should be, to quote the document that launched this country’s ultimately successful strategy against the Soviet Union, the “breakup or the gradual mellowing” of the Chinese Communist Party’s (CCP) power.”
This line of thinking runs strongly against my humanitarian and cooperative instincts, instead drawing on philosophies of realpolitik and international conflict to argue that the West and the current Chinese regime have fundamentally different interests and cannot cooperate in the long run. China wants to be the world’s #1 power and quite possibly has the capability to get there within the coming decades or century. If the current incarnation of the CCP continues to lead China during this time, we might continue to see human rights abuses, mass surveillance, the persecution of dissent, and travesties like the Uighur concentration camps for the duration of the regime. Since the 1990s, Western foreign policy has attempted to cooperate with China by bringing them into our economic sphere and hoping political change would follow. Tom Cotton’s “Beat China” paper articulates an emerging counter-consensus arguing that this Chinese regime will not be co-opted and must instead be defeated in the traditional sense. He says:
“The challenges of Nazi Germany, Imperial Japan, and the Soviet Union all ended with total American victory; the Cold War was even won without direct military conflict. Once again, America confronts a powerful totalitarian adversary that seeks to dominate Eurasia and remake the world order, albeit with its own unique and subtle approach.”
This to me is a good candidate for the best EA position on the CCP and China relations. If EA had been around during the Cold War, I hope we would’ve been anti-Soviet (though perhaps not to Red Scare levels). If the Chinese regime is similarly totalitarian, we should aim for their replacement as well.
Here is the link: https://www.cotton.senate.gov/imo/media/doc/210216_1700_China Report_FINAL.pdf
That’s only tangentially related to AUKUS, so to give some more direct thoughts:
I’m not sure why France was left out. One possibility is that nuclear subs were the crux of the deal, and that we could not form it without replacing France’s existing submarine contract. But this seems unlikely, because the first nuclear subs under AUKUS will not be delivered until at least 2040(!!!). So why didn’t we let France in on the deal?
One possibility is simply diplomatic incompetence. The US State Department has been gutted by hiring freezes and other legacies of the Trump administration; maybe we just forgot to take care of a crucial ally. But even then, there probably has to be some positive argument in favor of leaving out France.
One possibility that I have not seen discussed is that France has not contributed significantly to the NATO/Western military effort, having spent less than 2% of GDP on its military for many years running. If France will not contribute substantially, then perhaps the US and UK are finished giving France a free ride on its national defense and international prestige. The stakes are higher than they’ve been in 30 years; maybe it’s time to pay up or sit down. Of course, that’s all my baseless speculation.
More of a skill set than a problem, but data science / machine learning would be my nomination. It’s one of the hottest fields for hiring right now, with computer science more generally being a top-earning college major vs. lower earnings for fields like economics, mathematics, statistics, and physics. (See figures here: https://www.wsj.com/amp/articles/which-college-graduates-make-the-most-11574267424.) It’s very mathematically challenging, especially at the highest levels of ML. It doesn’t necessarily have the same gamesmanship aspect as trading stocks, where profits depend on winning or losing against another human being, but you are optimizing models and being rewarded for predictive accuracy. (You could also try Kaggle if you’re really looking for competition.)
Most importantly from an EA perspective, it’s good training for contributing to AI Safety, but it also offers great impact opportunities for the right person even if they never work on AI Safety. This post and comment describe some of the opportunities for having impact with AI beyond working on AI safety, including biomedical research and public health research (Post: https://forum.effectivealtruism.org/posts/LHZBcqyCkYqmZLzij/?commentId=iea66e3TxsnFWbHoS#comments ).
Personally I studied economics and statistics before getting some work experience and realizing that CS and ML would be more useful across a broad range of roles. Maybe that’s my bias, but if you have math/STEM inclinations, I’d say you could do worse than learning some Python or majoring in CS.
Mental health effects are the reason I stopped considering doing a PhD. This might be specific to economics, but here’s one study that surveyed Econ PhD students:
“We find that 18% of graduate students experience moderate or severe symptoms of depression and anxiety — more than three times the population average — and 11% report suicidal ideation in a two-week period. The average PhD student reports greater feelings of loneliness than does the average retired American. Only 26% of Economics students report feeling that their work is useful always or most of the time, compared with 70% of Economics faculty and 63% of the working age population.”
Then again, a one-in-three chance of tenure after graduation isn’t so bad! (I did an estimate of my own using sources I’ve long since forgotten and came to a similar conclusion for economics — my guess was that 33% to 50% of graduates at top 30 schools get tenure track roles, and most of those end up getting tenure.)
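To make that back-of-the-envelope estimate explicit, here’s a tiny sketch of the arithmetic. The 75% conditional tenure rate is my own placeholder for “most of those end up getting tenure,” not a figure from any source:

```python
# Rough tenure-odds estimate for econ PhDs at top-30 programs.
# All inputs are illustrative assumptions, not sourced data.
p_tenure_track = (0.33, 0.50)   # guessed share of grads landing tenure-track roles
p_tenure_given_tt = 0.75        # placeholder for "most of those get tenure"

low, high = (p * p_tenure_given_tt for p in p_tenure_track)
print(f"Overall tenure odds: {low:.0%} to {high:.0%}")
```

Under these assumptions the overall odds come out to roughly 25–38%, consistent with the “one-in-three” figure above.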
Yeah, it’s kinda hilarious. Speaking so fast that your opponents can’t follow your arguments and therefore lose the round is common practice in some forms of competitive debate. But in other debate categories, using this tactic would immediately lose you the round. In my own personal experience of high school debate, the quality of competitive debate depends very heavily on the particular category of debate.
The video above is Policy Debate, the oldest form of debate which degenerated decades ago into unintelligible speed reading and arguments that every policy would result in worldwide nuclear annihilation. In the 1980s, the National Speech and Debate Association instituted a new form of debate called Lincoln Douglas that attempted to reground debate in commonsense questions about moral values; but LD has also fallen victim to speed reading and even galaxy-brained “kritiks” arguing that the structure of debate itself is racist or sexist and therefore that the round should be abandoned.
Public Forum debate, invented in 2002 as an antidote to Lincoln Douglas, is IMO a very healthy and educational form of debate. Here is the final round of the 2018 national championship (starting at 4:05) on the resolution, “On balance, the benefits of United States Participation in the North American Free Trade Agreement outweigh the consequences.” https://m.youtube.com/watch?v=MUnyLbeu7qU&feature=youtu.be
British Parliamentary debate is another form of debate that, in my experience, is more civil and less “game-able” than other forms of debate (though Harrison D disagrees below, with specifics about its pitfalls). One key difference is that, while Public Forum allows and encourages debaters to spend weeks researching and debating a single specific resolution, Parliamentary debate typically involves generalized preparation on a subject or theme and only reveals the specific resolution a few minutes before the round begins. Because of this, I think Public Forum is more educational for debaters, but Parliamentary is probably a better fit for a one-off tournament because debaters won’t be expected to have done as much preparation.
Extemporaneous Speaking is another category involving less preparation, where participants are asked a question about current affairs or politics and have 30 minutes to prepare a seven-minute off-the-cuff speech. There is no “opponent” in Extemp, perhaps limiting the level of discourse, but it might be easy to introduce EA-related topics because participants are expected to be conversant in a wide range of topic areas.
On the whole, I’m very glad to see this EA debate tournament being run, and would be very excited to see further work bringing EA topics into debate. I can understand why many people might find some debate tactics toxic and counterproductive, particularly in categories like Policy and LD, but I do think this is the failure of specific categories and tactics and not an indictment of all adversarial debate. Learning the best arguments for both sides of a resolution certainly teaches a bit of an “arguments as soldiers” approach, but I believe the greater effect is to lead debaters to real truths about which arguments are stronger and improve their personal understanding of the issues. In future EA debate events, I would only suggest that organizers be very conscious of these standards and norms when choosing a specific category of debate.
Two good examples mentioned by Ajeya on the 80,000 Hours podcast: eyes vs cameras, and leaves vs solar panels.
Thanks, that makes sense. Freelancing in software development and tech seems to me like a reasonable path to a well-paid part-time gig for many people. I wonder what other industries or backgrounds lend themselves towards these kinds of jobs.
While this is fascinating, I’d be most interested in your views on AI for Good, healthcare, and the intersection between the two, as potential EA cause areas.
Your views, as I understand them (and please correct me where I’m wrong): You see opportunity for impact in applying AI and ML techniques to solve real-world problems. Examples include forecasting floods and earthquakes, or analyzing digital data on health outcomes. You’re concerned that there might already be enough talented people working on the most impactful projects, thereby reducing your counterfactual impact, but you see opportunities for outsize impact when working on a particularly important problem or making a large counterfactual contribution as an entrepreneur.
Without having done a fraction of the research you clearly have, I’m hopeful that you’re right about health. Anti-aging research and pandemic preparedness seem to be driving EA interest into healthcare and medicine more broadly, and I’m wondering if more mainstream careers in medical research and public health might be potentially quite impactful, if only from a near-term perspective. Would be interested in your thoughts on which problems are high impact, how to identify impactful opportunities when you see them, and perhaps the overall potential of the field for EA — as well as anything anyone else has written on these topics.
AI for Good seems like a robustly good career path in many ways, especially for someone interested in AI Safety (which, as you note, you are not). Your direct impact could be anywhere from “providing a good product to paying customers” to “solving the world’s most pressing problems with ML.” You can make a good amount of money and donate a fraction of it. You’ll meet an ambitious network of people, learn the soft skills of business, and receive a widely respected credential, all valuable capital for any career. Crucially, from my perspective, you’d learn how to develop and deploy AI in the real world, which I think could be very helpful when transitioning to a career in AI technical safety research or AI policy. (AI Safety people, agree or disagree that this experience would be useful?)
Do you have further thoughts about how to have an impactful career doing AI for Good? Where are the highest impact positions? How do you enter the field, and what qualifications and skills do you need? How can someone judge for themselves the opportunity for impact in a particular role?
Thank you! It’s inspiring and informative to see someone doing such thorough and independent cause prioritization research for their own career.
Really great post, thank you! You discuss the possibility of "part-time earning to give while simultaneously running side projects" and note that you've chosen to work part-time on a PhD in Computational Healthcare while also working a separate part-time job for earning to give.
Part-time earning to give seems like an interesting possibility I hadn't considered before, mainly because I assumed there are very few part-time jobs that pay well. What has been your experience here? Do you have a unique opportunity that allows you to earn a lot part-time? Perhaps you've worked as a consultant or independent contractor who sets their own hours? What jobs have you considered here? More broadly, would you expect most college-educated people to be able to find part-time work that pays proportionally as well as what they'd earn working full-time? (Not looking for any definitive conclusion on the topic, just your off-the-cuff impressions.)
Thanks again, and good luck with your new career plans!
Three Scenarios for AI Progress
How will AI develop over the next few centuries? Three scenarios seem particularly likely to me:
To clarify my beliefs about AI timelines, I found it helpful to flesh out these concrete "scenarios" by answering a set of closely related questions about how transformative AI might develop:
The potentially useful insight here is that answering one of these questions helps you answer the others. If massive compute is necessary, then TAI will be built by a few powerful governments or corporations, not by a diverse ecosystem of small startups. If TAI isn't achieved for another century, that affects which research agendas are most important today. Follow this exercise for a while, and you might end up with a handful of distinct scenarios, and then you can judge the relative likelihood and timelines of each.
Here's my rough sketch of what each of these mean. [Dumping a lot of rough notes here, which is why I'm posting as a shortform.]
This is pretty rough around the edges, but these three scenarios seem like the key possibilities for the next few centuries that I can see at this point. For the hell of it, I'll give some very weak credences: 10% that we solve superintelligence within decades, 25% that CAIS brings double-digit growth within a century or so, maybe 50% that human progress continues as usual for at least a few centuries, and (at least) 15% that what ends up happening looks nothing like any of these scenarios.
Very interested in hearing any critiques or reactions to these scenarios or the specific arguments within.
This is really persuasive to me, thanks for posting. Previously I’d heard arguments anchoring AGI timelines to the amount of compute used by the human brain, but I didn’t see much reason at all for our algorithms to use the same amount of compute as the brain. But you point to the example of flight, where all the tricky issues of how to get something to fly were quickly solved almost as soon as we built engines as powerful as birds. Now I’m wondering if this is a pattern we’ve seen many times — if so, I’d be much more open to anchoring AI timelines on the amount of compute used by the human brain (which would mean significantly shorter timelines than I’d currently expect).
So my question going forward would be: What other machines have humans built to mimic the functionality of living organisms? In these cases, do we see a single factor driving most progress, like engine power or computing power? If so, do machines perform as well as living organisms with similar levels of this key variable? Or, does the human breakthrough to performing on-par with evolution come at a more random point, driven primarily by one-off insights or by a bunch of non-obvious variables?
Within AI, you could examine how much compute it took to mimic certain functions of organic brains. How much compute does it take to build human-level speech recognition or image classification, and how does that compare to the compute used in the corresponding areas of the human brain? (Joseph Carlsmith’s OpenPhil investigation of human-level compute covered similar territory and might be helpful here, but I haven’t gone through it in enough detail to know.)
Does transportation offer other examples? Analogues between boats and fish? Land travel and fast mammals?
I’m having trouble thinking of good analogues, but I’m guessing they have to exist. AI Impacts’ discontinuities investigation feels like a similar type of question about examples of historical technological progress, and it seems to have proven tractable to research and useful once answered. I’d be very interested in further research in this vein — anchoring AGI timelines to human compute estimates seems to me like the best argument (even the only good argument?) for short timelines, and this post alone makes those arguments much more convincing to me.