Two good examples mentioned by Ajeya on the 80,000 Hours podcast: eyes vs cameras, and leaves vs solar panels.
Thanks, that makes sense. Freelancing in software development and tech seems to me like a reasonable path to a well-paid part-time gig for many people. I wonder what other industries or backgrounds lend themselves towards these kinds of jobs.
While this is fascinating, I’d be most interested in your views on AI for Good, healthcare, and the intersection between the two, as potential EA cause areas.
Your views, as I understand them (and please correct me where I’m wrong): You see opportunity for impact in applying AI and ML techniques to solve real-world problems. Examples include forecasting floods and earthquakes, or analyzing digital data on health outcomes. You’re concerned that there might already be enough talented people working on the most impactful projects, thereby reducing your counterfactual impact, but you see opportunities for outsize impact when working on a particularly important problem or making a large counterfactual contribution as an entrepreneur.
Without having done a fraction of the research you clearly have, I’m hopeful that you’re right about health. Anti-aging research and pandemic preparedness seem to be driving EA interest in healthcare and medicine more broadly, and I’m wondering if more mainstream careers in medical research and public health might be quite impactful, if only from a near-term perspective. Would be interested in your thoughts on which problems are high impact, how to identify impactful opportunities when you see them, and perhaps the overall potential of the field for EA — as well as anything anyone else has written on these topics.
AI for Good seems like a robustly good career path in many ways, especially for someone interested in AI Safety (which, as you note, you are not). Your direct impact could be anywhere from “providing a good product to paying customers” to “solving the world’s most pressing problems with ML.” You can make a good amount of money and donate a fraction of it. You’ll meet an ambitious network of people, learn the soft skills of business, and receive a widely respected credential — valuable capital for any career. Crucially, from my perspective, you’d learn how to develop and deploy AI in the real world, which I think could be very helpful when transitioning to a career in AI technical safety research or AI policy. (AI Safety people, agree or disagree that this experience would be useful?)
Do you have further thoughts about how to have an impactful career doing AI for Good? Where are the highest-impact positions? How do you enter the field, and what qualifications and skills do you need? How can someone judge for themselves the opportunity for impact in a particular role?
Thank you! It’s inspiring and informative to see someone doing such thorough and independent cause prioritization research for their own career.
Really great post, thank you! You discuss the possibility of "part-time earning to give while simultaneously running side projects" and note that you've chosen to work part-time on a PhD in Computational Healthcare while also working a separate part-time job for earning to give.
Part-time earning to give seems like an interesting possibility I hadn't considered before, mainly because I assumed there are very few part-time jobs that pay well. What has been your experience here? Do you have a unique opportunity that allows you to earn a lot part-time? Perhaps you've worked as a consultant or independent contractor who sets their own hours? What jobs have you considered here? More broadly, would you expect most college-educated people to be able to find part-time work that pays proportionally as well as what they'd earn working full-time? (Not looking for any definitive conclusion on the topic, just your off-the-cuff impressions.)
Thanks again, and good luck with your new career plans!
Three Scenarios for AI Progress
How will AI develop over the next few centuries? Three scenarios seem particularly likely to me:
To clarify my beliefs about AI timelines, I found it helpful to flesh out these concrete "scenarios" by answering a set of closely related questions about how transformative AI might develop:
The potentially useful insight here is that answering one of these questions helps you answer the others. If massive compute is necessary, then TAI will be built by a few powerful governments or corporations, not by a diverse ecosystem of small startups. If TAI isn't achieved for another century, that affects which research agendas are most important today. Follow this exercise for a while, and you might end up with a handful of distinct scenarios, and then you can judge the relative likelihood and timelines of each.
Here's my rough sketch of what each of these means. [Dumping a lot of rough notes here, which is why I'm posting as a shortform.]
This is pretty rough around the edges, but these three scenarios seem like the key possibilities for the next few centuries that I can see at this point. For the hell of it, I'll give some very weak credences: 10% that we solve superintelligence within decades, 25% that CAIS brings double-digit growth within a century or so, maybe 50% that human progress continues as usual for at least a few centuries, and (at least) 15% that what ends up happening looks nothing like any of these scenarios.
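As a purely illustrative sketch of that last step, here's how one might combine scenario credences with rough per-scenario timelines into a single distribution over when transformative AI arrives. The scenario weights below are the credences above; the per-scenario timeline parameters are placeholder numbers invented for the example, not estimates from this post.

```python
import numpy as np

# Scenario weights, taken from the credences above: superintelligence within
# decades, CAIS-style growth within a century, business as usual for a few
# centuries, or something else entirely.
weights = {"superintelligence": 0.10, "cais": 0.25, "business_as_usual": 0.50, "other": 0.15}

# Hypothetical (median years from now, spread) for TAI under each scenario.
# Placeholder values purely for illustration.
timeline_params = {
    "superintelligence": (30, 15),
    "cais": (80, 40),
    "business_as_usual": (250, 100),
    "other": (150, 100),
}

rng = np.random.default_rng(0)
samples = []
for name, weight in weights.items():
    median, spread = timeline_params[name]
    n = int(weight * 100_000)  # sample each scenario in proportion to its credence
    samples.append(rng.normal(median, spread, n).clip(min=1))
arrival_years = np.concatenate(samples)

# Implied probability of TAI within various horizons under this mixture.
for horizon in (25, 50, 100, 200):
    print(f"P(TAI within {horizon} years) ~ {(arrival_years <= horizon).mean():.2f}")
```

The point isn't the particular numbers; it's that once the scenarios are explicit, an overall timeline falls out mechanically from the per-scenario credences and timelines, and you can see which assumptions are doing the work.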
Very interested in hearing any critiques or reactions to these scenarios or the specific arguments within.
This is really persuasive to me, thanks for posting. Previously I’d heard arguments anchoring AGI timelines to the amount of compute used by the human brain, but I didn’t see much reason at all for our algorithms to use the same amount of compute as the brain. But you point to the example of flight, where all the tricky issues of how to get something to fly were quickly solved almost as soon as we built engines as powerful as birds. Now I’m wondering if this is a pattern we’ve seen many times — if so, I’d be much more open to anchoring AI timelines on the amount of compute used by the human brain (which would mean significantly shorter timelines than I’d currently expect).
So my question going forward would be: What other machines have humans built to mimic the functionality of living organisms? In these cases, do we see a single factor driving most progress, like engine power or computing power? If so, do machines perform as well as living organisms with similar levels of this key variable? Or, does the human breakthrough to performing on-par with evolution come at a more random point, driven primarily by one-off insights or by a bunch of non-obvious variables?
Within AI, you could examine how much compute it took to mimic certain functions of organic brains. How much compute does it take to build human-level speech recognition or image classification, and how does that compare to the compute used in the corresponding areas of the human brain? (Joseph Carlsmith’s OpenPhil investigation of human-level compute covered similar territory and might be helpful here, but I haven’t gone through it in enough detail to know.)
Does transportation offer other examples? Analogues between boats and fish? Land travel and fast mammals?
I’m having trouble thinking of good analogues, but I’m guessing they have to exist. AI Impacts’ discontinuities investigation feels like a similar type of question about examples of historical technological progress, and it seems to have proven tractable to research and useful once answered. I’d be very interested in further research in this vein — anchoring AGI timelines to human compute estimates seems to me like the best argument (even the only good argument?) for short timelines, and this post alone makes those arguments much more convincing to me.
What impact do you think you were able to have as a State Rep? Are there any specific projects or policies you’re particularly proud of?
Yes, looks like LTFF is also looking for funding. Edited, thanks.
Fascinating that very few top AI Safety organizations are looking for more funding. By my count, only 4 of these 17 organizations are even publicly requesting donations this year: three independent research groups (GCRI, CLR, and AI Impacts) and an operations org (BERI). Across the board, it doesn't seem like AI Safety is very funding constrained.
Based on this report, I think the best donation opportunity among these orgs is BERI, the Berkeley Existential Risk Initiative. Larks says that BERI "provides support to existential risk groups at top universities to facilitate activities (like hiring engineers and assistants) that would be hard within the university context." According to BERI's blog post requesting donations, this support includes:
BERI is also supporting new existential risk research groups at other top universities, including:
Donating to BERI seems to me like the only way to give more money to AI Safety researchers at top universities. FHI, CHAI, and CSER aren't publicly seeking donations seemingly because anything you directly donate might end up either (a) replacing funding they would've received from their university or other donors, or (b) being limited in terms of what they're allowed to spend it on. If that's true, then the only way to counterfactually increase funding at these groups is through BERI.
If you would like, click here to donate to BERI.
Thank you for sharing this, really love the Main Conclusions here. As usual with my comments, most of what you’re saying makes sense to me, but I’d like to focus on one quibble about the presentation of your conclusions.
I think Figure 2 in the report could easily be misinterpreted as strong evidence for a conclusion you later disavow: that by far the most important lifestyle choice for reducing your CO2 emissions is whether you have another child. The Key Takeaways section begins with this striking chart, where the first bar is taller than all the rest added up, but the body paragraphs give context and caveats before finishing on a more sober conclusion. The conclusion makes perfect sense to me, but it’s the opposite of what I would’ve guessed looking at the first chart in the section. If you’re most confident in the estimates that account for government policy, you could make them alone your first chart, and only discuss the other (potentially misleading) estimates later.
I probably only noticed this because you’re discussing such a hot-button issue. Footnotes work for dry academic questions, but when the question is having fewer kids to reduce carbon emissions, I start thinking about how Twitter and CNN would read this.
Anyways, hope that’s helpful, feel free to disagree, and thanks for the great research!
Yeah, it’s kinda hilarious. Speaking so fast that your opponents can’t follow your arguments and therefore lose the round is common practice in some forms of competitive debate. But in other debate categories, using this tactic would immediately lose you the round. In my own experience of high school debate, the quality of competitive debate depends very heavily on the particular category.
The video above is Policy Debate, the oldest form of debate, which degenerated decades ago into unintelligible speed reading and arguments that every policy would result in worldwide nuclear annihilation. In the 1980s, the National Speech and Debate Association instituted a new form of debate called Lincoln Douglas that attempted to reground debate in commonsense questions about moral values; but LD has also fallen victim to speed reading and even galaxy-brained “kritiks” arguing that the structure of debate itself is racist or sexist and therefore that the round should be abandoned.
Public Forum debate, invented in 2002 as an antidote to Lincoln Douglas, is IMO a very healthy and educational form of debate. Here is the final round of the 2018 national championship (starting at 4:05) on the resolution, “On balance, the benefits of United States Participation in the North American Free Trade Agreement outweigh the consequences.” https://m.youtube.com/watch?v=MUnyLbeu7qU&feature=youtu.be
British Parliamentary debate is another format that, in my experience, is more civil and less “game-able” than other forms of debate (though Harrison D disagrees below, with specifics about its pitfalls). One key difference is that, while Public Forum allows and encourages debaters to spend weeks researching and debating a single specific resolution, Parliamentary debate typically involves generalized preparation on a subject or theme and only reveals the specific resolution a few minutes before the round begins. Because of this, I think Public Forum is more educational for debaters, but Parliamentary is probably easier to run as a one-off tournament because debaters won’t be expected to have done as much preparation.
Extemporaneous Speaking is another category involving less preparation, where participants are asked a question about current affairs or politics and have 30 minutes to prepare a 7-minute off-the-cuff speech. There is no “opponent” in Extemp, which perhaps limits the level of discourse, but it might be easy to introduce EA-related topics because participants are expected to be conversant in a wide range of topic areas.
On the whole, I’m very glad to see this EA debate tournament being run, and would be very excited to see further work bringing EA topics into debate. I can understand why many people might find some debate tactics toxic and counterproductive, particularly in categories like Policy and LD, but I think this is a failure of specific categories and tactics, not an indictment of adversarial debate as a whole. Learning the best arguments for both sides of a resolution certainly teaches a bit of an “arguments as soldiers” approach, but I believe the greater effect is to lead debaters to real truths about which arguments are stronger and to improve their personal understanding of the issues. In future EA debate events, I would only suggest that organizers be very conscious of these standards and norms when choosing a specific category of debate.