lukeprog


Why AI is Harder Than We Think - Melanie Mitchell

I wish "relative skeptics" about deep learning capability timelines such as Melanie Mitchell and Gary Marcus would move beyond qualitative arguments and try to build models and make quantified predictions about how quickly they expect things to proceed, a la Cotra (2020) or Davidson (2021) or even Kurzweil. As things stand today, I can't even tell whether Mitchell or Marcus have more or less optimistic timelines than the people who have made quantified predictions, including e.g. authors from top ML conferences.

International cooperation as a tool to reduce two x-risks.

I think EAs focused on x-risks are typically pretty gung-ho about improving international cooperation and coordination, but it's hard to know what would actually be effective for reducing x-risk, rather than just e.g. writing more papers about how cooperation is desirable. There are a few ideas I'm exploring in the AI governance area, but I'm not sure how valuable and tractable they'll look upon further inspection. If you're curious, some concrete ideas in the AI space are laid out here and here.

EA Debate Championship & Lecture Series

This seems great to me, please do more.

Strong Longtermism, Irrefutability, and Moral Progress

I know I'm late to the discussion, but…

I agree with AGB's comment, but I would also like to add that strong longtermism seems like a moral perspective with much less "natural" appeal, and thus much less ultimate growth potential, than neartermist EA causes such as global poverty reduction or even animal welfare.

For example, I'm a Program Officer in the longtermist part of Open Philanthropy, but >80% of my grantmaking dollars go to people who are not longtermists (who are nevertheless doing work I think is helpful for certain longtermist goals). Why? Because there are almost no longtermists anywhere in the world, and even fewer who happen to have the skills and interests that make them a fit for my particular grantmaking remit. Meanwhile, Open Philanthropy makes far more grants in neartermist causes (though this might change in the future), in part because there are tons of people who are excited about doing cost-effective things to help humans and animals who are alive and visibly suffering today, and not so many people who are excited about trying to help hypothetical people living millions of years in the future.

Of course to some degree this is because longtermism is fairly new, though I would date it at least as far back as Bostrom's "Astronomical Waste" paper from 2003.

I would also like to note that many people I speak to who identify (like me) as "primarily longtermist" have sympathy (like me) for something like "worldview diversification," given the deep uncertainties involved in the quest to help others as much as possible. So e.g. while I spend most of my own time on longtermism-motivated efforts, I also help out with other EA causes in various ways (e.g. this giant project on animal sentience), and I link to or talk positively about GiveWell top charities a lot, and I mostly avoid eating non-AWA meat, and so on… rather than treating these non-longtermist priorities as a rounding error. Of course some longtermists take a different approach than I do, but I'm hardly alone in my approach.

Forecasting Newsletter: January 2021

Cool search engine for probabilities! Any chance you could add Hypermind?

Informational Lobbying: Theory and Effectiveness

Thanks for this!

FWIW, I'd love to see a follow-up review on lobbying Executive Branch agencies. They're less powerful than Congress, but often more influenceable as well, and can sometimes be the most relevant target of lobbying if you're aiming for a very specific goal (that is too "in the weeds" to be addressed directly in legislation). I found Godwin et al. (2012) helpful here, but I haven't read much else. Interestingly, Godwin et al. find that some of the conclusions from Baumgartner et al. (2009) about Congressional lobbying don't hold for agency lobbying.

Forecasting Newsletter: May 2020

Thanks!

Some additional recent stuff I found interesting:

  • This summary of US and UK policies for communicating probability in intelligence reports.
  • Apparently Niall Ferguson’s consulting firm makes & checks some quantified forecasts every year: “So at the beginning of each year we at Greenmantle make predictions about the year ahead, and at the end of the year we see — and tell our clients — how we did. Each December we also rate every predictive statement we have made in the previous 12 months, either “true”, “false” or “not proven”. In recent years, we have also forced ourselves to attach probabilities to our predictions — not easy when so much lies in the realm of uncertainty rather than calculable risk. We have, in short, tried to be superforecasters.”
  • Review of some failed long-term space forecasts by Carl Shulman.
  • Some early promising results from DARPA SCORE.
  • Assessing Kurzweil predictions about 2019: the results
  • Bias, Information, Noise: The BIN Model of Forecasting is a pretty interesting result if it holds up. Another explanation by Mauboussin here. Supposedly this is what Kahneman's next book will be about; HBR preview here.
  • GJP2 is now recruiting forecasters.
Forecasting Newsletter: April 2020

The headline looks broken in my browser. It looks like this:

/(Good Judgement?[^]*)|(Superforecast(ing|er))/gi

The last explicit probabilistic prediction I made was probably a series of forecasts in my most recent internal Open Phil grant writeup, since our internal writeup template prompts the grant investigator for explicit probabilistic forecasts about the grant. But it could easily have been elsewhere; I somewhat often make probabilistic forecasts just in conversation, or in GDoc/Slack comments. For those, I usually spend less time pinning down a totally precise formulation of the forecasting statement, since the point is to quickly indicate to others roughly what my views are, rather than to establish my calibration across a large number of precisely stated forecasts.
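For anyone curious what "establishing calibration" amounts to mechanically, here's a minimal, generic sketch of scoring a batch of probabilistic forecasts against outcomes. Everything here (the function names, the example forecasts) is hypothetical and illustrative; it's not from any Open Phil template:

```python
# Score a set of probabilistic forecasts, each recorded as
# (stated probability, outcome), where outcome is 1 if the
# event happened and 0 if it didn't.

def brier_score(forecasts):
    """Mean squared error between stated probability and outcome.
    0.0 is perfect; always answering 0.5 scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

def calibration_buckets(forecasts, width=0.1):
    """Group forecasts into probability buckets; for each bucket,
    compare mean stated probability to observed frequency."""
    buckets = {}
    for p, o in forecasts:
        key = min(int(p / width), int(1 / width) - 1)
        buckets.setdefault(key, []).append((p, o))
    return {
        key: (
            sum(p for p, _ in fs) / len(fs),  # mean stated probability
            sum(o for _, o in fs) / len(fs),  # observed frequency
            len(fs),                          # forecasts in this bucket
        )
        for key, fs in sorted(buckets.items())
    }

# Hypothetical forecast record, purely for illustration:
forecasts = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1), (0.3, 0), (0.2, 0)]
print(round(brier_score(forecasts), 3))
```

A well-calibrated forecaster's buckets show observed frequencies close to the stated probabilities; the caveat in the comment above is just that this only works once the forecast statements are precise enough to resolve unambiguously.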
