lukeprog

Comments

Informational Lobbying: Theory and Effectiveness

Thanks for this!

FWIW, I'd love to see a follow-up review on lobbying Executive Branch agencies. They're less powerful than Congress, but often more influenceable as well, and can sometimes be the most relevant target of lobbying if you're aiming for a very specific goal (that is too "in the weeds" to be addressed directly in legislation). I found Godwin et al. (2012) helpful here, but I haven't read much else. Interestingly, Godwin et al. find that some of the conclusions from Baumgartner et al. (2009) about Congressional lobbying don't hold for agency lobbying.

Forecasting Newsletter: May 2020

Thanks!

Some additional recent stuff I found interesting:

  • This summary of US and UK policies for communicating probability in intelligence reports.
  • Apparently Niall Ferguson’s consulting firm makes & checks some quantified forecasts every year: “So at the beginning of each year we at Greenmantle make predictions about the year ahead, and at the end of the year we see — and tell our clients — how we did. Each December we also rate every predictive statement we have made in the previous 12 months, either “true”, “false” or “not proven”. In recent years, we have also forced ourselves to attach probabilities to our predictions — not easy when so much lies in the realm of uncertainty rather than calculable risk. We have, in short, tried to be superforecasters.”
  • Review of some failed long-term space forecasts by Carl Shulman.
  • Some early promising results from DARPA SCORE.
  • Assessing Kurzweil predictions about 2019: the results.
  • Bias, Information, Noise: The BIN Model of Forecasting is a pretty interesting result if it holds up. Another explanation by Mauboussin here. Supposedly this is what Kahneman's next book will be about; HBR preview here.
  • GJP2 is now recruiting forecasters.
Forecasting Newsletter: April 2020

The headline looks broken in my browser. It looks like this:

/(Good Judgement?[^]*)|(Superforecast(ing|er))/gi

The last explicit probabilistic predictions I made were probably a series of forecasts on my most recent internal Open Phil grant writeup, since our internal writeup template prompts the grant investigator for explicit probabilistic forecasts about the grant. But they could easily have been elsewhere; I fairly often make probabilistic forecasts in conversation or in GDoc/Slack comments, though for those I usually spend less time pinning down a precise formulation of the forecasting statement, since the point is to quickly indicate roughly what my views are rather than to establish my calibration across a large number of precisely stated forecasts.

Forecasting Newsletter: April 2020

Note that the headline ("Good Judgement Project: gjopen.com") is still confusing, since it seems to say GJP = GJO. What ties together the items under that headline is that they are all projects of GJI. Also, "Of the questions which have been added recently" is misleading, since it seems to refer to the previous paragraph (the superforecasters-only questions), but in fact all the links go to GJO.

Forecasting Newsletter: April 2020

Nice to see a newsletter on this topic!

Clarification: The GJO coronavirus questions are not funded by Open Phil. The thing funded by Open Phil is this dashboard (linked from our blog post) put together by Good Judgment Inc. (GJI), which runs both GJO (where anyone can sign up and make forecasts) and their Superforecaster Analytics service (where only superforecasters can make forecasts). The dashboard Open Phil funded uses the Superforecaster Analytics service, not GJO. Also, I don't think Tetlock is involved in GJO (or GJI in general) much at all these days, but GJI is indeed the commercial spinoff from the Good Judgment Project (GJP) that Tetlock & Mellers led and which won the IARPA ACE forecasting competition and resulted in the research covered in Tetlock's book Superforecasting.

Insomnia with an EA lens: Bigger than malaria?

I wrote up some thoughts on CBT-I and the evidence base behind it here.

Information security careers for GCR reduction

Is it easy to say more about (1) which personality/mindset traits might predict infosec fit, and (2) infosec experts' objections to typical GCR concerns of EAs?

Rethink Priorities 2019 Impact and Strategy

FWIW I was substantially positively surprised by the amount and quality of the work you put out in 2019, though I didn't vet any of it in depth. (And prior to 2019 I think I wasn't aware of Rethink.)

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

FWIW, it's not clear to me that AI alignment folks with different agendas have put less effort into (or made less progress on) understanding the motivations for other agendas than is typical in other somewhat-analogous fields. For example, MIRI leadership and Paul have put >25 (and maybe >100, over the years?) hours into arguing about the merits of their differing agendas (in person, on the web, in GDocs comments), and my impression is that the central participants in those conversations (e.g. Paul, Eliezer, Nate) can pass each other's ideological Turing tests reasonably well on a fair number of sub-questions and down 1-3 levels of "depth" (depending on the sub-question), which might be more effort and better ITT performance than is typical for "research agenda motivation disagreements" in small niche fields that are comparable on some other dimensions.
