
lukeprog's Comments

Forecasting Newsletter: April 2020

The headline looks broken in my browser; it renders like this:

/(Good Judgement?[^]*)|(Superforecast(ing|er))/gi

The last explicit probabilistic prediction I made was probably a series of forecasts on my most recent internal Open Phil grant writeup, since our internal writeup template prompts the grant investigator for explicit probabilistic forecasts about the grant. But it could easily have been elsewhere; I fairly often make probabilistic forecasts just in conversation, or in GDoc/Slack comments. For those I usually spend less time pinning down a totally precise formulation of the forecast statement, since the point is to quickly indicate to others roughly what my views are rather than to establish my calibration across a large number of precisely stated forecasts.

Forecasting Newsletter: April 2020

Note that the headline ("Good Judgement Project: gjopen.com") is still confusing, since it seems to be saying GJP = GJO. The thing that ties the items under that headline is that they are all projects of GJI. Also, "Of the questions which have been added recently" is misleading since it seems to be about the previous paragraph (the superforecasters-only questions), but in fact all the links go to GJO.

Forecasting Newsletter: April 2020

Nice to see a newsletter on this topic!

Clarification: The GJO coronavirus questions are not funded by Open Phil. The thing funded by Open Phil is this dashboard (linked from our blog post) put together by Good Judgment Inc. (GJI), which runs both GJO (where anyone can sign up and make forecasts) and their Superforecaster Analytics service (where only superforecasters can make forecasts). The dashboard Open Phil funded uses the Superforecaster Analytics service, not GJO. Also, I don't think Tetlock is involved in GJO (or GJI in general) much at all these days, but GJI is indeed the commercial spinoff from the Good Judgment Project (GJP), which Tetlock & Mellers led, which won the IARPA ACE forecasting competition, and which produced the research covered in Tetlock's book Superforecasting.

Insomnia with an EA lens: Bigger than malaria?

I wrote up some thoughts on CBT-I and the evidence base behind it here.

Information security careers for GCR reduction

Is it easy to say more about (1) which personality/mindset traits might predict infosec fit, and (2) infosec experts' objections to typical GCR concerns of EAs?

Rethink Priorities 2019 Impact and Strategy

FWIW I was substantially positively surprised by the amount and quality of the work you put out in 2019, though I didn't vet any of it in depth. (And prior to 2019 I think I wasn't aware of Rethink.)

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

FWIW, it's not clear to me that AI alignment folks with different agendas have put less effort into (or made less progress on) understanding the motivations for other agendas than is typical in somewhat-analogous fields. Like, MIRI leadership and Paul have put >25 (and maybe >100, over the years?) hours into arguing about the merits of their differing agendas (in person, on the web, in GDocs comments). My impression is that central participants in those conversations (e.g. Paul, Eliezer, Nate) can pass each other's ideological Turing tests reasonably well on a fair number of sub-questions and down 1-3 levels of "depth" (depending on the sub-question), and that this might be more effort and better ITT performance than is typical for "research agenda motivation disagreements" in small niche fields that are comparable on some other dimensions.
