Along with my co-founder, Marcus A. Davis, I run Rethink Priorities. Previously, I was a professional data scientist.
Apparently you can just edit the tag, so I did!
"Scalably involving people" might be better
Worth flagging that we at Rethink Priorities have had no trouble finding many well-qualified candidates when we do our operations hiring.
Strong middletermism suggests that the best actions are exclusively those that aim to influence how the next 137 years go (and not a year longer!).
We know that compromising between smart people is a good decision procedure (see Aumann's agreement theorem; note also that ensemble models generally outperform any individual model). Given that many smart people support near-term causes and many smart people support longtermist causes, I suggest that the highest-impact causes will be found in what I call middletermism.
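The ensemble claim can be illustrated with a toy simulation (all numbers here are hypothetical, made up purely for illustration): if several forecasters are individually unbiased but noisy, the average of their predictions tends to have lower error than any single forecaster.

```python
import random

random.seed(0)

true_value = 10.0        # the quantity everyone is trying to predict (assumed)
n_predictors = 5         # size of the ensemble (assumed)
n_trials = 1000

individual_err = 0.0
ensemble_err = 0.0
for _ in range(n_trials):
    # Each predictor is unbiased but noisy (Gaussian noise, sd = 2).
    preds = [true_value + random.gauss(0, 2) for _ in range(n_predictors)]
    # Error of one arbitrary individual predictor vs. error of the ensemble mean.
    individual_err += abs(preds[0] - true_value)
    ensemble_err += abs(sum(preds) / n_predictors - true_value)

print(individual_err / n_trials, ensemble_err / n_trials)
```

With these assumptions the ensemble's average error comes out markedly lower than the individual's, which is the intuition behind "compromising between smart people": averaging cancels out independent noise.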
Another important issue is that our predictive track record gets worse as a function of time: the further out we forecast, the larger the error. Insofar as we are trying to balance expected impact against the robustness of our impact calculations, there must be a point at which growing error cancels out expected impact. In my calculations, this occurs exactly 137 years from now. Thus middletermism focuses only on these 137 years.
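The structure of this argument can be sketched in a few lines (a deliberately crude toy model; the impact and error-growth parameters are hypothetical, and nothing here reproduces the actual calculation behind the 137-year figure): assume a constant expected impact per year and a forecast error that grows linearly with the horizon, then find the last year in which impact still exceeds error.

```python
# Toy model: constant expected impact per year, forecast error growing
# linearly with the horizon. The "middletermist horizon" is the last
# year in which estimated impact still exceeds estimated error.
# All parameter values are hypothetical illustrations, not real estimates.

IMPACT_PER_YEAR = 1.0   # assumed constant expected impact per year
ERROR_GROWTH = 0.01     # assumed per-year growth in forecast error

def horizon(impact=IMPACT_PER_YEAR, error_growth=ERROR_GROWTH):
    t = 0
    # Advance while impact at the next year still exceeds its forecast error.
    while impact > error_growth * (t + 1):
        t += 1
    return t

print(horizon())
```

With these made-up parameters the crossover lands at year 99; a different assumed error-growth rate would shift the horizon (for instance, toward 137).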
Rethink Priorities is pretty close to this! We've now done message testing for many orgs across cause areas, including the Centre for Effective Altruism, Will MacAskill, Open Phil, the Centre for the Study of Existential Risk, the Humane Society of the United States, The Humane League, Mercy for Animals, and various EA-aligned lobbyists. We have the skills and resources to do this well and already have a well-built pipeline for producing this kind of work.
We'd be happy to consider doing more of this work for others in EA and for the EA movement as a whole!
Is the Global Health and Development Fund still going to be just Elie for the foreseeable future? (Not that there's anything wrong with that.)
Why the secrecy around the identity of the guest managers?
I doubt it will ever be standard procedure in every opinion piece.
Meaning you think there is a 95% chance that, five years from now, it will not be the case that The New York Times, The Atlantic, and The Washington Post collectively include a quantitative, testable forecast in at least one fifth of their articles?
...Just kidding. Thanks for the well-written and illuminating answer.
Why don't more journalists make concrete, verifiable, quantitative forecasts and then retrospectively assess their own accuracy, as you did here (see also more examples)? Is there anything that could be done to encourage you and other journalists to do more of that?
Similar to "Effective Altruism is Not a Competition"