

Thanks for writing this up, it's very useful!

I'm curious about model 3 - the policy evaluation model.

I think this point is particularly insightful: "Conditional forecasting would also require policymakers to identify discrete and falsifiable goals of their policies, which would already be a major process improvement."

But I don't quite understand the thinking behind the following two points:

  • "generating discrete probabilities about the likely success of certain policy tools would incentivize decision-makers to engage with the logic underlying relevant forecasts." - how exactly do you see this model changing the incentives policymakers face, relative to the status quo (which includes conditional forecasts sometimes being generated on the likes of Metaculus etc)?

  • "It would also provide the foundations for more active learning in the policymaking sphere: policymakers would be able to improve their policymaking skills by studying which of their interventions succeeded and failed." - what's the process you envision here that enables active learning? If policymakers themselves are the ones making forecasts, and can see how their predictions compare to actual outcomes, then I can see where the learning comes in. But if policymakers are still just consumers of forecasts in this model, I don't see how the supply of conditional forecasts would itself support policymakers' learning.

Thanks in advance for any additional detail you can provide on this proposal!

Exciting stuff, thanks for the post!

If possible, could you expand on this bit from idea 2? "Note that I think existing prediction and evaluation setups are currently not ready to do this well. Among others, we need a) better engineering setups to do forecasting at scale, and b) better ontologies for cleaner evaluations at scale"

In particular, what do you see as the scale-limiting characteristics of platforms like Metaculus? Lack of incentives, or something else?

And what do you mean by "better ontologies for cleaner evaluations"? (E.g. describing an existing ontology and its limitations would be helpful)


I recently came across this work by researchers at the University of Melbourne: https://www.tandfonline.com/doi/full/10.1111/ajpy.12229

They propose 'enlightened compassion' as a distinct personality factor which seems to me to be pretty similar to the 'expansive altruism' construct mentioned in this forum post. The Melbourne Uni researchers find that 'enlightened compassion' is related to a combination of Agreeableness and Openness to Experience.

Answer by dilhanperera, Apr 11, 2021

Great question! These are just some initial instincts - I'm not sure any of these questions are especially neglected, and they are too broad to be research questions, but I'm curious to hear what others think:

  • How can we [most effectively - implicit in all questions below] expand individuals' moral circles (including over temporal dimensions), particularly during key choice points (e.g. choosing what to eat, whether/where to donate, what/who to vote for)?
  • How can we improve political decision-making (i.e. ensuring political choices are consistent with individuals' values)?
  • How can we improve judgement and decision-making under uncertainty (including deep uncertainty) more generally? And do these differ for individuals vs. small groups vs. large groups/organisations?
  • How can we ensure individuals and groups behave more consistently with their decisions/intentions?
  • How can we increase effective democratic behaviours more generally (i.e. behaviours outside of well-defined political decisions that are generally regarded to contribute towards a well-functioning democracy)?
  • How can we increase cooperation between individuals and groups, particularly individuals and groups that are distant (physically, temporally, socially, etc.)?