Odyssean Institute

@ Odyssean Institute
52 karma · Joined Nov 2023
www.odysseaninstitute.org

Bio

The Odyssean Institute (OI) is a research, advocacy, and experimental think tank founded to combine complexity modelling, expert elicitation of judgement, and deliberative mechanisms, enabling more comprehensive, epistemically rigorous, and legitimate policymaking on resilience, existential, and catastrophic risks.

Posts: 1

Comments: 4

Hi Will,

What is especially interesting here is your focus on an all-hazards approach to Grand Challenges. Improved governance has the potential to influence all cause areas: long-term and short-term causes, x-risks, and s-risks.

Here at the Odyssean Institute, we’re developing a novel approach to these deep questions of governing Grand Challenges. We’re currently running our first horizon scan, on tipping points in global catastrophic risk, and will use it as the first step of a longer-term process that will include Decision Making under Deep Uncertainty (DMDU, developed at RAND) and a deliberative democratic jury or assembly. In our White Paper on the Odyssean Process, we outlined how combining these would be a valuable contribution to avoiding short-termist thinking in policy formulation around GCRs. We’re happy to see you and OpenAI taking a keen interest in this flourishing area of deliberative democratic governance!

We are highly encouraged that you see it as “of comparable importance as AI alignment, not dramatically less tractable, and is currently much more neglected. The marginal cost-effectiveness of work in this area therefore seems to be even higher than marginal work on AI alignment.” Despite this, the work remains neglected even within EA, and would benefit from greater focus and from more resources being allocated to it. We’d welcome a chance to discuss this in more depth with you and others interested in supporting it.

Thank you, James, for such a thorough response! We are always pleased to see recognition of the still-neglected potential of political technologies and existing best practices.

1. We aim to do aspects of the modelling ourselves, but time can be saved by identifying existing open-access models through literature review rather than building them from the data directly. An assembly can be conducted over a few months, and the modelling can be undertaken simultaneously, given effective resourcing to parallelise phases of the Process, which is not unusual for a large policy initiative of any kind. I would emphasise that time is not the bottleneck when deciding, at a meta- or strategic-planning level, how to proceed. It is about building capacity so you are ready to act quickly in future. Badly prepared and well-prepared administrations can both act quickly; the difference lies in what they choose to have ready, and that is what we are aiming for with this work.

2. No one has combined all three components yet; however, we want to test this because we see the success stories and underlying logic of the components as mutually reinforcing. Some syntheses of parts of this exist: horizon scanning and public deliberation were advocated for in Kemp et al.'s Climate Endgame (https://www.pnas.org/doi/10.1073/pnas.2108146119), and numerous examples of DMDU (such as the Room for the River initiative for strategic planning of the Dutch Delta, looking over 40 years into the future) and of participatory processes were reviewed at this year's DMDU 2023 conference. DMDU is a fundamentally different ontology for modelling: rather than attempting consolidative modelling (which is legitimate, but is also often used to confirm biases or flatten a complex space), DMDU accepts that we need to explore many possible futures to increase the likelihood of being prepared for uncertainty. It also works when there are deeper disagreements about which theories or uncertainties to account for (see the illustrative sketch after the references below):

Citations useful for exploratory modelling include:

- https://www.jstor.org/stable/171847 

- https://www.sciencedirect.com/science/article/pii/S2214629622001876 

-  https://forum.effectivealtruism.org/posts/kCBQHWqbk4Nrns8P7/model-based-policy-analysis-under-deep-uncertainty
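
To make the contrast with consolidative modelling concrete, here is a minimal, purely illustrative sketch of the exploratory style of analysis DMDU favours: candidate policies are stress-tested across many sampled futures and compared by regret, rather than optimised against a single forecast. The toy loss model, parameter ranges, and policy levers below are assumptions for illustration only, not part of the Odyssean Process or any specific DMDU toolkit.

```python
# Minimal sketch of exploratory modelling under deep uncertainty:
# instead of optimising against one "best-guess" future, evaluate
# candidate policies across many sampled futures and compare regret.
# The model, parameter ranges, and policies are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def outcome(policy_investment, hazard_rate, damage_scale, effectiveness):
    """Toy loss: residual expected damage plus the cost of the policy."""
    residual_damage = hazard_rate * damage_scale * np.exp(-effectiveness * policy_investment)
    return residual_damage + policy_investment

policies = {"minimal": 1.0, "moderate": 5.0, "aggressive": 15.0}

# Sample many plausible futures rather than committing to one forecast.
n_futures = 10_000
futures = {
    "hazard_rate": rng.uniform(0.01, 0.3, n_futures),
    "damage_scale": rng.uniform(50, 500, n_futures),
    "effectiveness": rng.uniform(0.05, 0.5, n_futures),
}

losses = {
    name: outcome(level, futures["hazard_rate"], futures["damage_scale"], futures["effectiveness"])
    for name, level in policies.items()
}

# Regret: how much worse each policy is than the best choice in that future.
loss_matrix = np.vstack(list(losses.values()))
best_per_future = loss_matrix.min(axis=0)
for name, loss in losses.items():
    regret = loss - best_per_future
    print(f"{name:>10}: worst-case regret = {regret.max():8.1f}, "
          f"95th pct regret = {np.percentile(regret, 95):8.1f}")
```

In practice this kind of analysis is done with dedicated tooling and far richer models, but the underlying move is the same: many futures, robustness metrics, no single 'best guess'.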

3. I would describe the time constraints as pertaining to the tactical-level problem, which is still hindered or enabled by the quality of a pre-emptively prepared 'strategic' resilience plan. In short, due to these uncertainties, the point isn't deciding from scratch as the risk is occurring; it is having done enough horizon scanning and scenario planning, and invested enough in resilient infrastructure, backups, failsafes, public education, etc. beforehand, so that a faster dynamic response, as with DAPPs, can occur and be iterated (think of the Cold War public information films on nuclear war: if best practice for surviving a blast had not been established and publicly communicated in advance, there would be no time to figure it out rapidly on the day). If we don't put in the time, work, and focus that something like our Process requires ahead of time, there will be no easy way to suddenly decide quickly, because the ground won't have been prepared. Similarly, Bayesian inference is only as good as its priors, and it falls apart for complex, low-probability collapse events where we have very little data to learn from (see Nassim Taleb's Statistical Consequences of Fat Tails: https://arxiv.org/ftp/arxiv/papers/2001/2001.10488.pdf).
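
As a small, hedged illustration of that last point (not drawn from the cited paper), the sketch below compares how the sample mean behaves for a thin-tailed versus a fat-tailed distribution: with fat tails, estimates built from limited histories stay erratic, which is exactly why priors and historical frequencies are unreliable guides to rare collapse events. The distributions and parameters are arbitrary choices for demonstration.

```python
# Illustrative sketch: sample means stabilise quickly for a thin-tailed
# distribution but remain erratic for a fat-tailed one, so estimates
# built from limited historical data can badly mislead about rare,
# extreme events.
import numpy as np

rng = np.random.default_rng(1)

def mean_range(sampler, n_trials=200, n_obs=1000):
    """Spread of the sample mean across repeated 'histories' of n_obs draws."""
    means = np.array([sampler(n_obs).mean() for _ in range(n_trials)])
    return means.min(), means.max()

thin = lambda n: rng.normal(loc=1.0, scale=1.0, size=n)   # thin tails
fat = lambda n: rng.pareto(a=1.1, size=n) + 1.0           # fat tails (infinite variance)

print("thin-tailed sample-mean range:", mean_range(thin))
print("fat-tailed  sample-mean range:", mean_range(fat))
```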

Collective intelligence, and broad and diverse sets of experts rather than homogeneous or narrow ones, have something to contribute to developing those models. So does the public, in developing something that is viable to enforce rather than likely to lead to social unrest if attempted. The goal is to set a values-driven strategy that can be adjusted with different operational means when events hit; as such, time is not the only or primary bottleneck. The quality of preparations, which have been sorely lacking, as seen in Britain with the Covid-19 inquiry, is vital.

Taiwan had prior experience with SARS, but also an effective test-and-trace system built up afterwards; Britain still does not. With better foresight, the need for resilience and robust decision making is seen more clearly, and we might now invest better for the next pandemic. The problem wasn't only time: it was political partiality (which can be mitigated somewhat through a wider sample of the public, rather than a slim political party beholden to vested interests) and a lack of transparency that allowed abysmal prioritisation to stay hidden. Matt Boyd's anticipatory governance paper is solid on a range of wider recommendations for good existential and catastrophic risk policymaking, with deliberation and transparency taken as essential (built from a wide sample of expert elicitation and examples of successful governance): https://ojs.victoria.ac.nz/pq/article/view/7313/6467

As to your last point, there is arguably even more of a moral component to x-risks than to issues of obvious social cleavage; it is a fundamental misstep to assume that how we prioritise, whom we identify as more or less vulnerable, and what we want to invest in or pursue despite the risks are not moral questions. Brushing these questions under the carpet and assuming we can address them technocratically is not viable. I would again refer here to a paper by Matt Boyd on the need for deliberation on values, because even the 'how' questions carry deeper 'why' aspects: https://philarchive.org/rec/BOYERN

Rounds of iterated expert, modelling, and citizen deliberation can help price in exactly those externalities. This is why we don't say 'let's save time and do just a deliberation' or 'let's make it more technical and do just a horizon scan'. For these issues, by taking a similar amount of time to a longer policy consultation but drawing in turn on expert, generalist, complexity-modelling, and finally public collective intelligence, we ensure there are phased opportunities to consider multiple levels of knowledge and values. For the greatest risks, we think this is a huge bargain for avoiding stumbling into them, or failing to enforce their mitigation, due to flawed, secretive, and self-sabotaging approaches to the problem spaces.

Thank you for a thoughtful response! Indeed, we have considered these risks, and although for the sake of brevity we haven't delved into the range of experimental designs for an assembly in the White Paper directly, we have in conversations with strategic partners such as Missions Publiques. We agreed that a model similar to the one they have used for certain assemblies would be wise. This involves the public deliberating in isolation first, so they aren't overly primed by the horizon scan, before being introduced to the findings of the panel afterwards. This allows for iterations in the Process without overly influencing the public's initial values and considerations. So, for example, the public would be consulted, help to sculpt the optimalities scan in DMDU, and then incorporate the EEJ panel's findings to refine and deepen engagement. Ultimately the assembly decides, so we are aware of the need to balance these steps to ensure they support rather than subvert this aspect.

DMDU places considerable emphasis on translating findings effectively and on avoiding being bamboozled by models (such as the caution Erica Thompson urges in 'Escape from Model Land'). It is a positive sign that DMDU practitioners are well aware of the 'fallacy of misplaced concreteness' and the risks it poses, and a large part of their methodology is devised to keep this explicit. The education phase of an assembly would also involve carefully familiarising participants with the value and limits of the models used, along with their ranges of uncertainty. It also bears noting that while not all questions will require modelling, certain civilisational risks will need this level of rigour, done carefully and translated with caution.

Hi Mathias,

To see the Process itself, page 16 has a diagram following the tables outlining each of its components, and the subsequent pages contain the commentary.

Your proposed metro case study is broadly accurate as an instance of the Process: for this problem, it would entail horizon scanning key uncertainties or trends in metro line design, such as a comparative analysis of metro redesigns with measurable successes. These findings are then presented to the 100 citizens through an iterative process of identifying their values, possible solutions, and uncertainties, and then using Decision Making under Deep Uncertainty (DMDU) to co-produce actionable pathways that fulfil their multiple criteria.

However, due to the involved nature of the Process, it is geared explicitly towards challenging or 'wicked' problems and existential risks or GCRs, rather than simpler or more trivial policy issues.

We also have an abstract version of this process in the ‘Combining the Pictures’ section. In short: horizon scan a complex issue or set of trends, enable deliberation by a wider sample using the results, and iterate using DMDU to find win-wins within the solution space that may have been neglected, increasing the tractability of the eventual recommendations. We don't want to pick too specific a use case, as we see great value in the generalisability of this across cause areas.

Furthermore, in our commentary on the process, we cite a few concrete examples of deliberation, DMDU, and EEJ and where they have been used, with citations for further reading on their applications. Some examples include the Dutch Delta Commissioner's work for DMDU; Irish, Taiwanese, and American uses of deliberation; and the WHO's uses of expert elicitation and horizon scanning, as well as biorisk and ecological cases. We had to lean a little towards brevity given the range of components involved, so ideally the citations can furnish further detail where we could not due to length considerations.

Finally, in the ‘Our Plans’ section, we also cite the Myriad-EU pilots, whose multi-level, multi-risk assessments were conducted with DMDU to address more of the typical complex-risk and systemic-risk areas we’d look to contribute towards. Hope this helps!