[October 2022 edit: See here for our current progress.]

This project is seeking a director. 

 

Motivation

See Hinges and crises for expanded explanation.

The hinge of history

So far, long-termist efforts to change the trajectory of the world have focused on far-off events, on the assumption that we can foresee some important problem and influence its outcome by working on it for longer than others: we start sooner, we lay the groundwork for future research, we raise awareness, and so on.

Many longtermists propose that we now live at the “hinge of history”, usually understood on the timescale of critical centuries or critical decades. But “hinginess” is likely not constant: some short periods will be significantly more eventful than others. It is also possible that these periods will present even more leveraged opportunities for changing the world’s trajectory.

These “maximally hingey” moments might be best influenced by sustained efforts long before them (as described above). But it seems plausible that in many cases, the best realistic chance to influence them is “while they are happening”, via a concentrated effort at that moment. Some reasons for this: problems with cluelessness are reduced; more resources are being spent on the problem; powerful actors are actually making decisions; and so on.

Crises as times of opportunity

Even if a specific crisis is not a “hinge of history”, crises often bring opportunities to change the established order. For example, policies well outside the Overton window can suddenly become live options, and disciplines and technologies can develop rapidly (think of the sudden intense focus on reproducibility in epidemiology). These effects often persist for decades after the initial event (think of taking off your shoes at the airport, or of face masks in post-SARS countries) - and so shaping the response is of interest to longtermists.

How to impact hinges or crises

Acting effectively during hinges or crises may depend on factors such as "do we have a relevant policy proposal in the drawer?", "do we have a team of experts able to advise?" or “do we have a relevant network?” It is possible to prepare these! 

This document proposes the creation of an “emergency response team” as one sensible preparation. 

We tested the above principles during the COVID pandemic, by launching a good-sized research and policy effort, Epidemic Forecasting (EpiFor). We had some success, with associated members advising legislators and international bodies and the associated research getting into top journals and so reaching millions of people. 

Some takeaways:

  • There are various paths to impact. COVID illustrated an opportunity for small teams: there are worlds where what's needed is being able to think clearly and do research really fast.
  • Often our main bottleneck was project managers, particularly people with both research and PM skills. EpiFor was also only effective because core members already knew each other from FHI RSP or CZEA - and we could have been much more effective if the team had "trained" together before COVID, to sort out differences in management styles, communication, commitment, etc.
     

How to improve longtermists’ emergency response capabilities

Is it possible to fund, train, and sustain a longtermist response team in the absence of a current emergency? Perhaps - but for people we most want on board, the opportunity costs of being thus “benched” might be too high.

A more viable alternative is a reserve, or standing army: a team of researchers and managers who are “on call” for a future emergency, undergoing annual wargaming or similar refreshers to maintain readiness. 
 

What the team ideally should have:

  • existing expertise in many object-level domains
  • ability to decompose problems into delegable parts
  • structural capital (legal backing, processes, and roles in place beforehand)
  • credibility or the ability to signal credibility 
     

ALERT: a rough specification 

(Active Long-termist Emergency Response Team)

What do we need?

  • people
  • training
  • an institution with lots of structural capital
  • credibility or the ability to signal credibility 
  • a clear trigger
     

People

We propose to gather 30 to 50 “reservists”: people who are able to leap to respond to an emergency. They need to commit to fast activation - something like “given 2 days' notice, I am able to switch to this work with >90% probability”.

Together they should cover the usual object-level domains, and also policy, ops, and narrower domains, e.g. regulators, hardware engineers, clinical trial PIs, lawyers and specialised counsel, NGO managers and directors, media.

We then create some structure - perhaps they form teams of 5-10 people centring on one domain, or one vertical.

Key abilities: <learn anything fast>, generalist research, stats, data science, ML, bio, policy, international relations,...

  • domains: bio, medical, ML, policy, international relations, meteorology/disaster, nuclear/chemistry/weapons experts, …
  • general abilities: managerial (project, personal/team), communication (soc. networks, policy makers, journalists), presentation/communication, data science (ingress, presentation online, dataset distribution, SW ops), organizing volunteers (for data collection etc.), statistics and modeling, forecasting (directly, creating questions, outreach), finance/accounting, opsec(?)

These requirements are quite severe, but we think there are enough people. Recently more EA projects have involved consulting, modelling, commercial forecasting, founding organizations, etc., and thus more of us have relevant experience. The obvious categories are academics, consultants, and the self-employed. Also, PhD students are a powerful “reserve army of labour”, since (given advisor approval) they have latitude to switch projects at short notice, often without extra funding. Some grantees may also be interested.
 

Institution

We create a "dormant" institution. It is incorporated, legally up to date, and able to receive and send money, hire people, etc. This could take the form of a series of NGOs under US, EU, and some other country's jurisdiction. (Teams could also be hosted within existing institutions.) It has some amount of liquid capital in its account. It has managers on call and a roster of reservists. It has someone actively watching the world, ready to activate the relevant team when a clearly specified (and relatively low) risk threshold is reached.
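As a toy illustration of what a "clearly specified trigger" might look like in practice, here is a minimal sketch. All the domains, event descriptions, and threshold probabilities below are hypothetical placeholders, not proposed values:

```python
# Toy sketch of trigger rules for activating a reserve team.
# All domains, descriptions, and thresholds are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Trigger:
    domain: str        # e.g. "bio", "nuclear"
    description: str   # the clearly specified event to watch for
    threshold: float   # forecast probability at which we activate


TRIGGERS = [
    Trigger("bio", "novel pathogen with confirmed spread in 3+ countries", 0.20),
    Trigger("nuclear", "state publicly threatens first use within 30 days", 0.10),
]


def should_activate(domain: str, forecast_probability: float) -> bool:
    """Activate the relevant team when any trigger in the domain is crossed.

    Thresholds are deliberately set relatively low: a false alarm costs
    roughly one readiness exercise, while a miss defeats the whole point.
    """
    return any(
        t.domain == domain and forecast_probability >= t.threshold
        for t in TRIGGERS
    )
```

The point of writing triggers down in advance is that activation becomes a pre-committed, checkable decision rather than an in-the-moment judgment call under stress.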

Strong leadership is just as important as having experts. In particular, we need the ability to decompose problems into delegable parts, and more generally we need "structural reserves": all the processes and tools to maintain readiness and absorb more people when we need to. This structure includes coordinators for reservists and volunteers, and an oversupply of managers.

Readiness

The institution organises annual readiness exercises, in sprints of 7-14 days. This brings the team together, to work either on a toy problem or wargame, or on some current minor instance of the problem.

The permanent ALERT staff will track what's going on with members, collecting regular state updates. We could coach reservists towards valuable skills (especially more general ones). 

We could keep a live record of the team's collective network: all the contacts with experts, policymakers, funders, regulators, academia, and influential nodes. (The network is vital to supply credibility, without which the teams cannot influence policy.)

 

Subproject: list of EAs willing to lend their expertise

As well as the reservists, we should have a longer list of people who can help in some way / candidate reservists.

This list would be private (though its existence, and the criteria involved, would be public or semi-public). It would cover people who may be willing and able to assist in their domain during a crisis or a sufficiently impactful/urgent project.

 

What emergencies?

Where can this team help? Where should they? Where shouldn’t they?
 

  • The bar should be high for activating ALERT;
  • The main value of the project may well come from presently unexpected events.
    • But they may be correspondingly hard to identify and coordinate on 
      (e.g. does a diplomatic proposal to loosen a nuclear treaty, where there is some chance for intervention via modelling and scientific argumentation with some parties, qualify?)
       

Real-world and hypothetical examples

Good examples

  • The Covid-19 pandemic - e.g. early scenario forecasting and publishing information, consultancy for organisations / govts, research and modelling, maintaining datasets (e.g. the OxCGRT countermeasures dataset is still very buggy today), developing tools, advising policymakers
  • RAMP - an ad-hoc UK academic effort to review hundreds of COVID preprints within days of their release, to aid the civil service in processing the wave of evidence and pseudoevidence.
  • Massive geomagnetic storm

Uncertain 

  • A rapid vaccine development program (for official/wide deployment) - depends on the timing and possible impact. The bulk of the work in deploying medications seems to lie in handling regulatory aspects (incl. research aspects), narrow-scoped seniority and expertise, manufacturing & supply chains, logistics, deals with target govts, etc. While it may make sense for some EAs to engage, most of this work is likely outside ALERT's scope.
     

Negative examples

  • Humanitarian crises after local/regional natural disasters 
    (EAs do not have much of a comparative advantage, the skills required are different, and there are many orgs already specialising in this.)

What next?

The first role to fill is the secretary, the person who watches the world, maintains the basic institutional requirements, organises the readiness exercises, etc. Please apply here.

See also


This post is intended to be readable as a stand-alone, but it also forms part of a series explaining my part in the EA response to COVID, my reasons for switching from AI alignment work for a full year, and some new ideas the experience gave me. I co-wrote it with Gavin Leech. Thanks to Nora Amman, Ben Pace, Max Dalton and Tomáš Gavenčiak for helpful comments on the draft.


This is a side-note, but I dislike the EA jargon terms hinge/hingey/hinginess and think we should use the terms "critical juncture" and "criticalness" instead. These are the common terms used in political science, international relations and other social sciences. The concept is better theorised and empirically backed than "hingey", doesn't sound silly, and is more legible to a wider community.

Critical Junctures - Oxford Handbooks Online

The Study of Critical Junctures - JSTOR

https://users.ox.ac.uk/~ssfc0073/Writings%20pdf/Critical%20Junctures%20Ox%20HB%20final.pdf 

https://en.wikipedia.org/wiki/Critical_juncture_theory 

I think this is a very cool idea!

To offer some examples of similar things that I've been involved in - the trigger has often been some new regulatory or legislative process. 

  • "woah the EU is going to regulate for AI safety ... we should get some people together to work out how this could be helpful/harmful, whether/how to nudge, what to say, and whether we need someone full-time on this" -> here
  • "woah the US (NIST) is going to regulate for AI safety..." -> here
  • "woah the UK wants to have a new Resilience Strategy..." -> here
  • "woah the UK wants to set up a UK ARPA..." -> here
  • "woah the UN is redoing the Sendai Framework for Disaster Risk Reduction? It would be cool to get existential risk in that" -> here, from Clarissa Rios Rojas

This is the kind of reactive, cross-organisational, quick response you're talking about. At the moment, this is done mostly through informal, trusted networks. It could be good to expand this and have a bigger set of people willing to jump in to help on various topics. The list seems most promising in that regard.

Other organisations:

  • CSET was in some ways a response to "woah the conversation around AI in DC is terrible and ill-informed" - a kind of emergency response.
  • FLI have been good at taking advantage of critical junctures through e.g. their huge Open Letters.
  • ALLFED has a rapid response capability, they wrote about it here. Having a plan, triaging, and bringing in volunteers seem like sensible steps.
  • Some of the monitoring work being done full-time (not by volunteers) in DC, London and Brussels seems especially useful for raising the alert to others.

Finally, CSER's Lara Mani has been doing some really cool stuff around scenario exercises and rapid response - like this workshop. For example, she went to Saint Vincent to help with the evaluation of their response to the eruption of La Soufrière (linked to her work on volcanic GCR). She also co-wrote: When It Strikes, Are We Ready? Lessons Identified at the 7th Planetary Defense Conference in Preparing for a Near-Earth Object Impact Scenario. Basically, I think exercises could be really useful too.

Just thinking out loud, natural triggers in the longtermist biosecurity space (where I'm by far most familiar) would be:

  1. a disease event or other early warning signal from public health surveillance
  2. new science & tech development in virology/biotech/etc
  3. shifts in international relations or norms relevant to state bioweapons programs
  4. indications that a non-state group was pursuing existentially risky bio capabilities

... anything else?

On my end, the FLI link is broken: https://futureoflife.org/category/laws/open-letters-laws/

I'd be really happy to see this get off the ground. I tried to run something a bit like this to work out how individuals and orgs should respond to the risk of nuclear war recently, and was pretty worried about wasting people's time if it didn't turn out to be useful or important, failing to request enough time from people if it turned out to be extremely important, how to find researchers, etc.

I'm also very excited about this. Let me know if I or the community health team can help support this get off the ground!

I'm curating this, even though it's not as recent (we didn't have curation in April). I think it's an important project that didn't get discussed enough when it was posted.  It seems like every time something happens (e.g. a serious global event like a pandemic or a major war), we scramble to understand what to do; this seems like a failure of planning and coordination, and I'm excited about the existence of more projects that try to prepare effectively. 

Quick update since April:

  • We got seed funding.
  • We formed a board, including some really impressive people in bio risk and AI.
  • We're pretty far through hiring a director and other key crew, after 30 interviews and trials.
  • We have 50 candidate reservists, as well as some horizon-scanners with great track records. (If you're interested in joining in, sign up here.)
  • Bluedot and ALLFED have kindly offered to share their monitoring infrastructure too.
  • See the comments in the job thread for more details about our current structure.

 

Major thanks to Isaak Freeman, whose Future Forum event netted us half of our key introductions and let us reach outside EA.

Similar to others, I basically want to ask whether the board also includes people with expertise other than biorisk and AI, like geopolitics. Since the post was part of a series on COVID (which was not thought to be an existential threat at any point), I had imagined the CRT was also intended to respond to crises outside the AI/biorisk areas.

Yeah, we're still looking for someone on the geopolitics side. Also, Covid was a biorisk. 

...looking for someone on the geopolitics side

Cool!

Also, Covid was a biorisk

Yes, but it wasn't an existential biorisk. And I assume once you include risks which are catastrophic but not existential, you also get things which aren't AI/pandemics. So that's what I was trying to say.

We will activate for things besides x-risks. Besides the direct help we render, this lets us learn about parts of the world that are difficult to learn about at any other time.

Yeah, we have a whole top-level stream on things besides AI, bio, nukes. I am a drama queen so I want to call it "Anomalies" but it will end up being called "Other".

Some of this text suggests a vision different from what I expected, and I have questions. What would an ALERT org for AI look like?


Part of the reason I'm writing is that there was a vision for another org that looks similar but has a different form. This org would respond much more directly to crises like Afghanistan or Ukraine. It would harness sentiment and redirect loose efforts into much more effective, coordinated activity, producing a large counterfactual increase in aid and welfare.

I'm guessing the vision for this org is probably more along the lines of what most people on the forum are thinking of for a rapid response organization.

In this org that mobilizes efforts effectively, the substantive differences are:

  1. The competencies and projects are distinct from past EA competencies and projects (quick decisions in noisy environments, organizing hundreds of people with feedback loops in hours and drawing on a lot of local competence)
  2. The amount of work and (fairly) tangible output would build trust and create a place to recruit talent, including very strong candidates  who are effective/impressive in different competencies. 
    1. This has deeper strategic value in building EA, especially in regions/countries where it isn't established and where community building efforts have difficulty.
  3. Created and supported by EAs, it would have a lot of real world knowledge and provides a very strong response to EA being esoteric.

A major theme of this org is proactive work: avoiding reactions to emergencies, and instead preparing plans and resources in advance, when a much smaller amount of resources can be much more impactful, or can even reduce the size of a crisis altogether. Socializing and executing this proactive viewpoint would provide a great way to communicate EA ideas.

The reason this org wasn't written up or executed (separate from time constraints), was that the org would demand a lot of attention (it's easy to get running nominally but quality of leadership and decisions is important; the resulting activity/size of people involved is large and difficult to control and manage; many correct decisions seem unpopular and difficult to socialize; it needs to accommodate other viewpoints and pressure, including from very impressive non-EA leaders). This demand for executive attention made it less viable, but still above most other projects.

Another reason is that creating this org might be harder, as some of this is harder to socialize to EAs and takes plenty of focus (it's sort of hard to explain, as there aren't many templates for this kind of org; momentum from some sort of early networking exercise of high-status EAs has less value and is harder to achieve; the initial phases are delicate, and tentative investment won't attract the kind of talent needed to drive the organization).

Now, sort of because of the same challenges above, I think any vision of a response/proactive/coordination project needs a lot of focus. 

So a project that tags the top EA interests of "AI" and "biorisk" is valuable (or extremely valuable by some worldviews), but doesn't seem like it would have the same form as what was described above, e.g.: 

  • It seems like you're advising and directing national decisions. It seems like a bit of a "pop-up" think tank? This is different than the vision above.
  • It seems hard and exploratory to do this alert org for AI.

Both of these traits result in a very different org than what was described above.

 

Do you have any comments? 

  • For example, does the org described above make any sense?
    • Do you think there is room for this org? 
  • (For natural reasons, it's unclear what form the new ALERT org will take), but was any of the text I wrote a mischaracterization of your new org?

Yeah we're not planning on doing humanitarian work or moving much physical plant around. Highly recommend ALLFED, SHELTER, and help.ngo for that though.

Your comment isn't really a reply, and it reduced clarity.

This is bad, since it's already hard to see the nature of the org suggested in my parent comment, and this further muddies it. Answering your comment by going through the orgs is laborious, requiring researching individual orgs and knocking them down, which seems unreasonable. Finally, it seems like your org is taking up the space for this org.

 

Yeah we're not planning on doing humanitarian work or moving much physical plant around. Highly recommend ALLFED, SHELTER, and help.ngo for that though.

ALLFED has a specific mission that doesn't resemble the org in the parent comment. SHELTER isn't an EA org; it provides accommodations for UK people? It's doubtful that help.ngo or its class of orgs occupy the niche - looking at the COVID-19 response gives some sense of how a clueful org would be valuable even in well-resourced situations. 

To be concrete, for what the parent org would do, we could imagine maintaining a list of crises and contingent problems in each of them, building up institutional knowledge in those regions, and preparing a range of strategies that coordinate local and outside resources. It would be amazing if this niche were even partially well served, or if these things were done well in past crises. Because it uses existing interest/resources, and EA money might just pay for admin, the cost effectiveness could be very high. This sophistication would be impressive to the public and is healthy for the EA ecosystem. It would also be a "Task-Y" and an on-ramp for talent to EA, who can be impressive and non-diluting. 

 

It takes great literacy and knowledge to make these orgs work; instead of deploying money or networking with EAs, such an org looks outward, brings resources to EA, and makes EA more impressive.

 

Earlier this year, I didn't write up or describe the org I mentioned (mostly because writing is costly, and climbing the hills/winning the games involved uses effort that is limited and fungible), but also because your post existed and it would be great if something came out of it.

I asked what an AI safety alert org would look like. As we both know, the answer is that no one has a good idea what it would do, and basically, it seems to ride close to AI policy orgs, of which some exist. I don't think it's reasonable to poke holes because of this or the fact it's exploratory, but it's pretty clear this isn't in the space described, which is why I commented.

Thank you for this work. I recommend updating the post.

Do you have plans for nuclear war, given that's the most likely GCR right now?

We're not really adding to the existing group chat / Samotsvety / Swift Centre infra at present, because we're still spinning up. 

My impression is that Great Power stuff is unusually hard to influence from the outside with mere research and data. We could maybe help with individual behaviour recommendations (turning the smooth forecast distributions of others into expected values and go / no-go advice).

Anyone thinking about this?

A Kessler syndrome of sufficient severity prevents spacecraft from leaving Earth for, depending on its duration, centuries to millennia.

A Kessler cascade will eventually result in such a syndrome - it's only a question of time - and the timescale can be estimated by looking at the slope of the graph of the rate of debris increase from collisions. This slope is easy to increase and hard to decrease.

Starlink has a lot of small satellites in orbit.

Starlink is carrying communications for a party to a terrestrial conflict, these may include military communications.

A different party to the conflict, wishing to deny its enemy the use of the constellation for military communications, may take actions to degrade or destroy the constellation in the course of the war.

Are there classes of action that would sufficiently degrade Starlink, such that it is no longer suitable for use as a communications platform by the party to the conflict, and which would lead to a near-term Kessler syndrome?

Optimistic: No, there's no risk to manned spaceflight from any action that could be taken against the Starlink constellation, including kinetic destruction of its spacecraft in their current locations.

Pessimistic: any damage to the constellation or its control systems results in an immediate Kessler syndrome, which prevents manned spacecraft from ascending to the high (or escape) orbits required to colonize the solar system.

SpaceX engineers should be able to definitively answer this question.

In the most pessimistic case, the Kessler syndrome will outlive terrestrial energy resources and/or the climate reserve, so the human race will end starving, buried in our waste.

Yeah could be terrible. As such risks go it's relatively* well-covered by the military-astronomical complex, though events continue to reveal the inadequacy of our monitoring. It's on our Other list.

* This is not saying much: on the absolute scale of "known about" + "theoretical and technological preparedness" + "predictability" + "degree of financial and political support" it's still firmly mediocre.

Russian arms control officials have now made public statements suggesting that commercial space infrastructure that is used to support the conflict may be a legitimate target.

EA did the analysis on alienating billionaires, so nobody is going to mock a US billionaire who wants to colonize space - but who deployed a commercial sat swarm that is now being talked about as a valid military target.

I'm guessing nobody funded by EA is putting the work in from an engineering standpoint to see if there's an existential risk there.

There are no new physics required, just engineering analysis. An engineer at a relevant firm could answer the questions. What breaks their system, how much debris does that course of action generate, is their constellation equipped to avoid cascading failure due to debris, what would be the impact on launch windows for high orbits of the worst case scenario?

I guess it has been done already and everything is totally fine, let's focus on other stuff, no need to call this an emergency.

Go run it, I'd read it.

I guess it has been done already and everything is totally fine,

 

Right, just like there's no cause for concern about the human health impacts of living on Mars for a while. I should just wait for my space ticket to go join the colonies, assuming my ship makes it through the building wall of space debris orbiting the planet.

FYI: I think I signed up as a reservist but I'm not totally sure. I've not heard anything from you by email, so I just signed up again.

Got you! Pardon the delay, am leaving confirmations to the director we eventually hire.

Nice update. Given that you've just been curated, suggest you edit the OP to add this update or link to this comment.

Been trying! The editor doesn't load for some reason.

Maybe a client-side content blocker on your end? Works fine for me today.

I think RP can function well as an emergency response team, at least on the research side. For example, we moved five researchers into full-time COVID work in March 2020, and then kept just one researcher on it for another six months after we didn't find tractable opportunities that were better than our normal work. But I think this shows how we can and will pivot as needed, and this flex capacity seems really good to have. IMO this would be even easier for us to do now that we are so much larger than in early 2020.

Also, as far as I can tell Alvea also seems like great practice of rapidly building a team.

Random thought: another way in which such a group could prepare for action is to have some experience commissioning forecasts on short notice from platforms like Good Judgment, Metaculus, Hypermind, etc., so that when there's some emergency (or signs that there might soon be an emergency, a la the early-Jan evidence about what became the COVID-19 pandemic), ALERT can immediately commission crowdcasts that help to track the development or advent of the emergency.

See also what Linch proposes in "Why short-range forecasting can be useful for longtermism":

To do this, I propose the hypothetical example of a futuristic EA Early Warning Forecasting Center. The main intent is that, in the lead up to or early stages of potential major crises (particularly in bio and AI), EAs can potentially (a) have several weeks of lead time to divert our efforts to respond rapidly to such crises and (b) target those efforts effectively.

Yep, loved it. ALERT wants to add readiness exercises, network capital portfolio, and a pre-allocated budget on top. 

Also the direction of ALERT is generally more on "doing". Doing seems often very different from forecasting, often needs different people - part of the relevant skills is plausibly even anticorrelated.

Y'all are fully complementary I think. From Linch's proposal:

So the appropriate structure of an elite Forecasting Center might be to pair it up with a elite crisis response unit, possibly in a network/constellation model such that most people are very part-time before urgent crises, such that the additional foresight is guaranteed to be acted on, rather than tossing the forecasts to the rest of the movement (whether decisionmakers or community members) to be acted on later.

I think if you're looking to hire someone for this role, you might want to provide a lot more information about the role (expected hours, responsibilities, start date, salary, etc.). Currently there's virtually no information provided and I wouldn't expect you would find great and qualified candidates - which would be a shame given how useful this project could be!

I love this idea! I've also been thinking a lot about the lack of quick-response capabilities within EA during the pandemic, so I think it could be a very impactful project. Having coordinated Our World in Data's work on COVID for the last two years, I'd be very happy to be in touch and contribute to anything data-science-related once you start to plan things.

I'm a huge fan of this proposal and I think it's a real missed opportunity that we didn't have these capabilities set up for Covid.

This seems like a really great thing to try at small scale first. Seems important to have a larger vision but make Little Bets to start, as Peter Sims or Cal Newport would say. You don't want to start with 30+ people with serious expertise at 90% likelihood of conversion because you want to anneal into a good structure, not bake your early mistakes into the lasting organizational culture. (Maybe you had already planned on this but seems worth clarifying as one of the most common mistakes made by EAs.)

Great point. We haven't made any irreversible decisions: we're letting the new director (someone with actual ops chops) design the sample path.

Great idea! One way that I could see an org like this staying busy when not responding to emergencies is that it could train other more specialized organizations on how to... put together a team to respond to emergencies. This could amplify its impact and help with networking. ALERT could even train PMs to deploy to other organizations in emergency situations. A lot of institutions are already optimally positioned to do good but lack the capacity in emergencies.

I love this idea! Curious to hear more of the details of what a more 'active' group would look like versus a more reserve-like format. I know in my work (COVID-19 response), we have a reserve medical and public health corps that helps out as various needs emerge, depending on skillset and qualifications. 

I would love to be a part of this.

I really like this idea. It seems like a very practical response to address some of the coordination issues that I saw during Covid-19.

I suggested something similar for the FTX project ideas competition.

Developing GCR scenario response teams and plans
Global catastrophic risks 

As Covid-19 demonstrated, groups are unable to efficiently mobilise and coordinate to deal with potential Global Catastrophic Risks (GCRs) or large-scale events without prior preparation. This leads to extensive inefficiencies, risks and social costs. Organisations address such unpreparedness by simulating key risks and training to handle them. We would similarly like to fund teams at relevant institutions and organisations to simulate GCR-related outcomes (e.g. nuclear attacks, wars or pandemic outbreaks) in order to develop and practice responses and disseminate best practice.


In my half-baked vision, I imagined teams of familiarised topic experts for different GCRs, perhaps with an operational and a communication team to support them. I think that this is pretty aligned (though obviously much better thought out). I think that the supporting institution is an excellent idea. I wonder if the institution should be dormant before trigger events, though, as it might be better for it to be working to establish preexisting trust and social capital with other key institutions and on key digital networks etc.

This looks like a much-needed initiative. I'm interested in signing up for the reserve; it looks not unlike the type of work I've done in the past.

This looks like an experiment worth trying out at scale. I will sign up for the team when you start the process!

I would be very interested in being a part of this program. It is right up my alley!

Very enjoyable reading; fantastic examples