As a follow-up to my research on high-level machine intelligence (HLMI) risk, for which I recently posted requesting assistance with likelihood values, the second half of the ranking asks for subjective judgments on each condition's overall impact on international stability and security. The collection window for the project is closing soon, so any contribution to either survey would be greatly appreciated.
Please share your perspective on whether each listed AI scenario condition, if it were to occur, would greatly increase, greatly decrease, or have no effect at all on social stability and security.
This form lists several potential paths for high-level machine intelligence (HLMI). Each question covers one dimension (e.g., takeoff speed) with three to four conditions (e.g., fast) listed on the left, and asks the participant to:
- Rank the degree to which each condition could impact social stability or security (from "greatly increase" to "greatly decrease") in the long term. For conditions (e.g., technologies) that you don't believe would cause an increase or a decrease, choose the option that seems best in your view or leave it as "no effect."
The survey is more of a ranking exercise than a questionnaire, and if the topic is familiar to you the detailed writeups are likely unnecessary. The goal is to classify the degree of impact we could expect from each condition (e.g., fast takeoff, deep learning scaling to HLMI, concentrated control of HLMI).
I'd appreciate any help you can provide! These values are subjective, and some conditions will likely have no effect at all, but your responses will be very helpful in categorizing each dimension by its degree of overall risk to civilization.
This project aims to develop a futures modeling framework for advanced AI scenario development. The goal is to cover the full spectrum of AI development paths and identify interesting combinations or, ideally, entirely new AI scenarios, while highlighting risks and paths that receive less consideration (e.g., structural risks, decision/value erosion, global failure cascades).
For further details on the methodology, purpose, and overall study please check out the original post here.
Thank you, I really appreciate any help you can provide.
Yes, I'm having a tough time explaining the purpose of the model, which has led to very long, convoluted descriptions. I am not predicting, or attempting to predict, any of these conditions. I understand your skepticism and share it: I generally believe it can be a waste of time to concentrate on forecasting issues that are very difficult or impossible to measure. That is certainly not the purpose here.
The point is to construct broad categories of approximately plausible and impactful scenarios (broadly speaking, a lot of these should be marked "no effect") simply to create categories. The output does not say what will or will not happen, or what is certainly best or worst; it will be a narrative showing all the options, each carrying mixed values (the value-combining process reshuffles them all regardless). The likelihood survey values are the more valuable of the two, but a best assessment of impact is important as well (though admittedly much less clear).
For example, the values for each individual condition (e.g., paradigm) will be combined with those of every other condition, but the output is not "fast takeoff scenario is 80% likely" or "greatly decrease x." The output will be scenario elements that mix both values, e.g., "fast takeoff" (unlikely but high impact) paired with "new paradigm" (likely but moderate impact), as just one of many possible outputs. Thousands of these pairs will then be clustered, and we'll use the clusters to develop scenarios.
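To make that pipeline concrete, here is a toy sketch of the pairing and bucketing steps. Every dimension name, condition, and score below is an invented placeholder, and the quadrant bucketing is a crude stand-in for whatever clustering method the project actually uses:

```python
from itertools import product

# Toy morphological field: two hypothetical dimensions, each with
# conditions scored as (likelihood, impact). All numbers are invented.
dimensions = {
    "takeoff_speed": {"fast": (0.2, 2.0), "slow": (0.7, 0.5)},
    "paradigm": {"new_paradigm": (0.6, 1.0), "prosaic_dl": (0.5, 1.5)},
}

# Enumerate every cross-dimension pair of conditions.
pairs = []
dim_items = list(dimensions.items())
for i in range(len(dim_items)):
    for j in range(i + 1, len(dim_items)):
        (_, conds_a), (_, conds_b) = dim_items[i], dim_items[j]
        for (ca, (la, ia)), (cb, (lb, ib)) in product(conds_a.items(), conds_b.items()):
            pairs.append({
                "pair": (ca, cb),
                "likelihood": la * lb,  # naive joint likelihood (independence assumed)
                "impact": max(ia, ib),  # crude combined impact
            })

# Crude stand-in for clustering: bucket pairs into likelihood/impact quadrants.
clusters = {}
for p in pairs:
    key = ("likely" if p["likelihood"] > 0.25 else "unlikely",
           "high" if p["impact"] >= 1.5 else "moderate")
    clusters.setdefault(key, []).append(p["pair"])
```

With two dimensions of two conditions each, this yields four pairs, e.g., ("fast", "new_paradigm") lands in the (unlikely, high-impact) bucket.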
For the likelihood questions this is clearer, I think: depending on the variable, the values are multiplied (or added) to highlight how one condition is affected by another. Ideally, and this is the plan depending on how this goes, we would hold a workshop or roundtable to go through each of these pairs (e.g., fast takeoff and distribution form one value pair) and request expert judgment on how one condition may affect the other.
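Mechanically, that cross-impact step could look something like the sketch below. The matrix values and condition names are invented placeholders for the expert judgments, and the multiplicative rule is just one of the two combination options mentioned above:

```python
# Hypothetical expert-judgment matrix: how condition A's occurrence scales
# condition B's base likelihood. Factors > 1 amplify, < 1 suppress.
# All names and numbers here are invented for illustration.
cross_impact = {
    ("fast_takeoff", "concentrated_control"): 1.5,
    ("slow_takeoff", "concentrated_control"): 0.8,
}

def adjusted_likelihood(base, cond_a, cond_b):
    """Multiplicative adjustment; an additive rule could apply to other variables."""
    factor = cross_impact.get((cond_a, cond_b), 1.0)
    return min(1.0, base * factor)  # clamp so the result stays a valid likelihood
```

So a 0.4 base likelihood for concentrated control becomes 0.6 under fast takeoff and 0.32 under slow takeoff, with unlisted pairs left unchanged.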
While this is somewhat imprecise by design, an AI researcher's view on whether deep learning will lead to AGI, or on whether prosaic AI is more or less destabilizing, is, I believe, much more trustworthy than a random guess.
I have realized, though, that in future iterations (if there are any) I most certainly will not ask likelihood questions. That framing gets people thinking about probability, which would require more precise questions. And impact is just a tough thing to assess. But the combination is important. Other projects we've done with this approach, on climate change and Arctic politics, were also quite vague, yet valuable in the end.
It looks like I just submitted another long, convoluted description, lol. I get carried away attempting to explain the issue.
In any event, what I'm requesting is best estimates from knowledgeable people, which will form the groups for the model. Those groups will be used to paint the range of hopefully quite unique scenario combinations and to test the GMA method. Who knows, it may provide important insights or a new tool for the community to use.
If you have any suggestions on how to frame or explain this better (now or in the future), please let me know.