Update: Replies say this is not a good idea. I haven't deleted the post, however, so that people can still read the replies.
The idea came to me while thinking about this post.
There are x-risks we know of. What about the x-risks we face that we don't even know about? Would it be worth placing a financial bounty to incentivise their research and disclosure? Here's my proposal. I'm keen on feedback or thoughts, and keen to hear from anyone who wants to take this forward seriously.
How it works
Some org or set of orgs sets this up. They maintain a well-defined list of known x-risks, and a list of criteria for what makes an x-risk worth knowing about. People can submit a report disclosing any new x-risk that isn't on the list but satisfies the criteria for being worth looking into, and they are paid a bounty for doing so.
Now let's get into the details.
List of known x-risks
Some formalisation and rigid definition would be required here, to avoid exploitation of loopholes in the bounty. This list will be maintained over time. There could also be a second, more private list of x-risks too dangerous to even mention publicly. (Although I sincerely hope that list is empty today, you never know.)
Criteria for an x-risk to be "worth knowing about"
I assume the main criteria would be:
- Timeframe in which the x-risk could be triggered. 30 years? 200 years?
- Impact. Extinction or just massive population reduction? Extinction with painless death, or mass suffering? This depends on the org's subjective stances on suffering-based ethics, long-termism, etc. An org, for instance, might not prioritise knowledge of x-risks that cause massive population reduction (nukes) as much as knowledge of x-risks that cause extinction (AI safety), because they believe a reduced population can still regrow itself. Another org might find futures involving mass suffering or torture worse than extinction.
- Likelihood. Obviously x-risks only matter if they cross some minimum threshold of likelihood. I'm not sure whether estimating likelihood head-on is the best approach, but it may be necessary in order to define objective criteria for the bounty. We'd need to look at the key cruxes for the odds of the risk being triggered.
More criteria can be thought of and maintained; a toy sketch of screening against these criteria follows below.
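To make the screening concrete, here is a minimal sketch in Python. Every threshold, impact category, and field name below is a made-up assumption for illustration, not part of any real criteria list:

```python
from dataclasses import dataclass

# All thresholds and category names here are illustrative assumptions.
MAX_TIMEFRAME_YEARS = 200          # horizon beyond which the risk is out of scope
MIN_ANNUAL_LIKELIHOOD = 1e-4       # minimum odds threshold from the criteria
ELIGIBLE_IMPACTS = {"extinction", "unrecoverable_collapse", "mass_suffering"}

@dataclass
class Submission:
    name: str                  # short label for the claimed x-risk
    timeframe_years: float     # window in which it could be triggered
    annual_likelihood: float   # submitter's estimate, debated later with the panel
    impact: str                # claimed outcome category

def is_eligible(s: Submission, known_risks: set[str]) -> bool:
    """First-pass screen against the public criteria, before panel review."""
    if s.name in known_risks:
        return False  # already on the maintained list, so no bounty
    return (
        s.timeframe_years <= MAX_TIMEFRAME_YEARS
        and s.annual_likelihood >= MIN_ANNUAL_LIKELIHOOD
        and s.impact in ELIGIBLE_IMPACTS
    )
```

In practice the real screen would be a human panel judgement rather than a boolean check; the point is just that each criterion needs a crisp, published definition to close off loopholes.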
Bounty size
Bounty size can scale with the likelihood and impact of the disclosed x-risk. For instance, it might even be worth paying $100M for disclosure of an x-risk that has >5% annual odds of occurring. The onus is on the disclosing party to estimate likelihood and impact, although this can be further debated between all parties in an iterated fashion.
One could try determining optimal bounty sizes. This will require estimating the incremental odds of a disclosure happening as the bounty size increases. It will also require estimating the opportunity cost of that money being locked up as a bounty instead of being allocated elsewhere.
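As a back-of-envelope framing (every number, and the shape of the disclosure-odds curve, is a placeholder assumption rather than an estimate):

```python
def p_disclosure(bounty: float) -> float:
    """Assumed odds that the bounty elicits a genuine new x-risk.
    Square root gives diminishing returns: 100x the bounty, ~10x the odds."""
    return min(1.0, 0.01 * (bounty / 1e6) ** 0.5)

def expected_net_value(bounty: float,
                       value_of_disclosure: float = 1e10,  # placeholder
                       annual_opportunity_cost: float = 0.05,
                       years_locked: float = 10.0) -> float:
    """Expected value of a disclosure minus the cost of keeping capital idle."""
    benefit = p_disclosure(bounty) * value_of_disclosure
    cost = bounty * annual_opportunity_cost * years_locked
    return benefit - cost

# Scan a few candidate sizes to see roughly where net value peaks.
for b in (1e6, 1e7, 1e8, 1e9):
    print(f"bounty ${b:,.0f}: expected net value ${expected_net_value(b):,.0f}")
```

The hard part, of course, is that p_disclosure is unknowable in advance; the sketch only shows how the two estimates would trade off if you had them.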
I assume many people are not motivated by money in the first place, and would write the report even for fairly low remuneration. Some people, of course, are motivated by money, especially if they are devoting their spare time to it.
Large bounties can, however, increase public consciousness, because they make good headlines. This in turn could increase the number of people who are aware of x-risks and thinking about them, much like how many college students try solving P vs NP or the other Clay Millennium Problems for fun.
Some research might require funding before being shown to the panel, and someone might be willing to fund it privately if they know they can recoup the cost from the bounty. One could even imagine open-ended search in this fashion: an org could get private funding for finding x-risks even before they have found one, as long as the private funder knows they can receive a payout from the bounty if one is found. This requires a very large bounty to be effective, though.
An alternative to private funding with a single bounty is an iterated approach, where the disclosing party obtains funding for their research directly from the bounty-providing org. This would no longer be structured as a simple bounty and would have its own considerations, which I haven't discussed here but which are definitely worth discussing.
Cost of capital
Money allocated to the bounty has to sit idle, to ensure the bounty's credibility. At best it could be passively invested in stocks or crypto while waiting for someone to claim it. Some EA orgs may already have idle assets which they haven't yet allocated, and offering such a bounty could be a good idea for them. Once these orgs have fewer idle assets, they will have to make the tough call of how much to keep as bounty and how much to divert elsewhere. This will depend on the time-value of both opportunities, where time-value is measured in both monetary and non-monetary terms.
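A toy comparison of the two uses of the capital might look like this (both rates are placeholder assumptions, and the non-monetary side of time-value isn't captured at all):

```python
def held_as_bounty(principal: float, market_return: float = 0.05,
                   years: int = 10) -> float:
    """Capital stays claimable but compounds passively while it waits."""
    return principal * (1 + market_return) ** years

def granted_now(principal: float, impact_return: float = 0.08,
                years: int = 10) -> float:
    """Assumed 'impact return' of deploying the money today instead."""
    return principal * (1 + impact_return) ** years

principal = 50e6  # hypothetical idle pool
print(f"kept as bounty pool: ${held_as_bounty(principal):,.0f}")
print(f"granted today:       ${granted_now(principal):,.0f}")
```

If the assumed impact return of granting today exceeds the market return, the bounty only wins when the expected value of an eventual disclosure makes up the difference.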
Infohazards
There could be some x-risks that are also infohazards. In this case, the panel will have to interact privately with the disclosing party. If the bounty is paid out for such a disclosure, either the payout itself should be kept private (which may be hard), or people will know the bounty was paid out but not what for. The disclosing party should be able to disclose anonymously, so that information about them does not lead to guesses about what the disclosure was. The panel therefore needs to be highly trustworthy. There should also be proper guidance on how the disclosing party can maintain their anonymity during disclosure, whether by digital means (Tor, Linux, etc.) or physical ones (private addresses to visit or drop letters at).
Most importantly, the panel also needs to be competent at handling the infohazard after its discovery. Some discoveries might be intractable, in which case the only approach is to hope that no one else ever discovers them. Others may benefit from research that could reduce the risk. Still others could require lobbying governments or contacting intelligence agencies to take some secretive action that reduces the risk.
I agree that the odds of such world-damning information coming to light are fairly low, but out of epistemic humility I still think this whole scenario is worth planning out properly. Members of the panel must have all the requisite skills to handle this, or at least be able to contact all the people who do.
Org structure
There could be one panel that undertakes all of the above functions: maintaining the x-risk and criteria lists, interacting with disclosing parties and allocating bounties, making broader decisions on how big the bounty should be, handling infohazards, etc. Or these responsibilities could be distributed. Ideally the infrastructure is designed such that multiple ethical viewpoints can be supported and multiple organisations can be involved, both in appointing panel members and in adding more money to the bounty.
As they say, "I didn't have time to write you a short comment, so I wrote you a long one":
From your reply to HaydnBelfield: "It would be scary if we happen to live in a world where simply bringing x-risks as a general topic into public consciousness significantly increases odds of a bad actor finding a new x-risk. I also wonder how you get govts or the public to focus on solving x-risks if you don't actually want the public to spend time thinking about x-risks."
I myself was recently working on a draft of a post I was going to call "Big List of Dubious X-risks" -- the idea was to collect even silly-seeming X-risks (like civilization getting wiped out by first contact with aliens) into a list, where they could maybe be studied for commonalities, etc.
Initially, my draft had a section devoted to biological risks, but it soon became clear to me that people working in biosecurity are extremely concerned about infohazards, to the point where experts in the field (I ended up talking to some of them for unrelated reasons) think that even publishing non-technical concepts ("maybe you could make a virus that worked like this...") is almost certainly net-negative for civilization.
Furthermore, they almost implied that popularizing and "raising awareness" of GCBR risk at all was a bad idea in most situations.
And, most surprising of all, they also seemed skeptical of the general idea of a "Big List of Dubious X-risks" post, even with the biological-risks section removed and replaced by an infohazard disclaimer.
This is obviously a strange and paradoxical situation -- how can you learn about and fix something if you can't talk about it publicly? Their concerns about GCBR technical details were obviously justified, but at first I thought maybe the GCBR experts (due to the nature of their field) were being way too paranoid about milder infohazards -- it seemed crazy that we should discourage any kind of brainstorming about novel X-risks!
But I have mostly come around to their point of view as I've thought about it more.
Sometimes, the thinking up of new X-risk details is not actually very helpful for guarding against that X-risk. Precisely because there are so many different ways to create a killer pandemic (including probably some that even the experts won't manage to think up), our best defense is working on broad-spectrum technologies that are handy in lots of different scenarios. Similarly, if you were worried about something like the USA collapsing into a tyrannical dictatorship, it would possibly be helpful to try and plot out a detailed coup plan in order to see what other people might try, and then attempt to patch those particular security holes. But since there are probably many different paths to tyranny, at some point you'd want to stop patching individual holes and instead try to work on general countermeasures that would help stop most tyranny scenarios.
This position of secrecy seemed bizarre and unusual and incongruous with the rest of my experience of EA, but as an internet commenter my only experience of EA was the open internet discussions (and not the internal conversations at EA orgs, etc). Open internet discussions are on the extreme end of being totally public and accessible. Almost every other form of human social organization has more room for privacy, secrecy, and compartmentalization -- in-person conversations, businesses both large and small, political campaigns/movements, church groups, professional relationships like therapists/doctors/lawyers, etc. Obviously government espionage and military operations are at the far opposite end of this spectrum. So, seen in context, the worry about infohazards is not so unusual -- rather it seems more like a sensible reaction to the unusual situation of trying to discuss serious things responsibly in the unusual format of a movement using open internet discussions.
Anyways, presently I agree that encouraging people to speculate publicly about anthropogenic X-risks is a delicate and often net-negative endeavor, and thus the project of the EA Forum is more fraught than it would appear. Your idea wasn't to encourage people to speculate publicly but rather to submit private ideas to a bounty program, which works better, but is still dangerous insofar as, like Haydn says, this is "a prize for people to come up with new dangerous ideas", which they might later publish elsewhere.
Personally, I think the EA Forum needs to up its game in terms of how it handles infohazards and provides guidance on their thinking in this area. I think rather than a bounty program, a good idea would be to create a kind of "infohazard hotline", where people who have potentially dangerous but also potentially helpful new ideas could be encouraged to share their info privately -- ideas about nuclear risk could be directed to a trusted nuclear expert, AI ideas to an AI expert, etc. This would help avoid the paradox of "how can we make progress if we can't discuss things openly?" by providing a more obvious and accessible way to share ideas with experts without making them maximally public (more legible and accessible to new people than eg knowing who to private-message about something).
" I think the EA Forum needs to up its game in terms of how it handles infohazards and provides guidance on their thinking in this area."
+1 to this
Thanks for your detailed reply.
Interesting to know about biorisks; I got a similar impression from the posts here.
Re: the first bullet point, that might apply to details, but what about the broad category? When I made my post I wasn't thinking of "invent a specific virus" or "invent an easier-to-produce nuclear weapon". I was thinking about listing engineered viruses and nukes as broad categories, and the bounty would be for things completely unrelated to both. So you'd get the bounty for discovering a whole category of x-risks nobody has taken seriously until now. But yeah, I didn't think deeply enough about how distinct an idea must be from the list to be eligible.
Re: the second bullet point, that makes sense. Would it be worth creating official group chats across multiple orgs to discuss these issues? Maybe they already exist and I'm unaware.
The hotline sounds good. It selects for people with altruistic motivation rather than monetary motivation. (Although people might still work on it because it's fun or cool, or could get them fame or attention.) Is the idea that everyone who visits the EA Forum should know about the hotline?
Interesting idea. Wanted to throw in a few reflections from working at the Centre for the Study of Existential Risk for four years.
Just want to give a big plus one to the infohazards section. Several states and terrorist groups have been inspired by bioweapons information in the public domain; it's a real problem. At CSER we've occasionally thought up what might be a new contributor to existential risk, and have decided not to publish on it. I'm sure Anders Sandberg has come up with tonnes too (thankfully he's on the good side!) and has also published good stuff on them. Very important bit.
I imagine you'd get lots of kooks writing in (e.g. we get lots of Biblical prediction books in the post), so you'd need some way to sift through that. You'd also need some way to handle disagreement (e.g. I think climate change is a major contributor to existential risk; some other researchers in the field do not). Also worth thinking about incentives: in a way, this is a prize for people to come up with new dangerous ideas.
Thanks for your reply! I will check out Anders Sandberg's work.
Perhaps you can just ask them not to apply.
"Claims based on religious or other supernatural evidence will not be taken seriously. We place strong emphasis on scientific methodology. Please check out following examples of reports we have awarded in the past. You are wasting your time if your proposal does not respect this."
It might be a bit rude but that's kinda the point.
Valid point. Can you think of improvements to my proposal? Maybe you could restrict the bounty to people who can prove some qualifications of conscientiousness and mental stability, so that people who know they'd fail these criteria might be less likely to spend time on it. I don't know how practical or effective that is.
It would be scary if we happen to live in a world where simply bringing x-risks as a general topic into public consciousness significantly increases odds of a bad actor finding a new x-risk. I also wonder how you get govts or the public to focus on solving x-risks if you don't actually want the public to spend time thinking about x-risks.
P.S. I'd request the downvoter to state why they downvoted, so I can learn. A one-liner will do.