
Note: I'm not claiming that I know of a new x-risk; I just want to know about the right policy in this situation.

If someone identifies a new existential or catastrophic risk, it seems prudent to avoid publishing it widely as this may constitute an infohazard.

However, it probably doesn't make sense to keep this information to oneself since other people can begin to work on research and mitigation if they are aware of the risk.

Is there a group of people to disclose new x-risks to that can make relevant experts aware of the risk? In general, how and where should someone disclose a new x-risk?

4 Answers

Not a comprehensive answer, but a few ideas. I don't know of any existing documentation or organisation covering how to do this.

  1. I think talking to people currently heavily involved in funding x-risk mitigation efforts is a good start. People with a proven track record of taking x-risks seriously are more likely to adequately consider the relevant concerns and to assist by progressing the discussion and coming up with meaningful mitigation strategies. For example, you could email Nick Bostrom or someone at Open Philanthropy. I've heard Kevin Esvelt is someone with a track record of taking info-hazards seriously too.
  2. Maybe don't go directly to the most critical people in existing efforts. It may make sense to qualify your ideas first by talking to other experts you trust in whichever domain is likely to know about those risks (though you'd want to avoid losing control of the narrative, e.g. by someone you tell overzealously raising the alarm and damaging your credibility).

There's probably a lot of risk-specific reasoning needed beyond this (for example, if the risk is tied up with specific economic activity, the way AI capabilities development is).

I endorse the suggestion to talk to someone senior at Open Phil. EA doesn't have a centralized decision-maker, but Open Phil might be closest, as a generally trusted group that is used to handling these issues.

Ok, and any advice for reaching out to trusted-but-less-prestigious experts? It seems unlikely that reaching out to e.g. Kevin Esvelt will generate a response!

Linch
I think someone like Esvelt (and also Greg, who personally answered in the affirmative) will probably respond. Even if they are too busy to do a call, they'll know the appropriate junior-level people to triage things to. 

To build on Linch's response here:
I work on the biosecurity & pandemic preparedness team at Open Philanthropy. Info hazard disclosure questions are often gnarly. I'm very happy to help troubleshoot these sorts of issues, including both general questions and more specific concerns. The best way to contact me, anonymously or non-anonymously, is through this short form. (Alternatively, you could reach my colleague Andrew Snyder-Beattie here.) Importantly, if you're reaching out, please do not include potentially sensitive details of info hazards in form submissions – if necessary, we can arrange more secure means of follow-up communication, anonymous or otherwise (e.g., a phone call). 

The guiding principle I recommend is 'disclose in the manner which maximally advantages good actors over bad actors'. As you note, this will usually mean something between 'public broadcast' and 'keep it to yourself', perhaps something akin to responsible disclosure in software engineering: try to get the message to those who can help mitigate the vulnerability without it leaking to those who might exploit it.

On how to actually do it, I mostly agree with Bloom's answer. One thing to add: although I can't speak for OP staff, Esvelt, etc., I'd expect that, like me, they would far rather have someone 'pester' them with a mistaken worry than see a significant concern get widely disseminated because someone was too nervous to reach out to them directly.

Speaking for myself: If something comes up where you think I would be worth talking to, please do get in touch so we can arrange a further conversation. I don't need to know (and I would recommend against including) particular details in the first instance.

(As perhaps goes without saying, at least for bio - and perhaps elsewhere - I strongly recommend against people trying to generate hazards, 'red teaming', etc.)

Generally speaking, I would suggest a shift of focus away from the particular risks which arise from emerging technologies, and towards the machinery generating all such risks: an ever-accelerating knowledge explosion.

It's natural to see a particular risk and wish to do something about it. But such a limited focus is not fully rational once we realize that it doesn't matter if we remove one particular existential risk unless we can remove them all. As an example, if I knew how to make genetic engineering fully safe, why would that matter if we then go on to have a nuclear war?

It's a logic failure to assume, as seemingly almost all "experts" do, that we can continue to enthusiastically fuel an ever-accelerating knowledge explosion and then somehow successfully manage every existential risk which emerges from that process, every day, forever.

We're failing to grasp what the concept of acceleration actually means. It means that if the knowledge explosion is going at, say, 50mph today, tomorrow it will be 75mph, then 150mph, then 300mph, and so on. Sooner or later this accelerating process of power accumulation will exceed the human ability to manage it. No one can predict exactly when or how we'll crash the system, but simple common-sense logic demonstrates that it will happen eventually on our current course.
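To make the compounding argument concrete, here is a toy Python sketch (my own illustration, not the author's model; all starting values and growth rates are arbitrary assumptions). It compares a quantity that doubles each period against a capacity that grows by a fixed amount per period: the compounding quantity always overtakes the linear one eventually, however large the initial gap.

```python
# Toy illustration: compounding growth vs. linear growth.
# All starting values and growth rates below are arbitrary assumptions.

def first_period_exceeding(initial_power=50.0, growth_factor=2.0,
                           initial_capacity=1000.0, capacity_step=100.0):
    """Return the first period in which compounding 'power' exceeds
    a linearly growing 'management capacity'."""
    power, capacity, period = initial_power, initial_capacity, 0
    while power <= capacity:
        period += 1
        power *= growth_factor      # compounding: multiply each period
        capacity += capacity_step   # linear: add a fixed amount each period
    return period

print(first_period_exceeding())  # prints 5 with the defaults above
```

With any growth factor above 1, the multiplicative term eventually dominates any additive one; the parameters only change how soon.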

The "experts" would have us focus on the details of particular emerging technological threats.   The experts are wrong.  What we need to be focused on instead is the knowledge explosion assembly line which is generating all the threats.

The way I deal with info-hazards in general is to balance the risks and gains of talking about them with specific people. I haven't wanted to talk to "EA seniors" unless I know them well enough to trust them. But I do talk to people, because it helps me grow my own understanding, and that might help me or them do something about the risk.

I don't think you know me well enough to trust me, but I'd be happy to hear about it and give feedback on the reasoning.

1 Comment

It's a very important question.

"However, it probably doesn't make sense to keep this information to oneself since other people can begin to work on research and mitigation if they are aware of the risk."

I don't think this is always the case. In anthropogenic x-risk domains, it can be very hard to decrease the chance of an existential catastrophe from a certain technology, and very easy to inadvertently increase it (by drawing attention to an info hazard). Even if the researchers (within EA) are very successful, their work can easily be ignored by the relevant actors in the name of competitiveness ("our for-profit public-benefit company takes the risk much more seriously than the competitors, so it's better if we race full speed ahead", "regulating companies in this field would make China get that technology first", etc.).

(See also: The Vulnerable World Hypothesis.)
