All of HaydnBelfield's Comments + Replies

Islands, nuclear winter, and trade disruption as a human existential risk factor

Just a quick one: this is great and groundbreaking work, thanks for doing it!

Linkpost: The Scientists, the Statesmen, and the Bomb

The authors' takeaway is:

"The implication of these historical outcomes is that in order to reliably affect decision-making, you must yourself be the decision-maker. Prestige, access to decision-makers, relevant expertise, and cogent reasoning are not sufficient; even with all these you are liable to be ignored. By understanding the complex workings of decision-making at the highest levels, you can improve your chances of influencing outcomes in the way you desire, but even if you understand how the game is played, you are ultimately subject to the judgment

... (read more)
3 · Lauro Langosco · 1mo
Thanks for the good points and the links! I agree the arms control epistemic community is an important story here, and re-reading Adler's article I notice he even talks about how Szilard's ideas were influential after all: Despite this, in my reading Adler's article doesn't contradict the conclusions of the report: my takeaway is that "Prestige, access to decision-makers, relevant expertise, and cogent reasoning" (while not sufficient on its own) is a good foundation that can be leveraged to gain influence, if used by a community of people working strategically over a long time period, whose members gain key positions in the relevant institutions.
[Book] On Assessing the Risk of Nuclear War

This looks like absolutely fascinating, much-needed work. I particularly appreciate the variety of methodological approaches. Looking forward to reading!

The established nuke risk field deserves more engagement

Definitely agree! We should engage more with the field. I would note there's good stuff, eg here, here, here, here.

Who critiques EA, and at what timestamp in the podcast?

9 · Ilverin · 1mo
It's Dr. Jeffrey Lewis at 32:08
Why AGI Timeline Research/Discourse Might Be Overrated

I assume Carl is thinking of something along the lines of "try and buy most new high-end chips". See eg Sam interviewed by Rob.

Strategic Perspectives on Long-term AI Governance: Introduction

Probably "environment-shaping", but I imagine future posts will discuss each perspective in more detail.

3 · MMMaas · 4d
(apologies for very delayed reply) Broadly, I'd see this as:
  • 'anticipatory' if it is directly tied to a specific policy proposal or project we want to implement ('we need to persuade everyone of the risk, so they understand the need to implement this specific governance solution'),
  • 'environment-shaping' (aimed at shaping key actors' norms and/or perceptions), if we do not have a strong sense of what policy we want to see adopted, but we would like to inform these actors to come up with the right choices themselves, once convinced.
A Critique of The Precipice: Chapter 6 - The Risk Landscape [Red Team Challenge]

It's really important that there is public, good-faith, well-reasoned critique of this important chapter in a central book in the field. You raise some excellent points that I'd love to see Ord (and/or others) respond to. Congratulations on your work, and thank you!

On Deference and Yudkowsky's AI Risk Estimates

More than Philip Tetlock (author of Superforecasting)?

Does that particular quote from Yudkowsky not strike you as slightly arrogant?

2 · Habryka · 2mo
Yes, definitely much more than Philip Tetlock, given that our community had strong norms of forecasting and making bets before Tetlock had done most of his work on the topic (Expert Political Judgment was out, but as far as I can tell was not a major influence on people in the community, though I am not totally confident of that). I am generally strongly against a culture of fake modesty. If I want people to make good decisions, they need to be able to believe things about themselves that might sound arrogant to others. Yes, it sounds arrogant to an external audience, but it also seems true, and it seems like whether it is true should be the dominant fact on whether it is good to say.
What are EA's biggest legible achievements in x-risk?

There's a whole AI ethics and safety field that would have been much smaller and less influential.

From my paper Activism by the AI Community: Analysing Recent Achievements and Future Prospects.

"2.2 Ethics and safety 

There has been sustained activism from the AI community to emphasise that AI should be developed and deployed in a safe and beneficial manner. This has involved Open Letters, AI principles, the establishment of new centres, and influencing governments. 

The Puerto Rico Conference in January 2015 was a landmark event to promote the bene... (read more)

1 · acylhalide · 2mo
Thanks for your reply! I'll see if I can convince people using this. (Also very small point but: the PDF title [https://arxiv.org/ftp/arxiv/papers/2001/2001.06528.pdf] says "Insert your title here", when viewed in Chrome at least)
Things usually end slowly

Great post! Mass extinctions and historical societal collapses are important data sources - I would also suggest ecological regime shifts. My main takeaway is actually about multicausality: several ‘external’ shocks typically occur in a similar period. ‘Internal’ factors matter too - very similar shocks can affect societies very differently depending on their internal structure and leadership. When complex adaptive systems shift equilibria, several causes are normally at play.

Luke Kemp, Anders Sandberg and I (and many others!) have three separate chapters touching on these topics in a forthcoming book on 'Historical Systemic Collapse' edited by Princeton's Miguel Centeno et al. Hopefully coming out this year.

1 · OllieBase · 2mo
Thanks Haydn! I didn't totally follow this since I'm not familiar with some of these terms, but great to hear that there's some more thorough literature surrounding this!
Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Thanks for this. I'm more counselling "be careful about secrecy" rather than "don't be secret". Especially be careful about secret sprints, about being told you're in a race but not being able to see the secret information explaining why, and about "you have to take part in this secret project".

On the capability side, the shift in AI/ML publication and release norms towards staged release (not releasing the full model immediately but carefully checking for misuse potential first), structured access (through APIs) and so on has been positive, I think.

On the risks/analys... (read more)

How Could AI Governance Go Wrong?

Hi both,

Yes behavioural science isn't a topic I'm super familiar with, but it seems very important!

I think most of the focus so far has been on shifting norms/behaviour at top AI labs, for example nudging Publication and Release Norms for Responsible AI.

Recommender systems are a great example of a broader concern. Another is lethal autonomous weapons, where a big focus is "meaningful human control". Automation bias is an issue even up to the nuclear level - the concern is that people will more blindly trust ML systems, and won't disbelieve them as people d... (read more)

Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Thanks!
 

This was very much Ellsberg's view on eg the 80,000 Hours podcast:

"And it was just a lot better for Boeing and Lockheed and Northrop Grumman and General Dynamics to go that way than not to have them, then they wouldn’t be selling the weapons. And by the way what I’ve learned just recently by books like … A guys named Kofsky wrote a book called Harry Truman And The War Scare of 1947.

Reveals that at the end of the war, Ford and GM who had made most of our bombers went back to making cars very profitably. But Boeing and Lockheed didn’t make produ

... (read more)
How Could AI Governance Go Wrong?

Hi, yes good question, and one that has been much discussed - here are three papers on the topic. I'm personally of the view that there shouldn't really be much conflict/contradiction - we're all pushing for the safe, beneficial and responsible development and deployment of AI, and there's lots of common ground.

Bridging near- and long-term concerns about AI 

Bridging the Gap: the case for an Incompletely Theorized Agreement on AI policy 

Reconciliation between Factions Focused on Near-Term and Long-Term Artificial Intelligence 

8 · tamgent · 2mo
Agreed. One book that made it really clear for me was The Alignment Problem by Brian Christian. I think that book does a really good job of showing how it's all part of the same overarching problem area.
Some unfun lessons I learned as a junior grantmaker

This is how I've responded to positive funding news before, seems right.

Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Thanks! And thanks for this link. Very moving on their sense of powerlessness.

Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Thanks Rohin. Yes I should perhaps have spelled this out more. I was thinking about two things - focussed on those two stages of advocacy and participation.

1. Don't just get swept up in race rhetoric and join the advocacy: "oh there's nothing we can do to prevent this, we may as well just join and be loud advocates so we have some chance to shape it". Well no, whether a sprint occurs is not just in the hands of politicians and the military, but also to a large extent in the hands of scientists. Scientists have proven crucial to advocacy for, and participat... (read more)

9 · Rohin Shah · 3mo
Cool, that makes sense, thanks!
Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Thanks Pablo for those thoughts and the link - very interesting to read in his own words.

I completely agree that stopping a 'sprint' project is very hard - probably harder than not beginning one. The US didn't slow down on ICBMs in 1960-2 either. 

We can see some of the mechanisms by which this occurs around biological weapons programs. Nixon unilaterally ended the US one; Brezhnev increased the size of the secret Soviet one. So in the USSR there was a big political/military/industrial complex with a stake in the growth of the program and substantial l... (read more)

Risks from Autonomous Weapon Systems and Military AI

I don't think it's a hole at all, I think it's quite reasonable to focus on major states. The private sector approach is a different one with a whole different set of actors/interventions/literature - it completely makes sense that it's outside the scope of this report. I was just doing classic whatabouterism, wondering about your take on a related but separate approach.

Btw I completely agree with you about cluster munitions. 

4 · Gentzel · 3mo
The cluster munitions divestment example seems plausibly somewhat more successful in the West, but not elsewhere (e.g. the companies that remain on the "Hall of Shame [https://stopexplosiveinvestments.org/disinvestment/hall-of-shame/]" list). I'd expect something similar here if the pressure against LAWs were narrow (e.g. against particular types with low strategic value).

Decreased demand does seem more relevant than decreased investment though. If LAWs are stigmatized entirely, and countries like the U.S. don't see a way to tech their way out to sustain advantage, then you might not get the same degree of influence in the first place since demand remains.

I find it interesting that the U.S. wouldn't sign the Convention on Cluster Munitions, but also doesn't seem to be buying or selling any more. One implication might be that the stigma disincentivizes change/tech progress, since more discriminant cluster munitions would be stigmatized as well. I presume this reduces the number of such weapons, but increases the risk of collateral damage per weapon by slowing the removal of older, more indiscriminate/failure-prone weapons from arsenals.

https://www.washingtonpost.com/news/checkpoint/wp/2016/09/02/why-the-last-u-s-company-making-cluster-bombs-wont-produce-them-anymore/

While in principle you could drive down the civilian harm with new smaller bomblets that reliably deactivate themselves if they don't find a military target, as far as I can tell, to the degree that the U.S. is replacing cluster bombs, it is just doing so with big indiscriminate bombs (BLU 136/BLU 134) that will just shower a large target area with fragments.
Risks from Autonomous Weapon Systems and Military AI

Great report! Looking forward to digging into it more. 

It definitely makes sense to focus on (major) states. However, a different intervention I don't think I saw in the piece is about targeting the private sector - those actually developing the tech. E.g. Reprogramming war by Pax for Peace, a Dutch NGO. They describe the project as follows:

"This is part of the PAX project aimed at dissuading the private sector from contributing to the development of lethal autonomous weapons. These weapons pose a serious threat to international peace and security, and

... (read more)
3 · christian.r · 3mo
Hi Haydn, That’s a great point. I think you’re right — I should have dug a bit deeper on how the private sector fits into this. I think cyber is an example where the private sector has really helped to lead — like Microsoft’s involvement at the UN debates, the Paris Call, the Cybersecurity Tech Accord, and others — and maybe that’s an example of how industry stakeholders can be engaged. I also think that TEVV-related norms and confidence building measures would probably involve leading companies. I still broadly think that states are the lever to target at this stage in the problem, given that they would be (or are) driving demand. I am also always a little unsure about using cluster munitions as an example of success — both because I think autonomous weapons are just a different beast in terms of military utility, and of course because of the breaches (including recently). Thank you again for pointing out that hole in the report!
Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Thanks for these questions! I tried to answer your first in my reply to Christian.

On your second, "delaying development" makes it sound like the natural outcome/null hypothesis is a sprint - but it's remarkable how the more 'natural' outcome was to not sprint, and how much effort it took to make the US sprint.

To get initial interest at the beginning of the war required lots of advocacy from top scientists, like Einstein. Even then, the USA didn't really do anything from 1939 until 1941, when an Australian scientist went to the USA, persuaded US scient... (read more)

1 · Sharmake · 3mo
The big difference is Japan wouldn't even exist as a nation or culture, due to Operation Downfall, starvation and insanity. The reason is that without nukes, the invasion of Japan would have begun, and one of the most important characteristics they had was both an entire generation under propaganda, which is enough to change cultural values, and their near-fanaticism about honorable death. Death and battle were frankly over-glorified in Imperial Japan, and soldiers would virtually never surrender. The result would be the non-existence of Japan within several years.
Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Thanks for the kind words Christian - I'm looking forward to reading that report, it sounds fascinating.

I agree with your first point - I say "They were arguably right, ex ante, to advocate for and participate in a project to deter the Nazi use of nuclear weapons." Actions in 1939-42 or around 1957-1959 are defensible. However, I think this highlights 1) that accurate information in 1942-3 (and 1957) would have been useful and 2) that when they found out the accurate information (in 1944 and 1961), it's very interesting that it didn't stop the arms buildup.

The quest... (read more)

Thank you for the reply! I definitely didn’t mean to mischaracterize your opinions on that case :)

Agreed, a project like that would be great. Another point in favor of your argument that this is a dynamic to watch out for on AI competition is that verifying claims of superiority may be harder for software (along the lines of Missy Cummings’s “The AI That Wasn’t There” https://tnsr.org/roundtable/policy-roundtable-artificial-intelligence-and-international-security/#essay2). That seems especially vulnerable to misperceptions.

Climate change - Problem profile

Pretty sure jackva is responding to the linked article, not just this post, as e.g. they quote footnote 25 in full.

On the first point, I think that kind of argument could be found in Jonathan B. Wiener's work on "'risk-superior moves'—better options that reduce multiple risks in concert." See e.g.

On the second point, what about climate change in India-Pakistan? e.g. an event worse than the current terrible heatwave - heat stress and agriculture/economic shock ... (read more)

2 · John G. Halstead · 3mo
(In that case, he said that the post ignores indirect risks, which isn't true.) On your first point, my claim was "I have never seen anyone argue that the best way to reduce biorisk or AI is to work on climate change". The papers you shared also do not make this argument. I'm not saying that it is conceptually impossible for working on one risk to be the best way to work on another risk. Obviously, it is possible. I am just saying it is not substantively true about climate on the one hand, and AI and bio on the other. To me, it is clearly absurd to hold that the best way to work on these problems is by working on climate change. On your second point, I agree that climate change could be a stressor of some conflict risks in the same way that anything that is socially bad can be a stressor of conflict risks. For example, inadequate pricing of water is also a stressor of India-Pakistan conflict risk for the same reason. But this still does not show that it is literally the best possible way to reduce the risk of that conflict. It would be very surprising if it were since there is no evidence in the literature of climate change causing interstate warfare. Also, even the path from India-Pakistan conflict to long-run disaster seems extremely indirect, and permanent collapse or something like that seems extremely unlikely.
Climate change - Problem profile

Note that "humanity is doomed" is not the same as 'direct extinction', as there are many other ways for us to waste our potential.

I think it's an interesting argument, but I'm unsure that we can get to a rigorous, defensible distinction between 'direct' and 'indirect' risks. I'm also unsure how this framework fits with the "risk/risk factor" framework, or the 'hazard/vulnerability/exposure' framework that's common across disaster risk reduction, business + govt planning, etc. I'd be interested in hearing more in favour of this view, and in favour of the 2 c... (read more)

I think all effects in practice are indirect, but "direct" can be used to mean a causal effect about which we have direct evidence, i.e. we made observations of the effect of the cause on the outcome without needing to discuss intermediate outcomes, rather than piecing multiple steps of causal effects together in a chain. The longer the causal chain, the more likely there are to be effects in the opposite direction along parallel chains. Furthermore, we should generally be skeptical of any causal claim, so the longer the causal chain, the more claims of which we should be skeptical, and the weaker we should expect the overall effect.

I'm not sure I understand why you don't think the in/direct distinction is useful. 

I have worked on climate risk for many years and I genuinely don't understand how one could think it is in the same ballpark as AI, biorisk or nuclear risk. This is especially true now that the risk of >6 degrees seems to be negligible. If I read about biorisk, I can immediately see the argument for how it could kill more than 50% of the population in the next 10-20 years. With climate change, for all the literature I have read, I just don't understand how one could ... (read more)

Climate change - Problem profile

For other readers who might be as confused as I was - there's more in the profile on 'indirect extinction risks' and on other long-run effects on humanity's potential.

Seems a bit odd to me to just post the 'direct extinction' bit, as essentially no serious researcher argues that there is a significant chance that climate change could 'directly' (and we can debate what that means) cause extinction. However, maybe this view is more widespread amongst the general public (and therefore worth responding to)?

On 'indirect risk', I'd be interested in hearing m... (read more)

Haydn, would you be able to quantify the probability that, in your assessment, climate change will indirectly cause human extinction this century, relative to biorisk? Benjamin Hilton speculates that it's less than 0.1x, but it's not clear to me whether you disagree with this estimate (assuming you do) because you think it's closer to 0.3x, 1x, or 3x. Having more clarity on this would help me understand this discussion better, I think.

Strongly agree with Haydn here on the critique. Indeed, focusing primarily on direct risks and ignoring the indirect risks or, worse, making a claim about the size of the indirect risks that has no basis in anything but stating it confidently really seems unfortunate, as it feels like a strawman.

Justification for low indirect risk from the article:
"That said, we still think this risk is relatively low. If climate change poses something like a 1 in 10,000 risk of extinction by itself, our guess is that its contribution to other existential risks... (read more)

I think there is good reason to focus on direct extinction given their audience. As they say at the top of their piece, "Across the world, over half of young people believe that, as a result of climate change, humanity is doomed"

What is your response to the argument that because the direct effects of AI, bio and nuclear war are much larger than the effects of climate change, the indirect effects are also likely much larger? To think that climate change has bigger scale than eg bio, you would have to think that even though climate's direct effects are small... (read more)

Information security considerations for AI and the long term future

Thanks for this Jeffrey and Lennart! Very interesting, and I broadly agree. Good area for people to gain skills/expertise, and private companies should beef up their infosec to make it harder for them to be hacked and stop some adversaries.

However, I think it's worth being humble/realistic. IMO a small/medium tech company (even Big Tech themselves) is not going to be able to stop a motivated state-linked actor from the P5. Would you broadly agree?

5 · Jeffrey Ladish · 3mo
I don't think an ordinary small/medium tech company can succeed at this. I think it's possible with significant (extraordinary) effort, but that sort of remains to be seen. As I said in another thread [https://forum.effectivealtruism.org/posts/WqQDCCLWbYfFRwubf/information-security-considerations-for-ai-and-the-long-term?commentId=TgaGrEzyysEbL4g89]: "I think it's an open question right now. I expect it's possible with the right resources and environment, but I might be wrong. I think it's worth treating as an untested hypothesis (that we can secure X kind of system for Y application of resources), and to try to get more information to test the hypothesis. If AGI development is impossible to secure, that cuts off a lot of potential alignment strategies. So it seems really worth trying to find out if it's possible."
What to include in a guest lecture on existential risks from AI?

AGI Safety Fundamentals has the best resources and reading guides. The best short intros are the very short (500-word) intro and a slightly longer one, both from Kelsey Piper.

You might find a lecture of mine useful:
 

1 · Aryeh Englander · 4mo
Thanks!
13 ideas for new Existential Risk Movies & TV Shows – what are your ideas?

GLAAD is a really useful case study, thanks for highlighting it. Participant Media was another model I had in mind - they produced Contagion, Spotlight, Green Book, An Inconvenient Truth, Citizenfour, Food Inc, and The Post amongst others.

13 ideas for new Existential Risk Movies & TV Shows – what are your ideas?

Hell yeah, I can't wait to watch this and get really depressed. Have you read or watched When The Wind Blows? Seems a similar tone.

3 · James_Banks · 4mo
I hadn't heard of When the Wind Blows before. From the trailer, I would say Testament may be darker, although a lot of that has to do with me not responding to animation (or When the Wind Blows' animation) as strongly as to live-action. (And then from the Wikipedia summary, it sounds pretty similar.)
13 ideas for new Existential Risk Movies & TV Shows – what are your ideas?

I will check out neXt, thanks. I like the idea of reboots, very Edge Of Tomorrow.

13 ideas for new Existential Risk Movies & TV Shows – what are your ideas?

Yes of course! KSR hinted there may be some interest in Ministry - Mars seems stuck in development hell unfortunately.

1 · Charlotte · 4mo
I am sorry. What is KSR?
"Long-Termism" vs. "Existential Risk"

I didn't mean to imply that you were plagiarising Neel. I more wanted to point out that many reasonable people (see also Carl Shulman's podcast) are pointing out that the existential risk argument can go through without the longtermism argument.

I posted the graphic below on twitter back in Nov. These three communities & sets of ideas overlap a lot and I think reinforce one another, but they are intellectually & practically separable, and there are people in each section doing great work. Just because someone is in one section doesn't mea... (read more)

No offense to Neel's writing, but it's instructive that Scott manages to write the same thesis so much better. It:

  • is 1/3 the length
    • Caveats are naturally interspersed, e.g. "Philosophers shouldn't be constrained by PR."
    • No extraneous content about Norman Borlaug, leverage, etc
  • has a less bossy title
  • distills the core question using crisp phrasing, e.g. "Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?" (my emphasis)

...and a ton of other things. Long-live the short EA Forum post!

Thanks, I had read that but failed to internalize how much it was saying this same thing. Sorry to Neel for accidentally plagiarizing him.

Case for emergency response teams

I think this is a very cool idea!

To offer some examples of similar things that I've been involved in - the trigger has often been some new regulatory or legislative process. 

  • "woah the EU is going to regulate for AI safety ... we should get some people together to work out how this could be helpful/harmful, whether/how to nudge, what to say, and whether we need someone full-time on this" -> here
  • "woah the US (NIST) is going to regulate for AI safety..." -> here
  • "woah the UK wants to have a new Resilience Strategy..." -> here
  • "woah the UK wants to
... (read more)
7 · Alex D · 2mo
Just thinking out loud, natural triggers in the longtermist biosecurity space (where I'm by far most familiar) would be:
1. a disease event or other early warning signal from public health surveillance
2. new science & tech development in virology/biotech/etc
3. shifts in international relations or norms relevant to state bioweapons programs
4. indications that a non-state group was pursuing existentially risky bio capabilities
... anything else?
Case for emergency response teams

This is a side-note, but I dislike the EA jargon terms hinge/hingey/hinginess and think we should use the terms "critical juncture" and "criticalness" instead. "Critical juncture" is the common term used in political science, international relations and other social sciences. It's better theorised and empirically backed than "hingey", doesn't sound silly, and is more legible to a wider community.

Critical Junctures - Oxford Handbooks Online

The Study of Critical Junctures - JSTOR

https://users.ox.ac.uk/~ssfc0073/Writings%20pdf/Critical%20Junctures%20Ox%20HB%20final.pdf 

h... (read more)

Where is the Social Justice in EA?

I just linked to that too! I think about it all the time.

Where is the Social Justice in EA?

I think your assessment of the lack of diversity in EA is right, that this is a problem (we're missing out on talented people, coalition allies, specific knowledge, new ideas, wider perspectives, etc), and that we need to work towards improving this situation. On all three (questions 1-3), see this statement from CEA. Thanks for raising this!

In terms of what we can be doing, being inclusive in hiring and pipeline-building seem very important - Open Philanthropy are amongst the best practice on this (see here) and Magnify Mentoring are doing awesom... (read more)

4 · Guy Raveh · 4mo
Theoretically - but GiveWell seems to prefer to keep money rather than give it directly. There may or may not be good reasons for that, but it's not a strong message for direct empowerment of marginalised communities.
Announcing the EU Tech Policy Fellowship

This is really awesome! Well done for launching this, very excited to see what you achieve.

Effectiveness is a Conjunction of Multipliers

I know lots of people who are incredibly impactful and are parents and/or work in academia. For many, career choices such as academia are a good route to impact. For many, having children is a core part of leading a good life for them and (to take a very narrow lens) is instrumentally important to their productivity.
So I find those claims false, and find it very odd to describe those choices as "concession[s] to other selfish or altruistic goals". We shouldn't be implying "maximising your impact (and by implication being a good EA) is hard to ma... (read more)

Thanks for this comment, I made minor edits to that point clarifying that academia can be good or bad.

First off, I think we should separate concerns of truth from those of offputtingness, and be clear about which is which. With that said, I think "concession to other selfish or altruistic goals" is true to the best of my knowledge. Here's a version of it that I think about, which is still true but probably less offputting, and could have been substituted for that bullet point if I were more careful and less concise:

When your goal is to maximize impact, but... (read more)

I know lots of people who are incredibly impactful and are parents and/or work in academia

This doesn't seem like much evidence one way or the other unless you can directly observe or infer the counterfactual. 

If you take OP at face value, you're traversing at least 6-7 OOMs within choices that can be made by the same individual, so it seems very plausible that someone can be observed to be extremely impactful on an absolute scale while still operating at only 10% of their personal best, or less. (also there is variance in impact across people for hard-to-control reasons, for example intelligence or nationality).

Free to attend: Cambridge Conference on Catastrophic Risk (19-21 April)

Online participation is open. It's a hybrid format so some people will be there in person - but we're at in-person capacity.

Early Reflections and Resources on the Russian Invasion of Ukraine

This is a really useful set of thoughts and further readings, thanks for sharing Seth. I especially liked your points on policy windows, and on cross-fertilisation between the nuclear and GCR fields.

Shared the Twitter thread below on Feb 28, crossposting as similar reflections:

A few thoughts on global catastrophic risk a few days into Putin's invasion - would be very interested in others' reflections!

  • Nukes. Still the most urgent risk. Probably the most dangerous situation since 1983? Crucial to avoid misperceptions/mistakes 
  • Networks. War is having
... (read more)
New Nuclear Security Grantmaking Programme at Longview Philanthropy

Really excellent that you spotted this gap in the philanthropic market and moved in to fill it. Well done! Hope you hire someone excellent.

Concerns with the Wellbeing of Future Generations Bill

Just briefly on (4) - Govts of all parties oppose all PMBs as a matter of course, especially ones from the Lords. Very few actually become law (see eg here). This pattern is less due to the specifics of any particular Bill, and more about govt control of the parliamentary timetable, and govts' ability to claim credit for legislation. One's options if one comes top of the PMB ballot are to 1) try and get the Govt to support it, or 2) use it as a campaigning device (or, I guess, 3) try both).

I'm not so sure that the ideas in this Bill couldn't get picked up by Co... (read more)

8 · John_Myers · 5mo
Thank you Haydn. I agree about the base rate for PMBs. They can get attention from the Government – in particular, as I think you know, this one was designed by us to be acceptable to the Government, and the Secretary of State said that it was ‘cracking’ and that he was keen to ‘steal all of the ideas in it’. https://bills.parliament.uk/bills/3047 So I recognize that general campaigning is also a valid use of PMBs in general. As you know I’m very keen to work collaboratively on possible approaches to getting more longtermist perspectives in government.
Concerns with the Wellbeing of Future Generations Bill

Hi Larks and John, Thanks for sharing this with me ahead of posting. 

Five notes for readers.

1.

First, this Bill isn't an EA Bill. This is recognised a bit in the post, but I really want to underline it. It's led by Lord John Bird and his office, and supported by Today for Tomorrow. It mostly builds on the Welsh Commissioner for Future Generations. None of them are 'EA'. There are about 3-4 supporters that could plausibly be labelled EA, out of ~100 institutional supporters.

2.

Second, on the merits of the Bill - to add a little to Sam's excellent ov... (read more)

4 · John_Myers · 5mo
Thank you Haydn! These are very constructive comments. To respond briefly:
3. I am not primarily focused on what the Bill's intentions are, but on the overall likely outcomes from its presentation and enactment. In our view there is a substantial chance that the Bill as currently drafted would overall damage welfare if passed in its current form. We agree the chance of that happening is almost zero, but there may be future Bills. I have a weaker view on the effect of the Bill as a campaigning tool, but I still have substantial concerns given the presence of a number of potentially harmful but potentially popular provisions in it.
4. I too would prefer a non-partisan approach. However, we think there is negligible chance of this Government supporting it – I understand the Government has indicated it does not support the Bill – and only a small chance of a future Conservative government supporting it. I think the Bill would need to be drastically revised to give it a good chance of support by a Conservative government.
5. As we said, I agree that something in the direction of the ‘three lines of defence’ approach to risks could be very helpful, and that it would be very helpful to work on future sector-specific approaches in other sectors and better forecasting and risk identification in general.