This is a very welcome contribution to a professional field (i.e., the GCBR-focused parts of the pandemic preparedness and biosecurity space) that can often feel opaque and poorly coordinated — sincere thanks to Max and everyone else who helped make it!
Thanks for sharing this and congrats on a very longstanding research effort!
Are you able to provide more details on the backgrounds of the “biorisk experts”? For example, the kinds of organisations they work for, their seniority (e.g., years of professional experience), or their prior engagement with global catastrophic biological risks specifically (as opposed to pandemic preparedness or biorisk management more broadly).
I ask because I’m wondering about potential selection effects with respect to level of concern about catastrophe/extinction from biology. Wi...
Hi!
This is Joshua, I work on the biosecurity program at the philanthropic advisor Effective Giving. In 2021, we recommended two grants to UNIDIR's work on biological risks, e.g. this report on stakeholder perspectives on the Biological Weapons Convention, which you might find interesting.
To be clear, I definitely think there's a spectrum of attitudes towards security, centralisation, and other features of hazard databases, so I think you're pointing to an important area of meaningful substantive disagreement!
Yes, benchtop devices have significant ramifications!
But that's consistent with my comment, which was just meant to emphasise that I don't read Diggans and Leproust as advocating for a fully "public" hazard database, as slg's comment could be read to imply.
Hi slg — great point about synthesis screening being a very concrete example where approaches to security can make a big difference.
One quibble I have: Your hyperlink seems to suggest that Diggans and Leproust advocate for a fully “public” database of annotated hazard sequences. But I think it’s worth noting that although they do use the phrase “publicly available” a couple of times, they also pretty explicitly discuss the idea of having such a database be accessible to synthesis providers only, which is a much smaller set and seems to carry significantly ...
Hi Nadia, thanks for writing this post! It's a thorny topic, and I think people are doing the field a real service when they take the time to write about problems as they see them –– I particularly appreciate that you wrote candidly about challenges involving influential funders.
Infohazards truly are a wicked problem, with lots of very compelling arguments pushing in different directions (hence the lack of consensus you alluded to), and it's frustratingly difficult to devise sound solutions. But I think infohazards are just one of many factors contributing...
Thanks for doing this survey and sharing the results, super interesting!
Regarding
maybe partly because people who have inside views were incentivised to respond, because it’s cool to say you have inside views or something
Yes, I definitely think that there's a lot of potential for social desirability bias here! And I think this can happen even if the responses are anonymous, as people might avoid the cognitive dissonance that comes with admitting to "not having an inside view." One might even go as far as framing the results as "Who do people claim to defer to?"
Hi Elika,
Thanks for writing this, great stuff!
I would probably frame some things a bit differently (more below), but I think you raise some solid points, and I definitely support the general call for nuanced discussion.
I have a personal anecdote that really speaks to your "do your homework" point. When doing research for our 2021 article on dual-use risks (thanks for referencing it!), I was really excited about our argument for implementing "dual-use evaluation throughout the research life cycle, including the conception, funding, conduct, an...
Just echoing the experience of "it's been a pretty humbling experience to read more of the literature"; biosecurity policy has a long history of good ideas and nuanced discussions. On US gain-of-function policy in particular, I found myself humbled by the 2015 article Gain-of-function experiments: time for a real debate, an adversarial collaboration between researchers involved in controversial viral gain-of-function work and biosecurity professionals who had argued such work should face more scrutiny. It's interesting to see where the contours of the debate have shifted in the past 7+ years — and how much they haven't.
Hi, thanks for your response and for the context about general university-related processes.
I'm pretty confident that if you ask almost anyone who has worked for FHI within the past two years, their overall account will match mine, even if they would frame some details differently. In my time there, I did not hear anyone present a significantly different version of events. (I don't just mean this rhetorically – it'd be great to hear from anyone else at FHI here!)
I'll just respond with some context to specific parts:
...First, the entire Oxford University had a
Yes! That’s what I meant to refer to with this: “Two of the senior researchers occasionally organize seminar discussions, which I think are popular.”
I’m glad they’re happening more regularly now! I’ll make an edit to make that clearer.
Hi,
Is it because the FHI lacked funding, or that it didn't manage to hire people[?]
My impression, as an employee who was never privy to much information beyond what I could gather from conversations with other researchers and a few occasional emails from the University: One of the biggest problems for FHI is that it has a poor relationship with the Department of Philosophy, its formal "home" within Oxford University. This breakdown of relations has meant that FHI has not been 'allowed' to hire since sometime in 2021 (I think I was among the last new peopl...
Hi!
I don't know the answer to your specific question, but can perhaps provide some circumstantial context, as someone who was employed at the Future of Humanity Institute (i.e., the Oxford University entity, not the Foundation you're asking about) between October 2021 and January 2023. I was full-time for about 3 months of that time and part-time for the rest, but worked out of the FHI offices the whole time.
In my ~1 year at FHI, I never heard anything about the Foundation, nor did I interact with it in any way.
More generally, if you are trying to ge...
“Open Phil posts all of its grants with some explanation.”
I do not think that this is accurate; I believe that some of their grants are not posted to their website.
Thanks, you're correct — they say they publish almost every grant and provide detailed explanations for most, but that some have less detail, and some are delayed due to sensitivity or the risk of undermining the purpose of the grant. See here, and especially the bolded points in the quote below:
"...we have stopped the practice of writing in detail about every grant that we make. We plan to continue to write in detail about many of them. We will try to focus on those that are especially representative of our thinking and strategy, or otherwise seem like they would b...
Oh, and I also quite liked your section on 'the balance of positive vs negative value in current lives'!
Thanks for writing this!
One thing I really agreed with:
For instance, I’m worried people will feel bait-and-switched if they get into EA via WWOTF then do an 80,000 Hours call or hang out around their EA university group and realize most people think AI risk is the biggest longtermist priority, many thinking this by a large margin.
I particularly appreciate your point about avoiding 'bait-and-switch' dynamics. I appreciate that it's important to build broad support for a movement, but I ultimately think that it's crucial to be transparent about what th...
Hey Joshua, appreciate you sharing your thoughts (strong upvoted)! I think we actually agree about the effects of sharing numerical credences more than you might think, but disagree about the solution.
...But it also causes people to anchor on what may ultimately be an extremely shaky and speculative guess, hindering further independent analysis and leading to long citation trails. For example, I think the "1-in-6" estimate from The Precipice may have led to premature anchoring on that figure, and likely is relied upon too much relative to how speculative it n
In a nutshell: I agree that caring about the future doesn't mean ignoring the present. But it does mean deprioritising the present, and this comes with very real costs that we should be transparent about.
Thanks for sharing this!
I think this quote from Piper is worth highlighting:
(...) if the shift to longtermism meant that effective altruists would stop helping the people of the present, and would instead put all their money and energy into projects meant to help the distant future, it would be doing an obvious and immediate harm. That would make it hard to be sure EA was a good thing overall, even to someone like me who shares its key assumptions.
I broadly agree with this, except I think the first "if" should be replaced with "insofar as." Even as s...
Thanks for your reply.
My concern is not that the numbers don't work out. My concern is that the "$100m/0.01%" figure is not an estimate of how cost-effective 'general x-risk prevention' actually is in the way that this post implies.
It's not an empirical estimate, it's a proposed funding threshold, i.e. an answer to Linch's question "How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?” But saying that we should fund interventions at that level of cost-effectiveness doesn't say whether there are many (or any) such inter...
Thanks again for writing this. I just wanted to flag a potential issue with the $125 to $1,250 per human-life-equivalent-saved figure for ‘x-risk prevention.’
I think that figure is based on a willingness-to-pay proposal that already assumes some kind of longtermism.
You base the range on Linch’s proposal of aiming to reduce x-risk by 0.01% per $100m-$1bn. As far as I can tell, these figures are based on a rough proposal of what we should be willing to pay for existential risk reduction: Linch refers to this post on “How many EA 2021 $s would you ...
Thanks for writing this! I think your point is crucial and too often missed or misrepresented in discussions on this.
A related key point is that the best approach to mitigating catastrophic/existential risks depends heavily on whether one comes at it from a longtermist angle or not. For example, this choice determines how compelling it is to focus on strategies or interventions for civilisational resilience and recovery.
To take the example of biosecurity: In some (but not all) cases, interventions to prevent catastrophe from biological risks look qui...
I wholeheartedly agree with Holly Morgan here! Thank you for writing this up and for sharing your personal context and perspective in a nuanced way.
Thanks for writing this, Linch! I’m starting a job in grantmaking and found this interesting and helpful.
+1. One concrete application: Offer donation options instead of generous stipends as compensation for speaking engagements.
I worry that it'd feel pretty fake for people who actually care about counterfactual impact. Money goes from EA sources to EA sources both ways.
Hi EKillian! Could you provide some more context on what you're interested in? Anyone will be welcome to write a submission. If you're more interested in helping others with their work, you could say a bit more about that here in the comments, and then perhaps someone will reach out.
In terms of serving as a judge in the competition, we haven't finalised the process for selecting judges – but it would be helpful if you could send a DM with some more information.
I appreciate hearing that and I've appreciated this brief exchange.
And I'm glad to hear that you're giving the book a try. I expect that you will disagree with some of Farmer's approaches – as I did – but I hope you will enjoy it nonetheless.
In general, I think the more 'activist' approach can be especially useful for (1) arguing, normatively, for what kind of world we want to be in and (2) prompting people to think harder about alternative ways of getting there – this is especially useful if some stakeholders haven't fully appreciated how bad existing options are for certain parties. Note that neither of these ways of contributing requires a concrete solution in order to create value.
Also, to add:
...To be clear, I think we both need the more 'activist' approach of rejecting options that don
Thanks for this, I think you articulate your point well, and I understand what you're saying.
It seems that we disagree, here:
It seems to me that the world would be a much better place if, whenever someone refused to accept either horn of a moral or political dilemma, they were expected to provide an explicit answer to the question "What would you do instead?"
My point is exactly that I don't think that a world with a very strong version of this norm is necessarily better. Of course, I agree that it is best if you can propose a feasible alternative and I thi...
Thanks for your reply.
the very act of critiquing both 'horns' is what prompts us to find a third way, meaning that such a critique has a longer-term value, even in the absence of a provided short-term solution.
Yeah, this seems plausible to me, and is something I hadn't fully appreciated when I wrote my previous comment.
As a side note, I'm not familiar with Farmer's work, but this exchange (and Gavin's post) has motivated me to read Mountains Beyond Mountains.
Thanks for writing this, Gavin.
Reading (well, listening to) Mountains Beyond Mountains, I was deeply inspired by Farmer. I think a lot of people in the EA community would benefit from giving the book a chance.
Sure, I sometimes found his rejection of an explicit cost-effectiveness-based approach very frustrating, and it seemed (and still seems) that his strategy was at times poorly aligned with the goal of saving as many lives as possible. But it also taught me the importance of sometimes putting your foot down and insisting that none of the option...
that we have to find an alternative if none of the present solutions meet a certain standard.
Insisting that we have to find an alternative seems justified only insofar as there are reasons for expecting alternatives to exist. I agree that, because some causes or interventions are hard to quantify, these reasons may be provided by things other than explicit cost-effectiveness analyses. But the fact that a certain standard hasn't been met doesn't seem, in itself, like one of these reasons.
Separately, one also needs to consider the costs of having a social no...
In case you (or anyone else) is interested, there'll be a panel discussion with a few biosecurity experts this Thursday: 2022 Next Generation for Biosecurity Competition: How can modern science help develop effective verification protocols to strengthen the Biological Weapons Convention? A Conversation with the Experts.
Hi James!
Good question. That estimate was for our entire process of producing the paper, including any relevant research. We wrote on a topic that somewhat overlapped with areas we already knew a bit about, so I can imagine there'd be extra hours if you write on something you're less familiar with. Also, I generally expect that the time investment might vary a lot between groups, so I wouldn't put too much weight on my rough estimate. Cheers!
Just here to say that this bit is simultaneously wonderfully hilarious and extraordinarily astute:
...The first is that I think infinite ethics punctures a certain type of utilitarian dream. It’s a dream I associate with the utilitarian friend quoted above (though over time he’s become much more of a nihilist), and with various others. In my head (content warning: caricature), it’s the dream of hitching yourself to some simple ideas – e.g., expected utility theory, totalism in population ethics, maybe hedonism about well-being — and riding them wherever
Thanks for your comment, much appreciated!
I wholeheartedly agree that taking action to do something is often the most important, and most desperately lacking, component. Why is it lacking?
One potential cause could be if many people agree with a critical take, but those people are not the ones who have a lot of influence, e.g. because decision-making power is concentrated.
Another explanation could be that there are actually many people who agree with a critical take on the direction of effective altruism and would have the ability to do somethin...
Hey Linch, thanks for this thoughtful comment!
Yeah, I agree that my examples of steering are sometimes closely related to other terms in Holden's framework, particularly equity – indeed, I have a comment about that buried deep in a footnote.
One reason this happens, I think, is that a super important concept for steering is the idea of moral uncertainty, and taking moral uncertainty seriously can imply putting greater weight on equity than you otherwise might.
I guess another reason is that I tend to assume that effective steering is, as an empiric...
As discussed in a bit more detail in this post, I'd love to see themed prizes focusing specifically on critical engagement with effective altruism. This could be very broad (e.g., "Best critique of the effective altruism movement") or more narrow (e.g., something like "Best critique of a specific assumption that is widely made in the community" or "Best writeup on how applied longtermism could go wrong").
To the next content specialist on the Forum: I'd be happy to discuss further!
Sounds good! I'll post a comment and make sure to reach out to the next content specialist. Thanks!
Thanks, Aaron, this is a great suggestion! I'll try to get around to writing a very brief post about it this weekend.
On a related note, I'd be curious to hear what you think of the idea of using EA Forum prizes for this sort of purpose? Of course, there'd have to be some more work on specifying what exactly the prize should be for, etc.
If you know who will be working on the Forum going forward, I'd love to get a sense of whether they'd be interested in doing some version of this. If so, I'd be more than happy to set up a meeting to discuss.
James, thanks for pointing this out, and thanks, Pablo, that was indeed the link I intended to use! Fixed it now.
Thanks for laying this out so clearly. One frustrating aspect of having a community comprised of so many analytic philosophy students (myself included!) is a common insistence on interpreting statements, including highly troubling ones, exactly as they may have been intended by the author, to the exclusion of anything further that readers might add, such as historical context or ways that the statement could be misunderstood or exploited for ill purposes. Another example of this is the discussion around Beckstead's (in my opinion, deeply objectionable) quo...
+1. I always assumed that the 'Open' in 'Open Philanthropy' referred to an aspiration for a greater degree of transparency than is typically seen in philanthropy, and I generally support this aspiration being shared in the wider effective altruism philanthropic space. The EA Funds are an amazingly flexible way of funding extremely valuable work – but it seems to me that this flexibility would still benefit from the scrutiny and crowd-input that becomes possible through measures like public reports.
This list is certainly profoundly not-exhaustive for me, but I'd rather post this version than spend ages thinking of a better answer and ultimately not posting anything. So, here goes:
Thanks for the context. I should note that I did not in any way intend to disparage Beckstead's personal character or motivations, which I definitely assume to be both admirable and altruistic.
As stated in my comment, I found the quote relevant for the argument from Torres that Haydn discussed in this post. I also just generally think the argument itself is worth discussing, including by considering how it might be interpreted by readers who do not have the context provided by the author's personal actions.
Happy to have a go; the "in/out of context" is a large part of the problem here. (Note that I don't think I agree with Beckstead's argument for reasons given towards the end).
(1) The thesis (198 pages of it!) is about shaping the far future, and operates on staggering timescales. Some of it, like this quote, is written in the first person, which has the effect of putting it in the present-day context, but these are at their heart philosophical arguments abstracted from time and space. This is a thing philosophers do.
If I were to apply the argument to the 12t...
Thanks for making this list, Tessa – so much that I have yet to read! And thanks for including our article :)
I thought I might suggest a few other readings on vaccine development:
Also, I think you omitted a super important 80k podcast: ...
Thanks for writing this, I found it interesting!