All of jtm's Comments + Replies

jtm
3mo

Thanks for writing this, I found it interesting!

This is a very welcome contribution to a professional field (i.e., the GCBR-focused parts of the pandemic preparedness and biosecurity space) that can often feel opaque and poorly coordinated — sincere thanks to Max and everyone else who helped make it!

Thanks for sharing this and congrats on a very longstanding research effort!

Are you able to provide more details on the backgrounds of the “biorisk experts”? For example, the kinds of organisations they work for, their seniority (e.g., years of professional experience), or their prior engagement with global catastrophic biological risks specifically (as opposed to pandemic preparedness or biorisk management more broadly).

I ask because I’m wondering about potential selection effects with respect to level of concern about catastrophe/extinction from biology. Wi... (read more)

Bridget_Williams
6mo
Hi Joshua! Thanks for the kind words and for this question. For confidentiality reasons, the team can’t provide details of the institutions and roles of XPT participants. However, because several of our recruitment channels were EA-adjacent or directly related to existential risks (e.g. we recruited some experts via a post on the EA Forum and reached out to some organizations working on x-risks), our prior is that the XPT biorisk experts are more concerned about catastrophic and existential risks than, say, a sample of attendees at the Global Health Security Conference would be. So, you’re right that this group shouldn’t be taken as a representative sample of biosecurity or biorisk experts. It is also unclear to us what that sampling frame would look like, in general. I can see this wasn’t clear in the post, so I’ve edited/added some text to the ‘Participants’ and concluding sections of the post. Edits (bold is new text):
  • Participants section: "Experts were recruited through advertising and outreach to relevant organizations working on existential risk, and relevant academic departments and research labs. … As this study partially recruited experts based on the study of existential and catastrophic risks, this participant group shouldn’t be taken as a representative sample of people who may be considered biorisk experts."
  • Concluding section: "It’s also worth noting that for some questions, there were only a small number of expert respondents, and even the full group of biorisk experts is unlikely to be representative of the field, given we aimed to recruit some experts with an interest in existential risk."
jtm
7mo

Hi! 


This is Joshua, I work on the biosecurity program at the philanthropic advisor Effective Giving. In 2021, we recommended two grants to UNIDIR's work on biological risks, e.g. this report on stakeholder perspectives on the Biological Weapons Convention, which you might find interesting.

Jessica Wen
7mo
Hi Joshua, Thanks for the information and the link to the report! Glad to hear that they are a part of the EA extended universe :)

To be clear, I definitely think there's a spectrum of attitudes towards security, centralisation, and other features of hazard databases, so I think you're pointing to an important area of substantive disagreement!

Yes, benchtop devices have significant ramifications! 

  • Agreed, storing the database on-device does sound much harder to secure than some kind of distributed storage. Though, I can imagine that some customers will demand airgapped on-device solutions, where this challenge could present itself anyway.
  • Agreed, sending exact synthesis orders from devices to screeners seems undesirable/unviable, for a host of reasons. 

But that's consistent with my comment, which was just meant to emphasise that I don't read Diggans and Leproust as advocating for a fully "public" hazard database, as slg's comment could be read to imply.

CronoDAS
7mo
If your benchtop device user can modify the hardware to attempt to defeat the screening mechanism, the problem becomes orders of magnitude harder. I imagine that making a DNA-sequence-generating device that can't be modified to make smallpox, even if it's in the middle of Pyongyang and the malicious user is the North Korean government, is an essentially unsolvable problem; if nothing else, they can try to reverse engineer the device and build a similar one without any screening mechanism at all.

Hi slg — great point about synthesis screening being a very concrete example where approaches to security can make a big difference.

One quibble I have: Your hyperlink seems to suggest that Diggans and Leproust advocate for a fully “public” database of annotated hazard sequences. But I think it’s worth noting that although they do use the phrase “publicly available” a couple of times, they also pretty explicitly discuss the idea of having such a database be accessible to synthesis providers only, which is a much smaller set and seems to carry significantly ... (read more)

slg
7mo
That's a good pointer, thanks! I'll drop the reference to Diggans and Leproust for now.
Jeff Kaufman
7mo
I think benchtop synthesizers would change this quite a bit? Because then you need one of:
  • Ship the database on every benchtop, where it is at much higher risk of compromise.
  • Have benchtops send each synthesis request out for screening.
  • Something like SecureDNA's approach, where the benchtop sends the order out for screening in a format that does not disclose its contents (sketched below).
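To make that third option concrete, here is a minimal sketch of the general idea: the device digests fixed-length windows of the order and sends only the digests out for comparison against a hazard list. Everything here (the names, the window length, the empty hazard set) is invented for illustration, and this is not SecureDNA's actual protocol; real designs use stronger cryptographic machinery, partly because a bare hash over the small space of short DNA windows can be brute-forced.

```python
import hashlib

WINDOW = 42  # illustrative screening-window length, in nucleotides

# Hypothetical digests of known hazard windows, held by the screener
# rather than shipped to every device.
HAZARD_DIGESTS: set[str] = set()

def windows(order: str, k: int = WINDOW):
    """Yield every length-k subsequence of a synthesis order."""
    for i in range(max(len(order) - k + 1, 0)):
        yield order[i:i + k]

def digest(fragment: str) -> str:
    """Digest a window so the screener never sees raw sequence.
    A bare SHA-256 over short DNA windows is brute-forceable, which
    is one reason real protocols use keyed/oblivious constructions."""
    return hashlib.sha256(fragment.encode()).hexdigest()

def device_prepare(order: str) -> list[str]:
    """Device side: only digests ever leave the benchtop."""
    return [digest(w) for w in windows(order)]

def screener_check(digests: list[str]) -> bool:
    """Screener side: flag the order if any window matches a hazard."""
    return any(d in HAZARD_DIGESTS for d in digests)
```

The point of this split is where the comparison happens: the hazard list never ships on-device, so compromising a benchtop reveals neither the database nor, to the screener, the raw content of legitimate orders. Though, as noted above, an air-gapped deployment would force the database back onto the device and reintroduce the first problem.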
jtm
8mo

Hi Nadia, thanks for writing this post! It's a thorny topic, and I think people are doing the field a real service when they take the time to write about problems as they see them –– I particularly appreciate that you wrote candidly about challenges involving influential funders.

Infohazards truly are a wicked problem, with lots of very compelling arguments pushing in different directions (hence the lack of consensus you alluded to), and it's frustratingly difficult to devise sound solutions. But I think infohazards are just one of many factors contributing... (read more)

Thanks for researching and writing this!

Chloe Lee
10mo
Thank you so much! :) 

Thanks for doing this survey and sharing the results, super interesting!

Regarding

maybe partly because people who have inside views were incentivised to respond, because it’s cool to say you have inside views or something

Yes, I definitely think that there's a lot of potential for social desirability bias here! And I think this can happen even if the responses are anonymous, as people might avoid the cognitive dissonance that comes with admitting to "not having an inside view." One might even go so far as to frame the results as "Who do people claim to defer to?"

jtm
1y

Hi Elika, 

Thanks for writing this, great stuff!  

I would probably frame some things a bit differently (more below), but I think you raise some solid points, and I definitely support the general call for nuanced discussion.

I have a personal anecdote that really speaks to your "do your homework" point. When doing research for our 2021 article on dual-use risks (thanks for referencing it!), I was really excited about our argument for implementing "dual-use evaluation throughout the research life cycle, including the conception, funding, conduct, an... (read more)

Elika
1y
Thanks!! Strongly agree with your points on the intrinsic value of understanding and being nuanced in this space, I just didn't have the words to frame it as well as you put it :)

Just echoing the experience of "it's been a pretty humbling experience to read more of the literature"; biosecurity policy has a long history of good ideas and nuanced discussions. On US gain-of-function policy in particular, I found myself particularly humbled by the 2015 article Gain-of-function experiments: time for a real debate, an adversarial collaboration between researchers involved in controversial viral gain-of-function work and biosecurity professionals who had argued such work should face more scrutiny. It's interesting to see where the contours of the debate have changed and how much they haven't changed in the past 7+ years.

Hi, thanks for your response and for the context about general university-related processes.

I'm pretty confident that if you ask almost anyone who has worked for FHI within the past two years, their overall account will match mine, even if they would frame some details differently. In my time there, I did not hear anyone present a significantly different version of events. (I don't just mean this rhetorically – it'd be great to hear from anyone else at FHI here!)

I'll just respond with some context to specific parts:

First, the entire Oxford University had a

... (read more)

Yes! That’s what I meant to refer to with this: “Two of the senior researchers occasionally organize seminar discussions, which I think are popular.”

I’m glad they’re happening more regularly now! I’ll make an edit to make that clearer.

GideonF
1y
Yeah, Anders and Toby organise it, and the sessions have between 4-15 attendees. Plus they have had external people run sessions recently (I ran one, SJ Beard from CSER ran one).
jtm
1y

Hi,


Is it because the FHI lacked funding, or that it didn't manage to hire people[?]


My impression, as an employee who was never privy to much information beyond what I could gather from conversations with other researchers and a few occasional emails from the University: One of the biggest problems for FHI is that it has a poor relationship with the Department of Philosophy, its formal "home" within Oxford University. This breakdown of relations has meant that FHI has not been 'allowed' to hire since sometime in 2021 (I think I was among the last new peopl... (read more)

JWS
1y
Thank you for your initial answer and response jtm, I've found both very valuable to read. As always, it'd be really interesting to know more but I'll leave my questions here and not pry further. Thanks for sharing your insight, and I hope you're enjoying your new role and doing good work there :)
Answer by jtm, Feb 22, 2023

Hi!

I don't know the answer to your specific question, but can perhaps provide some circumstantial context, as someone who was employed at the Future of Humanity Institute (i.e., the Oxford University entity, not the Foundation you're asking about) between October 2021 and January 2023. I was full-time for about 3 months of that time and part-time for the rest, but worked out of the FHI offices the whole time. 

In my ~1 year at FHI, I never heard anything about the Foundation, nor did I interact with it in any way.

More generally, if you are trying to ge... (read more)

GideonF
1y
Big picture salons (basically a seminar) happen every week
JWS
1y
Is this widely known? It certainly wasn't to me and actually made me do a Forum double-take. Isn't it a bad thing that a major organisation for developing EA ideas has essentially stopped working? Sorry if this is old news to other Forum-goers, but this was certainly an update for me. Is it because the FHI lacked funding, or that it didn't manage to hire people, or that people found better alternatives to their FHI roles?
jtm
1y

“Open Phil posts all of its grants with some explanation.”

I do not think that this is accurate; I believe that some of their grants are not posted to their website.

Thanks, you're correct - they say they publish almost every grant, and have detailed explanations of most, but that some have less detail, and some are delayed due to sensitivity or to avoid undermining the purpose of the grant. See here, and especially the bolded points in the quote below:
 
"...we have stopped the practice of writing in detail about every grant that we make. We plan to continue to write in detail about many of them. We will try to focus on those that are especially representative of our thinking and strategy, or otherwise seem like they would b... (read more)

Thank you for writing this, I think it's very important.

Oh, and I also quite liked your section on 'the balance of positive vs negative value in current lives'!

jtm
2y

Thanks for writing this!

One thing I really agreed with:

 For instance, I’m worried people will feel bait-and-switched if they get into EA via WWOTF then do an 80,000 Hours call or hang out around their EA university group and realize most people think AI risk is the biggest longtermist priority, many thinking this by a large margin.

I particularly appreciate your point about avoiding 'bait-and-switch' dynamics. I appreciate that it's important to build broad support for a movement, but I ultimately think that it's crucial to be transparent about what th... (read more)

Hey Joshua, appreciate you sharing your thoughts (strong upvoted)! I think we actually agree about the effects of sharing numerical credences more than you might think, but disagree about the solution.

But it also causes people to anchor on what may ultimately be an extremely shaky and speculative guess, hindering further independent analysis and leading to long citation trails. For example, I think the "1-in-6" estimate from The Precipice may have led to premature anchoring on that figure, and likely is relied upon too much relative to how speculative it n

... (read more)
jtm
2y

In a nutshell: I agree that caring about the future doesn't mean ignoring the present. But it does mean deprioritising the present, and this comes with very real costs that we should be transparent about.

jtm
2y

Thanks for sharing this!

I think this quote from Piper is worth highlighting:

(...) if the shift to longtermism meant that effective altruists would stop helping the people of the present, and would instead put all their money and energy into projects meant to help the distant future, it would be doing an obvious and immediate harm. That would make it hard to be sure EA was a good thing overall, even to someone like me who shares its key assumptions.


I broadly agree with this, except I think the first "if" should be replaced with "insofar as."  Even as s... (read more)

Thanks for your reply.

My concern is not that the numbers don't work out. My concern is that the "$100m/0.01%" figure is not an estimate of how cost-effective 'general x-risk prevention' actually is in the way that this post implies.

It's not an empirical estimate, it's a proposed funding threshold, i.e. an answer to Linch's question "How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?” But saying that we should fund interventions at that level of cost-effectiveness doesn't say whether there are many (or any) such inter... (read more)

MichaelStJules
2y
I agree with your points. I was responding to this point, but should have quoted it to be clearer: "But I think that the willingness to pay from Linch is based on accounting for future lives, rather than the kind of currently-alive-human-life-equivalent-saved figure that you’re looking for." I think the numbers can work out without considering future lives or at least anything other than deaths.
jtm
2y

Thanks again for writing this. I just wanted to flag a potential issue with the $125 to $1,250 per human-life-equivalent-saved figure for ‘x-risk prevention.’ 

I think that figure is based on a willingness-to-pay proposal that already assumes some kind of longtermism.
 

You base the range on Linch’s proposal of aiming to reduce x-risk by 0.01% per $100m-$1bn. As far as I can tell, these figures are based on a rough proposal of what we should be willing to pay for existential risk reduction: Linch refers to this post on “How many EA 2021 $s would you ... (read more)

MichaelStJules
2y
I think the numbers work out assuming x-risk means (almost) everyone being killed and the percent reduction is absolute (percentage point), not relative: $100,000,000 / (0.01% * 8 billion people) = $125/person
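As a quick sanity check, here is the same arithmetic as a short sketch, extended to the $1bn upper bound, which yields the $1,250 end of the range discussed above (the inputs are Linch's proposed willingness-to-pay figures, not empirical estimates):

```python
costs = [100e6, 1e9]     # Linch's $100m-$1bn willingness-to-pay range
risk_reduction = 0.0001  # 0.01% absolute (percentage-point) reduction
population = 8e9         # roughly everyone alive today

for cost in costs:
    per_life = cost / (risk_reduction * population)
    print(f"${per_life:,.0f} per human-life-equivalent saved")
# prints $125 and $1,250, matching the range quoted in the thread
```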
elifland
2y
I agree! I bolded the “rough” in the header because I didn’t want people to take the numbers too seriously, but agree that probably wasn’t enough. I tried to add a caption to the table before posting but couldn’t figure out how; is it possible / if so, how do I do it?
Linch
2y
I think the understanding is based on how many $$s the longtermist/x-risk portion of EAs have access to, and then trying to rationally allocate resources according to that constraint. I'm not entirely sure what you mean by "accounting for future lives," but yes, there's an implicit assumption that under no realistic ranges of empirical uncertainty would it make sense to e.g. donate to AMF over longtermist interventions.

A moderate penalty to my numbers (from a presentist lens) is that at least some of the interventions I'm most excited about on the margin are from a civilizational resilience/recovery angle. However, I don't think this is a large effectiveness penalty, since many other people are similarly or much more excited on the margin about AI risk interventions (which have much more of the property that either approximately everybody dies or approximately no one dies).

So, I don't think elifland's analysis here is clearly methodologically wrong. Even though my numbers (and other analyses like mine) were based on the assumption that longtermist $$s were used for longtermist goals, it could still be the case that they are more effective for preventing deaths of existing people than existing global health interventions are. At least to first order, it should not be that surprising. That is, global health interventions were chosen under the constraint of being among the first existing interventions with a large evidential base, whereas global catastrophic risk and existential-risk-reducing interventions were chosen from (among others) the basis of dialing back ambiguity aversion and weirdness aversion to close to zero.

I think the main question/crux is how much you want to "penalize for (lack of) rigor." GiveWell-style analyses have years of dedicated work put into them. My gut pulls largely grew out of an afternoon of relatively clear thinking (and then maybe a few more days of significantly-lower-quality thinking and conversations, etc, that adjusted my numbers somewhat but n
jtm
2y

Thanks for writing this! I think your point is crucial and too often missed or misrepresented in discussions on this.

A related key point is that the best approach to mitigating catastrophic/existential risks depends heavily on whether one comes at it from a longtermist angle or not. For example, this choice determines how compelling it is to focus on strategies or interventions for civilisational resilience and recovery.

To take the example of biosecurity: In some (but not all) cases, interventions to prevent catastrophe from biological risks look qui... (read more)

jtm
2y

Thanks for taking the time to write this up!

I wholeheartedly agree with Holly Morgan here! Thank you for writing this up and for sharing your personal context and perspective in a nuanced way. 

Thanks for writing this, Linch! I’m starting a job in grantmaking and found this interesting and helpful.

jtm
2y

+1. One concrete application: Offer donation options instead of generous stipends as compensation for speaking engagements.

I worry that it'd feel pretty fake for people who actually care about counterfactual impact. Money goes from EA sources to EA sources either way.

Hi EKillian! Could you provide some more context on what you're interested in? Anyone will be welcome to write a submission. If you're more interested in helping others with their work, you could say a bit more about that here in the comments, and then perhaps someone will reach out.

In terms of serving as a judge in the competition, we haven't finalised the process for selecting judges – but it would be helpful if you could DM me with some more information.

I appreciate hearing that and I've appreciated this brief exchange.

And I'm glad to hear that you're giving the book a try. I expect that you will disagree with some of Farmer's approaches – as I did – but I hope you will enjoy it nonetheless.

In general, I think the more 'activist' approach can be especially useful for (1) arguing, normatively, for what kind of world we want to be in and (2) prompting people to think harder about alternative ways of getting there – this is especially useful if some stakeholders haven't fully appreciated how bad existing options are for certain parties. Note that neither of these ways to contribute requires concrete solutions to create some value.

Also, to add: 

To be clear, I think we both need the more 'activist' approach of rejecting options that don

... (read more)

Thanks for this, I think you articulate your point well, and I understand what you're saying.

It seems that we disagree here:

It seems to me that the world would be a much better place if, whenever someone refused to accept either horn of a moral or political dilemma, they were expected to provide an explicit answer to the question "What would you do instead?"

My point is exactly that I don't think that a world with a very strong version of this norm is necessarily better. Of course, I agree that it is best if you can propose a feasible alternative and I thi... (read more)

Thanks for your reply.

the very act of critiquing both 'horns' is what prompts us to find a third way, meaning that such a critique has a longer-term value, even in the absence of a provided short-term solution.

Yeah, this seems plausible to me, and is something I hadn't fully appreciated when I wrote my previous comment.

As a side note, I'm not familiar with Farmer's work, but this exchange (and Gavin's post) has motivated me to read Mountains Beyond Mountains.

jtm
2y
In general, I think the more 'activist' approach can be especially useful for (1) arguing, normatively, for what kind of world we want to be in and (2) prompting people to think harder about alternative ways of getting there – this is especially useful if some stakeholders haven't fully appreciated how bad existing options are for certain parties. Note that neither of these ways to contribute requires concrete solutions to create some value. Also, to add: For example, we both need advocates to argue that it's outrageous and unacceptable how the scarcity of funds allocated towards global poverty leaves so many without enough, as well as GiveWell-style optimisers to figure out how to do the most with what we currently have. In a nutshell: Maximise subject to given constraints, and push to relax those constraints.
jtm
2y

Thanks for writing this, Gavin.

Reading (well, listening to) Mountains Beyond Mountains, I was deeply inspired by Farmer. I think a lot of people in the EA community would benefit from giving the book a chance.

Sure, I sometimes found his rejection of an explicit cost-effectiveness-based approach very frustrating, and it seemed (and still seems) that his strategy was at times poorly aligned with the goal of saving as many lives as possible. But it also taught me the importance of sometimes putting your foot down and insisting that none of the option... (read more)

that we have to find an alternative if none of the present solutions meet a certain standard.

Insisting that we have to find an alternative seems justified only insofar as there are reasons for expecting alternatives to exist. I agree that, because some causes or interventions are hard to quantify, these reasons may be provided by things other than explicit cost-effectiveness analyses. But the fact that a certain standard hasn't been met doesn't seem, in itself, like one of these reasons.

Separately, one also needs to consider the costs of having a social no... (read more)

Hi James!

Good question. That estimate was for our entire process of producing the paper, including any relevant research. We wrote on a topic that somewhat overlapped with areas we already knew a bit about, so I can imagine there'd be extra hours if you write on something you're less familiar with.  Also, I generally expect that the time investment might vary a lot between groups, so I wouldn't put too much weight on my rough estimate. Cheers!

jtm
2y
In case you (or anyone else) is interested, there'll be a panel discussion with a few biosecurity experts this Thursday: 2022 Next Generation for Biosecurity Competition: How can modern science help develop effective verification protocols to strengthen the Biological Weapons Convention? A Conversation with the Experts. 
jtm
2y

Just here to say that this bit is simultaneously wonderfully hilarious and extraordinarily astute:

The first is that I think infinite ethics punctures a certain type of utilitarian dream. It’s a dream I associate with the utilitarian friend quoted above (though over time he’s become much more of a nihilist), and with various others. In my head (content warning: caricature), it’s the dream of hitching yourself to some simple ideas – e.g., expected utility theory, totalism in population ethics, maybe hedonism about well-being — and riding them wherever

... (read more)

Thanks for your comment, much appreciated!

I wholeheartedly agree that taking action to do something is often the most important, and most desperately lacking, component. Why is it lacking? 

One potential cause could be if many people agree with a critical take, but those people are not the ones who have a lot of influence, e.g. because decision-making power is concentrated. 

Another explanation could be that there are actually many people who agree with a critical take on the direction of effective altruism and would have the ability to do somethin... (read more)

Hey Linch, thanks for this thoughtful comment!

Yeah, I agree that my examples of steering are sometimes closely related to other terms in Holden's framework, particularly equity – indeed I have a comment about that buried deep in a footnote.

One reason I think this happens is because I think a super important concept for steering is the idea of moral uncertainty, and taking moral uncertainty seriously can imply putting a greater weight on equity than you otherwise might.

I guess another reason is that I tend to assume that effective steering is, as an empiric... (read more)

As discussed in a bit more detail in this post, I'd love to see themed prizes focusing specifically on critical engagement with effective altruism. This could be very broad (e.g., "Best critique of the effective altruism movement") or more narrow (e.g., something like "Best critique of a specific assumption that is widely made in the community" or "Best writeup on how applied longtermism could go wrong"). 

To the next content specialist on the Forum: I'd be happy to discuss further!

Sounds good! I'll post a comment and make sure to reach out to the next content specialist. Thanks!

Thanks, Aaron, this is a great suggestion! I'll try to get around to writing a very brief post about it this weekend.

On a related note, I'd be curious to hear what you think of the idea of using EA Forum prizes for this sort of purpose? Of course, there'd have to be some more work on specifying what exactly the prize should be for, etc.

If you know who will be working on the Forum going forward, I'd love to get a sense of whether they'd be interested in doing some version of this. If so, I'd be more than happy to set up a meeting to discuss.

Aaron Gertler
2y
A competition for steering-type material seems like a reasonable contest theme (along the lines of the contests we had for creative writing and Wiki entries). Now that I won't be running any future events, I'm not sure what the best place to put ideas like this is. Perhaps a comment here (I imagine that future EA Forum leaders will check that thread when thinking about contests). I've also added your idea to the document I'll send our next Content Specialist, but that's a really long document, so having the idea in more places seems good! (Finally, when the next specialist is chosen, they'll probably introduce themselves on the Forum, so you could set a reminder to contact them directly with this idea later on!)

James, thanks for pointing this out, and thanks, Pablo, that was indeed the link I intended to use! Fixed it now.

jtm
2y

Thanks for laying this out so clearly. One frustrating aspect of having a community composed of so many analytic philosophy students (myself included!) is a common insistence on interpreting statements, including highly troubling ones, exactly as they may have been intended by the author, to the exclusion of anything further that readers might add, such as historical context or ways that the statement could be misunderstood or exploited for ill purposes. Another example of this is the discussion around Beckstead's (in my opinion, deeply objectionable) quo... (read more)

jtm
2y

+1. I always assumed that the 'Open' in 'Open Philanthropy' referred to an aspiration for a greater degree of transparency than is typically seen in philanthropy, and I generally support this aspiration being shared in the wider effective altruism philanthropic space. The EA Funds are an amazingly flexible way of funding extremely valuable work – but it seems to me that this flexibility would still benefit from the scrutiny and crowd-input that becomes possible through measures like public reports.

Answer by jtm, Nov 07, 2021

This list is certainly profoundly not-exhaustive for me but I'd rather post this version than spend ages thinking of a better answer and ultimately not posting anything. So, here goes:

  • Cassidy Nelson and Gregory Lewis. When I was considering applying for my current role at the Future of Humanity Institute (FHI), the fact that they were leading the biorisk team was a pretty big consideration in favour of applying. I had some reservations about (my perceived-from-afar version of) the culture at FHI, and these two people just made me really excited about worki
... (read more)

Thanks for the context. I should note that I did not in any way intend to disparage Beckstead's personal character or motivations, which I definitely assume to be both admirable and altruistic.

As stated in my comment, I found the quote relevant for the argument from Torres that Haydn discussed in this post. I also just generally think the argument itself is worth discussing, including by considering how it might be interpreted by readers who do not have the context provided by the author's personal actions.

Julia_Wise
3y
Understood!

Happy to have a go; the "in/out of context" is a large part of the problem here. (Note that I don't think I agree with Beckstead's argument for reasons given towards the end).

(1) The thesis (198 pages of it!) is about shaping the far future, and operates on staggering timescales. Some of it, like this quote, is written in the first person, which has the effect of putting it in the present-day context, but these are at their heart philosophical arguments abstracted from time and space. This is a thing philosophers do.

If I were to apply the argument to the 12t... (read more)

Thanks for making this list, Tessa – so much that I have yet to read! And thanks for including our article :)

I thought I might suggest a few other readings on vaccine development:

Also, I think you omitted a super important 80k podcast: ... (read more)

Tessa
3y
Excellent – one thing I was hoping to get from posting this was links to resources I hadn't encountered yet, so I really appreciate this.