All of Ben Stewart's Comments + Replies

Thanks for engaging! Yep, I agree with what you said - cross-pollination and interdisciplinary engagement and all that. For context, I haven't spent a lot of time looking at the Collinses' work, hence light stakes/investment for this discussion. But my impression of their work makes me skeptical that they are "highly accomplished" in any field, and I am also very surprised that they would be "thinkers [you] respect" (to borrow from Austin's comment).

In terms of their ideas, I think that hosting someone as a speaker at your conference doesn't mean that you endor... (read more)

8
Saul Munn
13h
Meta: Thanks for your response! I recognize that you are under no obligation to comment here, which makes me all the more appreciative that you're continuing the conversation. <3

I've engaged with the Collinses' content for about a minute or two in total, and with them personally for the equivalent of half an email chain and a tenth of a conversation. Interpersonally, I've found them quite friendly/reasonable people. Their shared panel at the last Manifest was one of the highest rated of the conference; multiple people came up to me to tell me that they really enjoyed it. On their actual content, I think Austin and/or Rachel have much more knowledge/takes/context — I deferred to them re: "does their content check out." Those were my reasons for inviting them back.

I'll add that there is a class of people who have strongly-worded, warped, and especially inflammatory headlines (or tweets, etc), but whose underlying perspectives/object-level views can often be much more reasonable — or at least I strongly respect the methods by which they go about their thoughts. There's a mental wince to reading one of their headlines, where in my head I go "...oh, god. Man, I know what you're trying to say, but couldn't you... I dunno, say it nicely? in a less inflammatory way, or something?" And I often find that these people are actually quite kind/nice IRL — but you read their Twitter, or something, and you think "...oh man, these are some pretty wild takes." I'm not too sure how to act in these scenarios/how to react to these types of people.

Still, the combination of [nice/bright IRL] + [high respect for Rachel & Austin's perspective on object-level things] = the Collinses probably fall into the category of "I really dislike the fact that they use clickbaity, inflammatory titles to farm engagement, but they (probably) have high-quality object-level takes and I know that they're reasonable people IRL."

I appreciate you bringing their YouTube channel to my attention, which I h...

Thanks, yeah - I'm surprised the upsides outweigh the downsides, but it's not my conference [own views]

1
Saul Munn
17h
Hi Ben! Thanks for your comment. I'm curious what you think the upsides and the downsides are?

I'll also add to what Austin said — in general, I think the strategy of [inviting a highly accomplished person in field X to a conference about field Y] is underrated as a way to cross-pollinate between fields. I think this is especially true of something like prediction markets, which by necessity apply across disciplines; prediction markets are useless absent something on which to predict. This is the main reason I'm in favor of inviting e.g. Rob Miles, Patrick McKenzie, Evan Conrad, Xander Balwit & Nico McCarty, Dwarkesh Patel, etc — many of whom don't directly work in prediction markets/forecasting (the way that e.g. Robin Hanson, Nate Silver, or Allison Duettmann do). It's pretty valuable to import intellectual diversity into the prediction market/forecasting community, as well as to export the insights of prediction markets/forecasting to other fields.

(And also, a note to both Ben & anyone else who's reading this: I'd be happy to hop on a call with anyone who'd like to talk more about any of the decisions we've made, take notes on/a recording of the call, then post the notes/recording publicly here. https://savvycal.com/saulmunn/manifest )
9
Rockwell
1d
I'd like to second Ben and make explicit the concern about platforming ideologues whose public reputation is seen as pro-eugenics.

Why do you think Simone and Malcolm Collins are good speakers for this conference? 

5
Austin
1d
Hey Ben! I'm guessing you're asking because the Collinses don't seem particularly on-topic for the conference? For Manifest, we'll typically invite a range of speakers & guests, some of whom don't have strong pre-existing connections to forecasting; perhaps they have interesting things to share from outside the realm of forecasting, or are otherwise thinkers we respect who are curious to learn more about prediction markets. (Though in this specific case, Simone and Malcolm have published a great book covering different forms of governance, which is topical to our interest in futarchy; and I believe their education nonprofit makes use of internal prediction markets for predicting student outcomes!)

How do you expect incubating for-profit orgs to differ from AIM's experience incubating charities, and what do you plan to do to execute well despite these differences?

This was great! Interesting to see the inter-expert disagreements laid out too

Nice! This is helpful, and I love the reasoning transparency. How did you get to the 80% CI? (Sorry if I missed this somewhere.)

4
JoshuaBlake
2mo
Thank you Ben! The 80% CI[1] is an output from the model. A rough outline:

1. Start with an uninformative prior on the rate of accidental pandemics.
2. Update this prior based on the number of accidental pandemics and the amount of "risky research units" we've seen; this is roughly equivalent to Laplace's rule of succession in continuous time.
3. Project forward the number of risky research units by extrapolating the exponential growth.
4. If you include the uncertainty in the rate of accidental pandemics per risky research unit, and random variation, then it turns out the number of events is a negative binomial distribution.
5. Include the most likely numbers of pandemics until their total probability is over 80%. Because this is a discrete distribution, the result is a conservative interval (i.e. it covers more than 80% probability).

For more details, here is the maths and code for the blogpost and here is a blogpost outlining the general procedure.

----------------------------------------

1. Technically a credible interval (CrI), not a confidence interval, because it's Bayesian. ↩︎
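For readers who want to see the shape of this procedure, here is a minimal sketch in Python. This is my reconstruction under a Gamma-Poisson parameterisation, with invented exposure numbers; Joshua's linked maths and code are the authoritative version.

```python
# Compact sketch of steps 1-5 above, under a Gamma-Poisson parameterisation.
# All exposure numbers are invented for illustration.
from scipy import stats

k_obs = 1                 # accidental pandemics observed so far (hypothetical)
exposure_past = 500.0     # cumulative "risky research unit"-years to date (hypothetical)
exposure_future = 2000.0  # projected unit-years, from extrapolated exponential growth

# Steps 1-2: Laplace's rule in continuous time corresponds to a Gamma(1, 0)
# prior on the event rate, updated to Gamma(k_obs + 1, exposure_past).
shape, rate = k_obs + 1, exposure_past

# Steps 3-4: marginalising Poisson counts over the Gamma posterior gives a
# negative binomial distribution for the number of future pandemics.
p = rate / (rate + exposure_future)
n_future = stats.nbinom(shape, p)

# Step 5: greedily include the most probable counts until they cover >= 80%,
# which over-covers (a conservative credible set) because counts are discrete.
by_probability = sorted(range(200), key=n_future.pmf, reverse=True)
credible, mass = [], 0.0
for n in by_probability:
    credible.append(n)
    mass += n_future.pmf(n)
    if mass >= 0.80:
        break
print(sorted(credible), round(mass, 3))
```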

The problem is you are framing these ideas as advice you're giving to others - advice that, if taken seriously, could affect something important (e.g. a job interview). If you're going to presume to advise others, you should be more confident the advice is true/helpful.

3
yanni
2mo
This is good feedback! I'll make it clearer that this is something to consider, not to do without consideration 👍

Nice post! A eudaemonic focus pairs nicely with a capabilities approach to human welfare, where we might conceive of global health and development as enabling individuals' substantive freedom to lead the lives they wish to. Ryan Briggs gives a great intro here.

I thought this was a very useful review and would strongly encourage others to read it, if they’ve engaged with the previous posts on this subject. I wouldn't have seen it without your post, so thanks! I think publishing on the forum in full (or relevant sections) would be great - though I'll leave it to the author/others to decide. 

I agree, and I got permission from Ozy to include the full text, so now it's here.

4
Rebecca
3mo
I agree that it would be cool if the whole text was on the forum; it's an extremely good analysis of the object-level situation.

I loved this. For hungry readers, Peter Godfrey-Smith's 'Other Minds' is great (so too the subsequent 'Metazoa').

Awesome work, thanks! And this model resonates with my experience getting more involved with bio over the last few years.

3
Sofya Lebedeva
3mo
Wonderful to hear that!

Yeah, though to be fair the CEA for Malawi was like that because it was LEEP's literal first campaign. I'd imagine LEEP has CEAs for all their country work which include adjustments for likelihood of success, though I don't know whether they intend to publish them any time soon.

Yeah, that makes sense; and the early research could have been heavily discounted by pessimism about a charity achieving big wins.

5
NickLaing
4mo
This is one of the reasons I don't love post-hoc cost-effectiveness assessments of successful individual campaigns and policy changes which don't take into account the probability that the (now successful) campaign might have failed - something I have seen a number of times on the lead front. For every win there might be 5, or 10, or 20 failures (which is fine). If you just zero in on the successes, then cost-effectiveness numbers look unrealistically rosy. If the initial assessment for LEEP in Malawi assessed, say, a 20% chance of success, then I think this should be factored into their final calculation; they can then update it if they realise their success rate is higher. Otherwise we end up not costing in the failed campaigns, while the successful ones appear ludicrously cost-effective.
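A toy illustration of the adjustment Nick describes (all numbers are invented; the ex-post figure is chosen only to echo the $14/DALY estimate discussed in the next comment):

```python
# Toy illustration of discounting a campaign CEA by its ex-ante chance of
# success. All numbers are invented for the example.
cost = 140_000             # total campaign cost, USD
dalys_if_success = 10_000  # DALYs averted if the campaign succeeds
p_success = 0.20           # ex-ante probability the campaign succeeds

cost_per_daly_ex_post = cost / dalys_if_success                # $14/DALY, counting only the win
cost_per_daly_ex_ante = cost / (p_success * dalys_if_success)  # $70/DALY, failures priced in
print(cost_per_daly_ex_post, cost_per_daly_ex_ante)
```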

One example I know of off the top of my head is LEEP - their CEA for their Malawi campaign found a median of $14/DALY. CE's original report on lead paint regulation suggested $156/DALY (as a central estimate, I think). That direction and magnitude of difference is pretty surprising to me. I expect it would be explicable based on the details of the different approaches/considerations, but I'd need to look into the details. Maybe a motivating story is that LEEP's Malawi campaign was surprisingly fast and effective compared to the original report's hopes?

Another is Family ... (read more)

2
NickLaing
4mo
LEEP is a pretty unusual situation in general I think, and I'm not sure it's super generalisable. If you get an easy-ish win with lead things, the cost-effectiveness can be insane (see the Bangladesh turmeric situation).

I sympathise with this view, but I think I see it in more continuous terms than ex ante vs. ex post, and maybe akin to quality. This is because even ex post, I think there would still be substantial guesswork and assumptions, and the bottom line still relies on interpretation. But the difference for ex post is how empirically informed that analysis can be, and how specific. I.e. an ex post analysis can ground estimates in data for that specific org, with that program, in that community. Ex ante analyses can also differ in quality for how empirically inform... (read more)

2
Karthik Tadepalli
4mo
Yes, I agree quality matters a lot, but I think people are universally aware of that - I just wanted to draw attention to the ex-ante/ex-post distinction, which I hadn't seen raised before. The CE approach is a good idea, because I actually think interventions changing a lot from research to implementation is a key part of why ex-ante estimates are unreliable. I don't know if both estimates are available, but it would be great if they are!

I think a similar view is found in 'Why we can't take expected value estimates literally (even when they're unbiased)'. I.e. we should have a pretty low prior that any particular intervention is above (e.g.) 10x cash transfers, but the strength and robustness of top charities' CEAs are sufficient to clear them over the bar. And most CEAs of specific interventions written up on the forum aren't compelling enough to bring the estimate all that much higher than the low prior.
I agree it'd be informative to see what 'naive' versions of top charity CEAs would be li... (read more)
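A minimal sketch of that shrinkage argument (a toy normal-normal model of my own, not GiveWell's method, with invented numbers):

```python
# Toy normal-normal Bayesian update on log10(multiples of cash transfers).
# Invented numbers: a diffuse prior centred on 1x cash, and a CEA claiming 15x.
# The posterior shows how far a weak estimate gets shrunk towards the prior.
import math

prior_mean, prior_sd = 0.0, 0.5  # log10 scale: 10x cash sits 2 SDs above the prior mean
estimate = math.log10(15)        # the CEA's headline claim

for label, est_sd in [("weak CEA", 1.0), ("robust CEA", 0.3)]:
    w = est_sd**-2 / (prior_sd**-2 + est_sd**-2)  # precision weight on the CEA
    post = w * estimate + (1 - w) * prior_mean
    print(f"{label}: posterior ~ {10**post:.1f}x cash")
# weak CEA:   ~1.7x - barely moves the prior
# robust CEA: ~7.3x - strong evidence can clear a high bar
```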

4
Karthik Tadepalli
4mo
That's an interesting separate point; I certainly agree that our prior should have low mass around 10x cash and above, and that has its own large effect. But I don't feel like I would make this point contingent on the quality of the CEA; I think even the highest-quality ex-ante CEA can't avoid these issues. Some CEAs are probably high-quality because there are real decisions attached to them (e.g. Charity Entrepreneurship's ex-ante CEAs of their prospective charities) and I don't think I would be convinced by those either.

Neat exercise with 2012 GiveWell. Does 2023 have a country breakdown? Because the main intertemporal confounder I would want to guard against is the change in country mix. I would compare 2012 to the 2023 country in which AMF had the most activity in 2012, which I don't know off the top of my head. But 3x seems reasonable to me.

I weakly agree with the claim that the offense/defense balance is not a useful way to project the implications of AI. However, I disagree strongly with how the post got there. Considering only cyber-security and per-capita death rate is not a sufficient basis for the claim that there is "little historical evidence for large changes in the O/D balance, even in response to technological revolutions."

There are good examples of technology greatly shifting the nature of war: castles favoured defense before being negated by cannons. The machine gun and bar... (read more)

3
Harrison Durland
3mo
Thank you so much for articulating a bunch of the points I was going to make! I would probably just further drive home the last paragraph: it’s really obvious that the “number of people a lone maniac can kill in given time” (in America) has skyrocketed with the development of high fire-rate weapons (let alone knowledge of explosives). It could be true that the O/D balance for states doesn’t change (I disagree) while the O/D balance for individuals skyrockets.

Nitpick: I think you meant bioterrorism, not terrorism, which includes more data.

Thanks! Fixed.

I don't know the nuclear field well, so don't have much to add. If I'm following your comment though, it seems like you have your own estimate of the chance of nuclear war raising 47+ Tg of soot, and on the basis of that you infer the implied probability supers give to extinction conditional on such a war. Why not instead infer that supers have a higher forecast of nuclear war than your 0.39% by 2100? E.g. a ~1.6% chance of nuclear war with 47+ Tg and a 5% chanc... (read more)

2
Vasco Grilo
4mo
Fair point! Here is another way of putting my point. I estimated a probability of 3.29*10^-6 for a 50 % population loss due to the climatic effects of nuclear war before 2050, so around 0.001 % (= 3.29*10^-6*75/25) before 2100. Superforecasters' 0.074 % nuclear extinction risk before 2100 is 74 times my risk for a 50 % population loss due to climatic effects. My estimate may be off to some extent, and I only focussed on the climatic effects, not the indirect deaths caused by infrastructure destruction, but my best guess would have to be many OOMs off for superforecasters' prediction to be in the right OOM. This makes me believe superforecasters are overestimating nuclear extinction risk.

Yes, in the same way that the risk of global warming is often overestimated due to neglecting adaptation. I expect the defensive side to be under-counted, but not necessarily due to lack of quantitative models. However, I think using quantitative models makes it less likely that the defensive side is under-counted. I have not thought much about this; I am just expressing my intuitions.
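A quick check of the scaling arithmetic in this exchange (numbers copied from the comment above; the 75/25 factor rescales the 25-year window to 2050 up to the 75 years to 2100, assuming a roughly constant annual rate):

```python
# Rescale a 25-year risk window (to 2050) to a 75-year window (to 2100),
# assuming a roughly constant annual rate, then compare with the
# superforecasters' nuclear extinction estimate. Numbers from the comment above.
p_loss_by_2050 = 3.29e-6                   # 50% population loss via climatic effects
p_loss_by_2100 = p_loss_by_2050 * 75 / 25  # ~9.9e-6, i.e. ~0.001%

supers_extinction_by_2100 = 7.4e-4         # XPT superforecasters' 0.074%
print(supers_extinction_by_2100 / p_loss_by_2100)  # ~75x, the "74 times" gap cited above
```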

Hi Vasco, nice post thanks for writing it! I haven't had the time to look into all your details so these are some thoughts written quickly.

I worked on a project for Open Phil quantifying the likely number of terrorist groups pursuing bioweapons over the next 30 years, but didn't look specifically at attack magnitudes (I appreciate the push to get a public-facing version of the report published - I'm on it!). That work was as an independent contractor for OP, but I now work for them on the GCR Cause Prio team. All that to say these are my own views, not OP'... (read more)

4
Vasco Grilo
4mo
Great points, and thanks for the reading suggestions, Ben! I am also happy to know you plan to publish a report describing your findings.

I qualitatively agree with everything you have said. However, I would like to see a detailed quantitative model estimating AI or bio extinction risk (which handled infohazards well). Otherwise, I am left wondering how much higher extinction risk will become accounting not only for increased capabilities, but also increased safety. To clarify, my best guess is also many OOMs higher than the headline number of my post. I think XPT's superforecaster prediction of 0.01 % human extinction risk due to an engineered pathogen by 2100 (Table 3) is reasonable.

However, I wonder whether superforecasters are overestimating the risk, because their nuclear extinction risk by 2100 of 0.074 % seems way too high. I estimated a 0.130 % chance of a nuclear war before 2050 leading to an injection of soot into the stratosphere of at least 47 Tg, so around 0.39 % (= 0.00130*75/25) before 2100. So, for the superforecasters to be right, extinction conditional on at least 47 Tg would have to be around 20 % (= 0.074/0.39) likely. This appears extremely pessimistic in light of Xia 2022 (see the top tick in the 3rd bar from the right in Fig. 5a).

The 47 Tg scenario is the most optimistic in Xia 2022, but it is pessimistic in a number of ways (search for "High:" here):

* "Scenarios assume that all stored food is consumed in Year 1", i.e. no rationing.
* "We do not consider farm-management adaptations such as changes in cultivar selection, switching to more cold-tolerating crops or greenhouses31 and alternative food sources such as mushrooms, seaweed, methane single cell protein, insects32, hydrogen single cell protein33 and cellulosic sugar34".
* "Large-scale use of alternative foods, requiring little-to-no light to grow in a cold environment38, has not been considered but could be a lifesaving source of emergency food if such production systems were operational".

Although focused on civil conflicts, Lauren Gilbert's shallow investigation explores some possible interventions in this space, including:

  • Disarmament, Demobilization, and Reintegration (DDR) Programs 
  • Community-Driven Development
  • Cognitive Behavioral Therapy
  • Cash Transfers and/or Job Training
  • Alternative Dispute Resolution (ADR)
  • Contact Interventions and Mass Media
  • Investigative Journalism
  • Mediation and Diplomacy

Open Phil had this issue - they now use 'Global Health & Wellbeing' and 'Global Catastrophic Risks', which I think captures the substantive focus of each.

As one data point: I was interested in global health from a young age, and found 80K during med school in 2019, which led to opportunities in biosecurity research, and now I'm a researcher on global catastrophic risks. I'm really glad I've made this transition! However, it's possible that I would have not applied to 80K (and not gone down this path) if I had gotten the impression they weren't interested in near-termist causes. 

Looking back at my 80K 1on1 application materials, I can see I was aware that 80K thought global health was less neglected tha... (read more)

4
NickLaing
6mo
Thanks for your example - a 20-40% chance you wouldn't have applied is quite high. And I do think that if anyone (EA or otherwise) looked through the 80,000 Hours website, they would probably get the impression that 80K isn't interested at all in near-termist causes.

Also, I think you've nailed it with the "ideal" example career-shift here: "Ideally a reader who shifted from a neutral or only very-mildly-good career to a great career would be better (as they do for their other examples). I'd guess 80K know some great examples? Maybe someone working exclusively on rich-country health or pharma who moved into bio-risk?"

Happy to end this thread here. On a meta-point, I think paying attention to nuance/tone/implicatures is a better communication strategy than retreating to legalese, but it does need practice. I think reflecting on one's own communicative ability is more productive than calling others irrational or being passive-aggressive. But it sucks that this has been a bad experience for you. Hope your day goes better!

Things can be 'not the best', but still good. For example, let's say a systematic, well-run, whistleblower organisation was the 'best' way. And compare it to 'telling your friends about a bad org'. 'Telling your friends' is not the best strategy, but it still might be good to do, or worth doing. Saying "telling your friends is not the best way" is consistent with this. Saying "telling your friends is a bad idea" is not consistent with this. 

I.e. 'bad idea' connotes much more than just 'sub-optimal, all things considered'.

2
Linch
6mo
Sorry by "best" I was locally thinking of what's locally best given present limitations, not globally best (which is separately an interesting but less directly relevant discussion). I agree that if there are good actions to do right now, it will be wrong for me to say that all of them are bad because one should wait for (eg) a "systematic, well-run, whistleblower organisation."  For example, if I was saying "GiveDirectly is a bad charity for animal-welfare focused EAs to donate to," I meant that there are better charities on the margin for animal-welfare focused EAs to donate to. I do not mean that in the abstract we should not donate to charities because a well-run international government should be handling public goods provisions and animal welfare restrictions instead. I agree that I should not in most cases be comparing real possibilities against an impossible (or at least heavily impractical) ideal. Similarly, if I said "X is a bad idea for Bob to do," I meant there are better things for Bob to do with Bob's existing limitations etc, not that if Bob should magically overcome all of his present limitations and do Herculeanly impossible tasks. And in fact I was making a claim that there are practical and real possibilities that in my lights are probably better. Well clearly my choice of words on a quickly fired quick take at 1AM was sub-optimal, all things considered. Especially ex post. But I think it'd be helpful if people actually argued about the merits of different strategies instead of making inferences about my racism or lack thereof, or my rudeness or lack thereof. I feel like I'm putting a lot of work in defending fairly anodyne (denotatively) opinions, even if I had a few bad word choices.  After this conversation, I am considering retreating to more legalese and pre-filtering all my public statements for potential controversy by GPT-4, as a friend of mine suggested privately. I suspect this will be a loss for the EA forum being a place where peop

Your top-level post did not claim 'public exposés are not the best strategy', you claimed "public exposés are often a bad idea in EA". That is a different claim, and far from a default view. It is also the view I have been arguing against. I think you've greatly misunderstood others' positions, and have rudely dismissed them rather than trying to understand them. You've ignored the arguments given by others, while not defending your own assertions. So it's frustrating to see you playing the 'I'm being cool-headed and rational here' card. This has been a pretty disappointing negative update for me. Thanks

2
Linch
6mo
Sorry, what does "bad idea" mean to you other than "this is not the best use of resources"? Does it have to mean net negative? I'm sorry that you believe I misunderstood others' positions, or that I'm playing the "I'm being cool and rational here" card. I don't personally think I'm being unusually cool here; if anything this is a pretty unpleasant experience that has made me reconsider whether the EA community is worth continued engagement with. I have made some updates as well, though I need to reflect further on the wisdom of sharing them publicly.

You didn’t provide an alternative, other than the example of you conducting your own private investigation. That option is not open to most, and the beneficial results do not accrue to most. I agree hundreds of hours of work is a cost; that is a pretty banal point. I think we agree that a more systematic solution would be better than relying on a single individual’s decision to put in a lot of work and take on a lot of risk. But you are, blithely in my view, dismissing one of the few responses that have the potential to protect people. Nonlinear have their... (read more)

2
Linch
6mo
Taking a step back, I suspect part of the disagreement here is that I view my position as the default, with alternative positions needing strong positive arguments for them, whereas (if I understand correctly) you and other commentators/agree-voters appear to believe that "public exposés are the best strategy" ought to be the default position, with anything else needing strong positive arguments for it. Stated that way, I hope you can see why your position is irrational:

1. The burden of proof isn't on me. Very few strategies are the best possible strategy, so "X is a good use of time" has a much higher burden of proof than "X is not a good use of time."
   1. Compare "Charity C is probably a good donation target" vs "Charity C is probably not a good donation target."
2. If you didn't think of alternatives before saying public exposés are good, I'm honestly not sure how to react here. I'm kinda flabbergasted at your reaction (and that of people who agree with you).
3. Separately, I did write up alternatives here.

Sure, if people agreed with me about the general case and argued that the Nonlinear exposé was an unusual exception, I'd be more inclined to take their arguments seriously. I do think the external source of funding makes it plausible that Nonlinear specifically could not be defanged via other channels. And I did say earlier "I think the case for public writeups is strongest when the bad actors in question are too powerful for private accountability (eg SBF), or when somehow all other methods are ineffective."

People keep asserting this without backing it up with either numbers or data or even actual arguments (rather than just emotional assertions).

Thanks for asking. I think a better use of Ben's time (though not necessarily the best use) is to spend 0.2x as much time on the Nonlinear investigation + followup work and then spend the remaining 0.8x of his time on other investigations. I think this strictly decreases the influence o

Not everyone is well connected enough to hear rumours. Newcomers and/or less-well-connected people need protection from bad actors too. If someone new to the community was considering an opportunity with Nonlinear, they wouldn't have the same epistemic access as a central and long-standing grant-maker. They could, however, see a public exposé.

8
Linch
6mo
Like Guy Raveh's comment, I think your comment is assuming the conclusion. If it were the case that the only (or best) way to deal with problematic actors in our community is via people learning about them and deciding not to work with them, then I agree that public awareness campaigns are the best strategy. But there are a number of other strategies that do not route completely through everybody voluntarily self-selecting away.

What a fantastic resource, thanks all! Also maybe worth adding: the new National Security Commission on Emerging Biotechnology, which will deliver a 2024 report to the DoD, White House, and Congress based on "a thorough review of how advances in emerging biotechnology and related technologies will shape current and future activities of the Department of Defense".

Ooh what about Bob Fischer? He's a philosophy professor who ran Rethink's moral weights project and is now on their new Worldview Investigations team! [edit: just saw him suggested in a different comment]

also doing interesting work on market shaping mechanisms, esp for pandemics and climate change!

How come, out of curiosity? I haven't looked into EDCs at all, but on a skim - is it non-neglectedness, weak evidence, both, weak importance, other things?

4
ChrisSmith
6mo
For me, mostly weak evidence. 

Richard Fisher could be an interesting one, author of the recent 'The Long View'

Promoting stimulant use can be fine in some cases - e.g. "have you considered getting an ADHD diagnosis, maybe try mine for a day and see how you feel"


I think this is a bad idea. Suggesting someone 'get a diagnosis' is a terrible approach to health and medical advice. Giving someone your own prescribed medication is also a bad idea, and is exactly the kind of norm-crossing ickiness that should be reduced/eliminated. The version I would endorse is:

"Have you considered whether you might have ADHD? It might be a good idea to talk to a doctor about these issues you're having, as medication can be helpful here."

6
Nathan Young
7mo
I disagree. I think the general stance of EAs I know is correct - that stimulants are overregulated and that the people who need them most struggle the most to get them. I will not condemn those who try to help others short-circuit this. As someone with an ADHD diagnosis, I have not found it remotely trivial to engage with the process, and I wouldn't have minded someone giving me a bit of a push.

Just want to say I appreciate your commentary over the past 9 months. Having someone with legal expertise and (what seems to me) a pretty even-handed and sensible perspective is a really valuable contribution.

Cool! One point from a quick skim - the number of animals wouldn't be lost in many kinds of human extinction events or existential risks. Only a subset would erase the entire biosphere - e.g. a resource-maximising rogue AI, vacuum decay, etc. Presumably with the extinction of just humans, the animal density of reclaimed land would be higher than it is currently, so the number of animals would rise (assuming this outweighs the end of factory farming).

The implications of human existential risks for animals is interesting, and I can see some points either way dependin... (read more)

3
Spencer Ericson
8mo
Thanks Ben! I totally agree. The math in this post was trying to get at upper and lower bounds and a median -- but for setting one's personal thresholds, the nuance you mention is incredibly important. I hope this post, and the Desmos tool I linked, can help people play with these numbers and set their own thresholds!

Does anyone else from the UK get an 'unsupported protocol' error from the Asterix site? I do, but it doesn't trigger if I use a VPN.

I love The Mower by Philip Larkin - it captures a deep instinct for kindness, especially towards animals. 

Thanks for this - super interesting! One thing I hadn't caught before is how much the estimates drop for domain experts in the top quintile for reciprocal scoring - in many cases an order of magnitude lower than those of the main domain expert group!

I think another factor is that HLI's analysis is not just below the level of GiveWell, but below a more basic standard. If HLI had performed at this basic standard, though below GiveWell, I think strong criticism would have been unreasonable, as they are still a young and small org with plenty of room to grow. But as it stands the deficiencies are substantial, and a major rethink doesn't appear to be forthcoming, despite being warranted.

2
NickLaing
8mo
Probably a stupid question (probably just missed it), but can someone point me to where GiveWell does a meta-analysis or similar depth of analysis as this HLI one? I can't seem to find it and I would be keen to do a quick comparison myself.

I really enjoyed this 2022 paper by Rose Cao ("Multiple realizability and the spirit of functionalism"). A common intuition is that the brain is basically a big network of neurons with input on one side and all-or-nothing output on the other, and the rest of it (glia, metabolism, blood) is mainly keeping that network running. 
The paper's helpful for articulating how that model's impoverished, and argues that the right level for explaining brain activity (and resulting psychological states) might rely on the messy, complex, biological details, such tha... (read more)

As the origin of that comment, I should say other reasons for non-convergence are stronger, but the attrition thing contributed. E.g. biases both for experts to over-rate and supers to under-rate. I wonder also about the structure of engagement, with strong team identities fomenting tribal stubbornness on both sides...

6
Damien Laird
9mo
I was also a participant and have my own intuitions from my limited experience. I've had lots of great conversations with people where we both learned new things and updated our beliefs... but I don't know that I've ever had one in an asynchronous comment-thread format. Especially given the complexity of the topics, I'm just not sure that format was up to the task.

During the whole tournament I found myself wanting to create a Discord server and set up calls to dig deeper into assumptions and disagreements. I totally understand the logistical challenges something like that would impose, as well as making it much harder to analyze the communication between participants, but my biggest open question after the tournament was how much better our outputs could have been with a richer collaboration environment.

I asked the original question to try and get at the intuitions of the researchers, having seen all of the data. They outline possible causes and directions for investigation in the paper, which is the right thing to do, but I'm still interested in what they believe happened this time.

Ah okay, thanks for the correction! In which case I think ~all my questions apply to the B10 figure then. 

On the DURC CEA:

  1. Is there any further calculation for B10 - the 'Expected lives lost due to DURC per year over next 50 years'? I expect this is where a lot of the juice is, and it would be a central component of the charity's advocacy work, but I don't see the working (maybe it's in the idea report soon to be published?). The cell formula suggests the assumption may be one pandemic killing 80M in the 50-year period (yielding the 1.6M/yr estimate). Is this right?
  2. It looks like 'leakage %' is the risk that a dangerous agent within DURC ends up causing ha... (read more)
5
MvK
9mo
Hi Ben! With the benefit of hindsight, I realise we could've been clearer on what "leakage" means in this context, given that the topic might suggest we are talking about lab leaks. We're not! In our model, lab leak rates would only factor into our estimate of how many deaths will be caused by DURC in the future. Leakage in the CEA refers to the risk that new guidelines in academic communities might be "leaky", in that researchers might choose to migrate to other jurisdictions, countries, or privately owned labs (though few of these exist at the BSL levels we are most concerned with) or, worse yet, move their research underground. Hence, our CEA discounts the estimate of how many lives could be saved by including this possibility.