Spiracular (96 karma)

Comments
Yes, I think a lot of commenters are almost certainly making bad updates about how to judge or how to run an EA org off of this, or are using it to support their own pre-existing ideas around this topic.

This kinda stinks, but I do think it is what happens by default. I hope the next big org founder picks up more nuance than that, from somewhere else?

That said, I don't think "callout / inventory of grievances / complaints" and "nuanced post about how to run an org better/fix the errors of your ways" always have to be the same post. That would be a lot to take on, and Lesswrong is positioned at the periphery here, at best; doing information-gathering and sense-making from the periphery is really hard.

I feel like for the next... week to month... I view it as primarily Nonlinear's ball (...and/or whoever it is who wants to fund them, or feels responsibility to provide oversight/rehab on them, if any do judge that worthwhile...) to shift the conversation towards "how to run things better."

Given their currently demonstrated attitude, I am not starting out hugely optimistic here. But: I hope Nonlinear will rise to the occasion, and take the first stab at writing some soul-searching/error-analysis synthesis post that explains: "We initially tried THIS system/attitude to handle employees, in the era the complaints are from. We made the following (wrong in retrospect) assumptions. That worked out poorly. Now we try this other thing, and after trialing several things, X seems to go fine (see # other mentee/employee impressions). On further thought, we intend to make Y additional adjustment going forward. Also, we commit to avoiding situations where Z in the future. We admit that A looks sketchy to some, but we wish to signal that we intend to continue doing it, and defend that using logic B..."

I think giving Nonlinear the chance to show that they have thought through how to fix these issues/avoid generating them in the future, would be good. They are in what should be the best position to know what has happened or to set up an investigation, and are probably the most invested in making sense of it (emotions and motivated cognition come with that, so it's a mixed bag, sure; I hope public scrutiny keeps them honest). They are also probably the only ones who have the ability to enforce or monitor a within-org change in policy, and/or to undergo some personal growth.

If Nonlinear is the one who writes such a post, this could be an opportunity to read a bit into how they are thinking about it, for others to reevaluate how much they expect past behavior and mistakes to continue to accurately predict their future behavior, and to judge how likely these people are to fix the genre of problems brought up here.

(If they do a bad job at this, or even just if they seem to have "missed a spot": I do hope people will chime in at that point, with a bunch of more detailed and thoughtful models/commentary on how to run a weird experimental small EA org without this kind of problem emerging, in the comments. I think burnout is common, but experiences this bad are rare, especially as a pattern.)

((If Nonlinear fails to do this at all: Maybe it does fall to other people to... "digest some take-aways for them, on behalf of the audience, as a hypothetical exercise?" IDK. Personally, I'd like to see what they come up with first.))

...I do currently think the primary take-away, that "this does not look like a good or healthy org for new EAs to do work for off-the-books, pls do not put yourself in that position," looks quite solid. In the absence of a high-level "Dialogue in the Comments: Meta Summary Post" comment, though, I do kinda wish Ben would elevate the observation that nobody seems to have brought up any serious complaints about Drew from the comments to a footnote.

I want to point out that the existence of a libel law that is expensive to engage with does practically nothing against the posting of anonymized callout posts. You can't sue someone you can't identify.

Love it or hate it: the more harshly libel law is enforced, the more I expect similar things to be handled through fully-anonymous or low-transparency channels, instead of high-transparency ones. And in aggregate, I expect an environment high on libel suits to disincentivize transparent behavior and highly specific allegations (which risk de-anonymization) on the part of accusers more strongly than it incentivizes epistemic carefulness.

This is one reason to be against encouraging highly litigious attitudes that I haven't yet seen mentioned, so I thought I'd briefly put it out there.

Thanks! For me, this does a bit to clear up why buy-in is perceived as such a key bottleneck.

(And secondarily, supporting the idea that other areas of fairly-high ROI are likely to be centered around facilitating collaboration and consolidation of resources among people with a lot of pre-existing experience/expertise/buy-in.)

Thanks for the thoughtful response!

I want to start with the recognition that everything I remember hearing from you in particular around this topic, here and elsewhere, has been extremely reasonable. I also very much liked your paper.

My experience has been that I have had multiple discussions around disease shut down prematurely in some in-person EA spaces, or else turned into extended discussions of infohazards, even if I'm careful. At some point, it started to feel more like a meme than anything. There are some cases where "infohazards" were brought up as a good, genuine, relevant concern, but I also think there are a lot of EAs and rationalists who seem to have a better grasp of the infohazard meme than they do of anything topical in this space. Some of the sentiment you're pointing to is largely a response to that, and it was one of the motivations for writing a post focused on clear heuristics and guidelines. I suspect this sort of thing happening repeatedly comes with its own kind of reputational risk, which could stand to see some level of critical examination.

I think there are good reasons for the apparent consensus you present that particularly effective EA Biorisk work requires extraordinarily credentialed people.* You did a good job of presenting that here. The extent to which political sensitivity and the delicate art of reputation-management play into this is something I was partially aware of, but had perhaps under-weighted. I appreciate you spelling it out.

The military seems to have every reason to adopt discretion as a default. There's also a certain tendency of the media and general public to freak out in actively damaging directions around topics like epidemiology, which might feed somewhat into a need for reputation-management-related discretion in those areas as well. The response to an epidemic seems to have a huge, and sometimes negative, impact on how a disease progresses, so a certain level of caution in these fields seems pretty warranted.

I want to quickly note that I tend to be relatively-unconvinced that mature and bureaucratic hierarchies are evidence of a field being covered competently. But I would update considerably in your direction if your experience agrees with something like the following:

Is it your impression that whenever you (or talented friends in this area) come up with a reasonably-implementable good idea, after searching around you tend to discover that someone else has already found and tried it?

And if not, what typically seems to have gone wrong? Is there a step that usually falls apart?

(Here are some possible bottlenecks I could think of, and I'm curious if one of them sounds more right to you than the others:

  • Is it hard to search for what's already been done, to the point that there are dozens of redundant projects?
  • Is it a case of there being too much to do, and each project is a rather large undertaking? (a million good ideas, each of which would take 10 years to test)
  • Does it seem to be too challenging for people to find some particular kind of collaborator?
  • A resource inadequacy?
  • Is the field riddled with untrustworthy contributions, just waiting for a replication crisis? (that would certainly do a lot to justify the unease and skepticism about newcomers that you described above)
  • Does it mostly look like good ideas tend to die a bureaucratic death?
  • Or does it seem as if, structurally, it's almost impossible for people to remain motivated by the right things?
  • Or is the field just... noisy, for lack of a better word? Hard to measure for real effect or success.)

*It does alienate me, personally. I try very hard to stand as a counterargument to "credentialism-required"; someone who tries to get mileage out of engaging with conversations and small biorisk-related interventions as a high-time-investment hobby on the side of an analysis career. Officially, all I'm backed up with on this is a biology-related BS degree, a lot of thought, enthusiasm, and a tiny dash of motivating spite. If there wasn't at least a piece of me fighting against some of the strong-interpretation implications of this conclusion, this post would never have been written. But I do recognize some level of validity to the reasoning.

Now that we've gone over some of the considerations, here are some of the concrete topics I see as generally high or low hazard for open discussion.

Good for Open Discussion

  • Broad-application antiviral developments and methods
    • Vaccines
    • Antivirals proper
    • T-cell therapy
    • Virus detection and monitoring
  • How to report lab hazards
    • ...and how to normalize and encourage this
  • Broadly-applicable protective measures
    • Sanitation
    • Bunkers?
  • The state of funding
  • The state of talent
    • What broad skills to develop
    • How to appeal to talent
    • Who talent should talk to

Bad for Open Discussion

These things may be worth specialists discussing among themselves, but are likely to do more harm than good in an open thread.

  • Disease delivery methods
  • Specific Threats
  • Specific Exploitable Flaws in Defense Systems
    • Ex: immune systems, hospital monitoring systems
    • It is especially bad to mention them if they are reliably exploitable
    • If you are simultaneously providing a comprehensive solution to the problem, this can become more of a gray area. Partial solutions, or challenging-to-implement solutions, are likely to fall on the bad side of this equation.
  • Much of the synthetic biology surrounding this topic
  • Arguments for and against various agents using disease as an M.O.