Will Bradshaw

5221 karma · Joined Nov 2018


The Nucleic Acid Observatory is looking for a Research Scientist to co-lead research and development of wastewater monitoring laboratory methods, as part of our general goal of achieving reliable early warning of catastrophic biological threats.

We're looking for someone with deep wet-lab expertise, who's excited about applying that expertise to a key problem in biosecurity. Experience in virology, microbiology or wastewater work is ideal but not required. High-performing hires will have the opportunity to grow into a leadership role within the NAO, helping to build a healthy and productive team and determine the long-term direction of the NAO project.

For more information, visit the MIT website here or reach out to me on the Forum.

It's very much not obvious to me that EAs should prefer progressive Democratic candidates in general, or Salinas in particular.

Speaking personally, I am generally not excited about Democratic progressives gaining more power in the party relative to centrists, and I'm pretty confident I'm not alone here in that.[1]

I also think it's false to claim that Salinas's platform as linked gives much reason to think she will be a force for good on global poverty, animal welfare, or meaningful voting reform. (I'd obviously change my mind on this if there are other Salinas quotes that pertain more directly to these issues.)

There are also various parts of her platform that make me think there's a decent chance that her time in office will turn out to be bad for the world by my lights (not just relative to Carrick). I obviously don't expect everyone here to agree with me on that, and I'm certainly not confident about it, but I also don't want broad claims that progressives are better by EA values to stand uncontested, because I personally don't think that's true.

  1. ^

    To be clear, I think this is very contestable within an EA framework, and am not trying to claim that my political preferences here are necessarily implied by EA.

I keep going back and forth on this.

My first reaction was "this is just basic best practice for any people-/relationship-focused role, obviously community builders should have CRMs".

Then I realised none of the leaders of the student group I was most active in had CRMs (to my knowledge) and I would have been maybe a bit creeped out if they had, which updated me in the other direction.

Then I thought about it more and realised that group was very far in the direction of "friends with a common interest hang out", and that for student groups that were less like that I'm still basically pro CRMs. This feels obviously true for "advocacy" groups (anything explicitly religious or political, but also e.g. environmentalist groups, sustainability groups, help-your-local-community groups, anything do-goody). But I think I'd be in favour of even relatively neutral groups (e.g. student science club, student orchestras, etc) doing this.

Given how hard it is to keep any student group alive across multiple generations of leadership, not having a CRM is starting to seem very foolhardy to me.

I feel like many (most?) of the "-ist"-like descriptors that apply to me depend on empirical, refutable claims. For example, I'm also an atheist -- but that view could potentially be disproven quite easily with the right evidence.

Indeed, I think it's just very common for people who hold a certain empirical view to be called an X-ist. Maybe that's somewhat bad for their epistemics, but I think this piece goes too far in suggesting that I have to think something is "for-sure-correct" before "-ist" descriptors apply.

Separately, I think identifying as a rationalist and effective altruist is good for my epistemics. Part of being a good EA is having careful epistemics, updating based on evidence, being impartially compassionate, etc. Part of being a rationalist is to be aware of and willing to correct for my cognitive biases. When someone challenges me on one of these points, my professed identity gives me a strong incentive to behave well that I wouldn't otherwise have. (To be fair, I'm less sure this applies to "longtermist", which I think has much less pro-epistemic baggage than EA.)

Occasionally, apparent coldness to immediate suffering: I've only seen this a bit, but even one example could be enough to put someone off for good.

I would really like to ban the term "rounding error".

  1. Strong messaging to the effect of "we need talent" gives the impression that there are enough jobs that if you are reasonably skilled, you can get a job.
  2. Strong messaging to the effect of "we need founders", or "just apply for funding" gives the impression that you will get funding.

I'm a bit confused, because this doesn't seem to match the scenario described in the OP that you quoted. My summary of that scenario would be:

  1. An EA org paid the OP to work for them as a contractor;
  2. The org then invited them to apply for an open position for a similar role;
  3. They didn't get the position (presumably because the org found another candidate they thought would be better?).

I have a lot of sympathy for the OP in this scenario, and expect it was a very painful and disheartening experience. I definitely cringe a bit when I read it. But it doesn't seem to me like anyone did anything wrong here -- it just seems like the kind of unfortunate-but-unavoidable situation that comes up all the time in the workplace. But you're saying this is "harmful" and that orgs need to "do a lot better", which suggests that you disagree?

I really hope that orgs can do a lot better on this, because I think this and similar things are pretty harmful.

Can you elaborate on what part of this you think is harmful, and what would be better?

Thanks, Owen! I do feel quite conflicted about my feelings here, appreciate your engagement. :)

I do claim that it's good to have articulations of things like this even if the case is reasonably well known

Yeah, I agree with this -- ultimately it's on those of us more on the pro-immortality side to make the case more strongly, and having solid articulations of both sides is valuable. Also flagging that this...

Would I eventually like to move to a post-death world? Probably, but I'm not certain. For one thing, I think it's quite likely that the concept of "death" will not carve reality at its joints so cleanly in the future.

...seems roughly right to me.

(This has not applied evenly. People who were already planning to make EA central to their career are generally experiencing EA as less demanding: pay in EA organizations has gone up, there is less stress around fundraising, and there is less of a focus on frugality or other forms of personal sacrifice. In some cases these changes mean that if someone does decide to shift their career it is less of a sacrifice than it would've been, though that does depend on how the field you enter is funded.)

Thanks, I found this discussion of the ways in which EA is now more vs. less demanding quite clarifying. I appreciate the point that for some people EA is much less demanding than it used to be, while for others it's much more so.

I felt quite frustrated by this post, because the preponderance of EA discourse is already quite sceptical of anti-ageing interventions (as you can tell by the fact that no major EA funder is putting significant resources into it). I would in fact claim that the amount of time and ink spent within EA in discussing reasons not to support anti-ageing interventions significantly exceeds that spent on the pro side.

So this post is repeating well-covered arguments, and strengthening the perception that "EAs don't do longevity", while claiming to be promoting an under-represented point of view.
