Jeff Kaufman

Software Engineer @ Nucleic Acid Observatory
11317 karma · Joined Aug 2014 · Working (15+ years) · Somerville, MA, USA
www.jefftk.com

Bio

Software engineer in Boston, parent, musician. Switched from earning to give to direct work in pandemic mitigation. Married to Julia Wise. Speaking for myself unless I say otherwise.

Full list of EA posts: jefftk.com/news/ea

Comments (717)

This sort of work is very sensitive to your choices of moral weights, and while I do appreciate you showing your input weights clearly in a table, I think it's worth emphasizing up front how unusual they are. For example, I'd predict an overwhelming majority of humans would rather see an extra year of good life for one human than for four chickens, twelve carp, or thirty-three shrimp. And, eyeballing your calculations, if you used more conventional moral weights, your bottom-line conclusion would be that net global welfare is positive and increasing.

That principle sounds about right! I do endorse thinking very hard about consequences sometimes, though, when you're making the decisions likely to have the most impact, like choosing a career.

This post is a good example of the risks of tying yourself in knots with consequentialist reasoning. There are a lot of potential consequences of leaving a review beyond "it makes people less likely to eat at this particular restaurant, and they might eat at a non-vegan restaurant instead". You get into this some, but three plausible effects of artificially inflated reviews would be:

  • Non-vegans looking for high-quality food go to the restaurant, get vegan food, think "even highly rated vegan food is terrible", don't become vegan.

  • Actually good vegan restaurants have trouble distinguishing themselves, because "helpful" vegans rate everywhere five stars regardless of quality, and so the normal forces that push up the quality of food don't work as well. Now the food tastes bad and fewer people are willing to sustain the sacrifice of being vegan.

  • People notice this and think "if vegans are lying to us about how good the food is, are they also lying to us about the health impacts?" Overall trust in vegans (and utilitarians) decreases.

We need a morality for human beings with limited ability to know the impacts of their actions, and reasoning through the full impact of every decision is not possible. You'll generally do a lot more to make the world better if you take a more "rule utilitarian" approach, at least in low stakes situations like restaurant reviewing. Promoting truth and accurate information is almost always the right thing to do.

[EDIT: expanded this into a post]

  • To work in AI instead of other areas (higher salary, topic is shiny)

(Disclosure: I decided to work in biorisk and not AI)

Biosecurity is a well-established field outside of EA, and there are many excellent upskilling opportunities outside the movement (e.g. pursuing George Mason's Global Biodefense Master's, joining professional societies like ABSA, engaging with the UN and WHO).

While there are people in the broader biosecurity field doing good work, my impression is that this is the exception. There's a ton of work done without a threat model, or with one that I (and, I think, most people who considered it for a bit from an EA perspective) would say neglects the ways the world has been changing and is likely to continue to change. I don't see EAs preferring EA biosecurity groups over other groups in the field as something that commonly puts them in less impactful roles.

Julia W's writing on her considerations; not going to search it up, but it was good

The community health team’s work on interpersonal harm in the community

private response they were envisaging for Ben + Lightcone

Thanks for pointing this out! I had the impression they wanted time to prepare a public response that could go live contemporaneously with Ben's post, but reading the comments from Kat and Emerson it looks like you're right!

Notably, it's now been about twice as long as Nonlinear says they originally requested Ben give them to prepare their side of the story (a week).

I was responding specifically to the claim that hearing that a restraining order has been granted is very informative. I didn't claim that getting one is easy or hard, or that the community health team should have higher or lower thresholds for action.

I'm also not trying to say anything either way about the community builder in question, and don't know any more about that situation than I've read in this thread. And specifically, I'm not saying that they are mentally ill or made a report based on hallucinations. Instead, what I'm saying is that because the decision to grant a restraining order is not the product of an investigative process and the amount of evidence necessary is relatively low, learning that one has been granted doesn't provide much evidence.

I don't know anything about the case above, but I don't actually think it is that strong evidence? About a decade ago, our landlord got a harassment prevention restraining order issued against one of our housemates. The problem was, our landlord was schizophrenic (and unmedicated) and everything they wrote to the judge was hallucinated. My impression is that, at least in Massachusetts, the justice system has a relatively low bar for issuing these?

(In a follow-up, we were able to all get reciprocal orders put in place)

(Disclosure: married to a CH team member)
