MichaelStJules

Associate researcher, animal welfare @ Rethink Priorities
Working (0-5 years experience)
6402 karma · Joined May 2016

Bio

Associate researcher in animal welfare at Rethink Priorities. Writing on behalf of myself only.

Also interested in global priorities research and reducing s-risks.

My background is mostly in pure math, computer science and deep learning, but also some in statistics/econometrics and agricultural economics.

I'm a suffering-focused moral antirealist.

My shortform.

Comments: 1583

Topic Contributions: 9

Maybe Derek Parfit (vegetarian), Chris Olah (vegan), Mark Xu (vegan until this year https://markxu.com/transitioning-vegan), Rohin Shah (~vegan) are other examples?

I think there's been some impressive technical work out of GPI, and generally in population ethics and decision theory, and I have specific authors in mind, but I can't tell if they've been vegetarian through Google. If you're really invested in this, I can share names and papers, and you can ask them directly if they've been veg.

I'd say people working in population ethics are reasonably likely to be veg.

Do you have specific works by EAs or EA-adjacent people involving "divergent, multi-stage, needle in a haystack breakthroughs" in mind? And are multi-stage (sequentially dependent?) breakthroughs more impressive than a similar number of breakthroughs that aren't sequentially dependent, or that happen far apart in time from each other? Or are you thinking of cases where a single breakthrough isn't enough on its own for useful or interesting conclusions, and more are needed before something valuable can be produced?

What do you mean by saying Peter Singer isn't formally linked with EA?

I also see other philosophers who identify with EA or are otherwise involved with EA, e.g. have worked at or with Rethink Priorities. But maybe you're thinking of more prominent EA philosophers like MacAskill, Ord, Greaves and Bostrom?

Do you think working to reduce s-risks instead of extinction risks is compatible with the arguments they make? That would still count as longtermist.

Thanks for writing this!

Another possible response is that, ahead of time, each possible contingent individual may have an extraordinarily weak claim against you for possible harms to them, because they almost certainly won't exist. But I'd guess this isn't enough to capture the ex ante badness of bringing into existence an unknown individual who will probably have a bad life (e.g. factory farmed animals), so one of your other options or something else seems necessary anyway. It may also lead to some pretty odd dynamic inconsistency or other seemingly irrational behaviour, like trying to avoid finding out who will be harmed in cases where many individuals each face a small individual risk of harm but the collective risk that at least one of them is harmed is large.
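To illustrate that last gap between individual and collective risk, here's a hypothetical worked example (the numbers are chosen only for illustration):

```latex
% Suppose n individuals each face an independent risk p = 1/n of harm.
% Each individual's ex ante claim is extraordinarily weak, but the
% collective risk that at least one person is harmed is large:
\[
\Pr(\text{at least one harmed})
  = 1 - (1 - p)^n
  = 1 - \left(1 - \tfrac{1}{n}\right)^n
  \to 1 - e^{-1} \approx 0.63
  \quad \text{as } n \to \infty.
\]
% E.g. with n = 10^6, each claim is backed by only a one-in-a-million
% chance of harm, yet the chance that someone is harmed is about 63%.
```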

You could check https://en.m.wikipedia.org/wiki/List_of_vegans and https://en.m.wikipedia.org/wiki/List_of_vegetarians

I see Brian Greene (theoretical physicist), Douglas Hofstadter (cognitive scientist and physicist), George Church (geneticist) and Christine Korsgaard (philosopher) on the list of vegans, although you could check if they were producing good work while vegan. I haven't checked the list of vegetarians, but there are probably plenty of famous examples there. Edward Witten (Fields Medalist mathematical/theoretical physicist) is/was vegetarian.

There are also plenty of famous vegan artists and entrepreneurs, and their work often requires originality or novelty.

Is your standard "genius-level", or even "genius-level technical work"? Or being highly productive in intellectual work? I think plenty of philosophers and EAs who have done good research, including at prominent EA orgs, have been vegan or at least vegetarian and will probably meet the last standard.

Some who (I think) have been veg (not sure if vegan specifically or if they're still veg): Peter Singer (~vegan), Will MacAskill, Brian Tomasik (lacto-veg), multiple research staff at Rethink Priorities (where I work) even outside animal welfare, Sam Bankman-Fried (vegan), Rob Wiblin and Howie Lempel (and others?) at 80,000 Hours, I'd guess some research staff at Open Phil and not just those focused on animal welfare. Vegetarianism and veganism are very common in EA (https://rethinkpriorities.org/publications/eas2019-community-demographics-characteristics), so we shouldn't be surprised to find good examples (but not necessarily geniuses), and if we didn't, that could be a bad sign.

There are also other cases, involving St. Petersburg-like lotteries as I mentioned in my top-level comment, and possibly others that only require a bounded number of decisions. There's a treatment of decision theory here that derives "boundedness" (EDIT: lexicographically ordered ordinal sequences of bounded real utilities) from rationality axioms extended to lotteries with infinitely many possible outcomes:

https://onlinelibrary.wiley.com/doi/pdf/10.1111/phpr.12704
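To sketch why St. Petersburg-like lotteries put pressure on unbounded utilities, here's the standard arithmetic; the particular bounded utility function below is my own illustrative choice, not from the paper:

```latex
% St. Petersburg lottery: pay out 2^n with probability 2^{-n}, n = 1, 2, ...
% With unbounded utility u(x) = x, expected utility diverges:
\[
\mathbb{E}[u] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n}
             = \sum_{n=1}^{\infty} 1 = \infty.
\]
% With a bounded utility function, e.g. u(x) = 1 - 2^{-x} (illustrative),
% the expectation converges, so the lottery can be compared with sure
% payoffs in the usual way:
\[
\mathbb{E}[u] = \sum_{n=1}^{\infty} 2^{-n}\left(1 - 2^{-2^{n}}\right)
             < \sum_{n=1}^{\infty} 2^{-n} = 1.
\]
```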

I haven't come across any exotic cases that undermine the rationality of EU maximization with bounded utility functions relative to unbounded EU maximization, and I doubt there are, because the former is consistent with or implied by extensions of standard rationality axioms. Are you aware of any? Or are you thinking of conflicts with other moral intuitions (e.g. impartiality or against timidity or against local dependence on the welfare of unaffected individuals or your own past welfare)? Or problems that are difficult for both bounded and unbounded, e.g. those related to the debate over causal vs evidential decision theory?

We could believe we need to balance rationality axioms with other normative intuitions, including moral ones, so we can favour the violation of rationality axioms in some cases to preserve those moral intuitions.

If illusionism is true, then yes, I think only those with illusions of consciousness are moral patients.

Also see section 6.2 in https://www.openphilanthropy.org/research/2017-report-on-consciousness-and-moral-patienthood for discussion of some specific theories and that they don't answer the hard problem.
