Tristan Cook

Maths master's student at Cambridge. Interested in s-risks, global priorities research and community building.

Comments

A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it

Thanks for the post!

I'd recommend Daniel Kestenholz's energy log post for a system and template for tracking energy throughout the day.

Practical ethics given moral uncertainty

From 1. "the same ballpark as murder" the Internet Archive has it saved here
The link in 3 "in the same ballpark as walking past a child drowning in a shallow pond" is also dead, but  is in the Internet archive here

Edit: the link in 2 is also archived here

Linch's Shortform

Not 128kb (Slack resized it for me), but this worked for me.

Retrospective on Catalyst, a 100-person biosecurity summit

Both links to Catalyst are broken (I think they're missing https://)

Exploring a Logarithmic Tolerance of Suffering

I really liked this post and it made me think! Here are some stray thoughts which I'm not super confident in:

  • Views similar to Linear Tolerance and No Significant Tolerance are called negative-leaning utilitarianism (or weak negative utilitarianism) and lexical-threshold negative utilitarianism respectively (see here or here)
  • It seems like logarithmic trade-offs are just linear tolerance where we've rescaled (exponentially) all the original suffering values. I'm not sure whether it's easier just to think of the suffering values as already being these rescaled values and then use linear tolerance? (See the sketch below this list.)
  • I'm confused by the variables you use for the amounts of suffering and happiness for an individual. I'm guessing you're also factoring in intensity?
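
To spell out the second bullet (a sketch only; I'm assuming "logarithmic tolerance" means roughly that an amount of happiness $h$ can offset an amount of suffering $s$ whenever $s \leq \log h$ — the exact trade-off function in the post may differ):

$$s \leq \log h \iff e^{s} \leq h,$$

so logarithmic tolerance of the original suffering values $s$ is the same as linear tolerance of the rescaled values $s' = e^{s}$.
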
MaxG's Shortform

The blogger gwern has many posts on self-experiments here.

Tristan Cook's Shortform

Thanks for such a detailed and insightful response, Gregory.

Your archetypal classical utilitarian is also committed to the OC as 'large increase in suffering for one individual' can be outweighed by a large enough number of smaller decreases in suffering for others - aggregation still applies to negative numbers for classical utilitarians. So the negative view fares better as the classical one has to bite one extra bullet.

Thanks for pointing this out. I think I realised this extra bullet-biting only after making the post.

There's also the worry in a pairwise comparison one might inadvertently pick a counterexample for one 'side' that turns the screws less than the counterexample for the other one. 

This makes a lot of sense; it's not something I'd considered at all, and it seems pretty important when playing counterexample-intuition-tennis.

By my lights, it seems better to have some procedure for picking and comparing cases which isolates the principle being evaluated. Ideally, the putative counterexamples share counterintuitive features both theories endorse, but differ in one is trying to explore the worst case that can be constructed which the principle would avoid, whilst the other the worst case that can be constructed with its inclusion.

Again, this feels really useful and something I want to think about further.

The typical worry of the (absolute) negative view itself is it fails to price happiness at all - yet often we're inclined to say enduring some suffering (or accepting some risk of suffering) is a good deal at least at some extreme of 'upside'.

I think my slight negative intuition comes from the fact that although I may be willing to endure some suffering for some upside, I wouldn't endorse inflicting suffering (or risk of suffering) on person A for some upside for person B. I don't know how much work the differences of fairness and personal identity (i.e. whether the being that suffered gets the upside) between the examples are doing, and in what direction my intuition is 'less' biased.

Yet with this procedure, we can construct a much worse counterexample to the negative view than the OC - by my lights, far more intuitively toxic than the already costly vRC. (Owed to Carl Shulman). Suppose A is a vast but trivially-imperfect utopia - trillions (or googleplexes, or TREE(TREE(3))) live lives of all-but-perfect bliss, but for each enduring an episode of trivial discomfort or suffering (e.g. a pin-prick, waiting in a queue for an hour). Suppose Z is a world with a (relatively) much smaller number of people (e.g. a billion) living like the child in Omelas

I like this example a lot, and I definitely lean A > Z.

Reframing the situation, my intuition becomes less clear: consider A', in which TREE(TREE(3)) lives are in perfect bliss, but there are also TREE(TREE(3)) beings that momentarily experience a single pinprick before ceasing to exist. This is clearly equivalent to A in the axiology, but my intuition that A' > Z is much weaker (if present at all). As above, I'm unsure how much work personal identity is doing. I find population ethics easier to think about by considering 'experienced moments' rather than individuals.

(This axiology is also anti-egalitarian (consider replacing half the people in A with half the people in Z) ...

Thanks for pointing out the error. I think I'm right in saying that the 'welfare capped by 0' axiology is non-anti-egalitarian, which I conflated with absolute NU in my post (which is anti-egalitarian, as you say). The axiologies are much more distinct than I originally thought.

Tristan Cook's Shortform

Suppose you think only suffering counts* (absolute negative utilitarianism); then the 'negative totalism' population axiology seems pretty reasonable to me.

The axiology does entail the 'Omelas Conclusion' (OC), an analogue of the Repugnant Conclusion (RC), which states that for any state of affairs there is a better state in which a single life is hellish and everyone else's life is free from suffering. As a form of totalism, the axiology does not lead to an analogue of the sadistic conclusion and is non-anti-egalitarian.
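
To spell out why the axiology entails the OC (a sketch, with notation I'm introducing here rather than taking from any particular formulation): if the value of a world is

$$V = -\sum_i s_i,$$

where $s_i \geq 0$ is the suffering in life $i$, then any world with total suffering $T > 0$ is worse than a world in which a single hellish life contains suffering just under $T$ and every other life is suffering-free, since $-(T - \epsilon) > -T$ for any $\epsilon > 0$, however terrible that single life is.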

The OC (supposing absolute negative utilitarianism) seems more palatable to me than the RC (supposing classical utilitarianism). I'm curious to what extent, if at all, this intuition is shared.

Further, given a (debatable) meta-intuition for robustness of one's ethical theory, does such a preference suggest one should update slightly towards absolute negative utilitarianism or vice versa?

*or that individual utility is bounded above by 0

Open and Welcome Thread: March 2021

Hello! I'm a maths master's student at Cambridge and have been involved with student groups for the last few years. I've been lurking on the forum for a long time and want to become more active. Hopefully this is the first comment of many!