Jim Buhler

Community Manager @ EA Cambridge
425 karma · Joined Sep 2020 · Working (0-5 years) · London, UK

Bio

Participation: 4

I am the main organizer of Effective Altruism Cambridge (UK), a group of people who are thinking hard about how to help others the most and address the world’s most pressing problems through their careers.

Previously, I worked at organizations such as EA France (community director), Existential Risk Alliance (research fellow), and the Center on Long-Term Risk (events and community associate).

I've conducted research on various longtermist topics (some of it posted here on the EA Forum) and recently finished a Master's in moral philosophy.

I've also written some stuff on LessWrong.

You can give me anonymous feedback here. :)

Sequences (1)

What values will control the Future?

Comments (45)

Topic contributions (4)

Interesting, thanks for sharing your thoughts on the process and stuff! (And happy to see the post published!) :)

Interesting, makes sense! Thanks for the clarification and for your thoughts on this! :)

 If I want to prove that technological progress generally correlates with methods that involve more suffering, yes! Agreed.

But while the post suggests that this is a possibility, its main point is that suffering itself is not inefficient, such that there is no reason to expect progress and methods that involve less suffering to correlate by default (a much weaker claim).

This makes me realize that the crux is perhaps the part below, more than the claim we discuss above.

While I tentatively think the claim that "the most efficient solutions to problems don't seem like they involve suffering" is true if we limit ourselves to the present and the past, I think it is false once we consider the long-term future, which makes the argument break apart.

Future solutions are more efficient insofar as they overcome past limitations. In the relevant examples (enslaved humans and exploited animals), suffering itself is not the limiting factor. It is rather the physical limitations of those biological beings, relative to machines that could do a better job at their tasks.

I don't see any inevitable dependence between their suffering and these physical limitations. If human slaves and exploited animals were not sentient, this wouldn't change the fact that machines would do a better job.

Sorry for the confusion and thanks for pushing back! Helps me clarify what the claims made in this post imply and don't imply. :)

I do not mean to argue that the future will be net negative. (I even make this disclaimer twice in the post, aha.) :)

I simply argue that the "convergence between efficiency and methods that involve less suffering" argument in favor of assuming it'll be positive is unsupported.

There are many other arguments/considerations to take into account to assess the sign of the future.

Thanks!

Are you thinking about this primarily in terms of actions that autonomous advanced AI systems will take for the sake of optimisation?

Hmm... not sure. I feel like my claims are very weak and true even in future worlds without autonomous advanced AIs.


"One large driver of humanity's moral circle expansion/moral improvement has been technological progress which has reduced resource competition and allowed groups to expand concern for others' suffering without undermining themselves".

Agreed, but this is more similar to argument (A) fleshed out in this footnote, which is not the one I'm assailing in this post.

Thanks Vasco! Perhaps a nitpick, but suffering still doesn't seem to be the limiting factor per se here. If farmed animals were philosophical zombies (i.e., were not sentient but still had the exact same needs), that wouldn't change the fact that one needs to keep them in conditions that are ok enough to be able to make a profit out of them. The limiting factor is their physical needs, not their suffering itself. Do you agree?

I think the distinction is important because it suggests that suffering itself appears as a limiting factor only insofar as it is strong evidence of physical needs that are not met. And while both strongly correlate in the present, I argue that we should expect this to change.

Interesting, thanks Ben! I definitely agree that this is the crux. 

I'm sympathetic to the claim that "this algorithm would be less efficient than quicksort" and that this claim is generalizable.[1] However, if true, I think it only implies that suffering is -- by default -- inefficient as a motivation for an algorithm.

Right after making my crux claim, I reference some of Tobias Baumann's (2022a, 2022b) work, which gives some examples of how significant amounts of suffering may be instrumentally useful/required in cases such as scientific experiments where sentience plays a key role (where the suffering arises not because it is a strong motivator for an efficient algorithm, but for other reasons). Interestingly, his "incidental suffering" examples are more similar to the factory farming and human slavery examples than to the quicksort example.

  1. ^

    To be fair, it's been a while since I've read about stuff like suffering subroutines (see, e.g., Tomasik 2019) and their plausibility, and people might have raised considerations going against that claim.

Thanks, Maxime! This is indeed a relevant consideration I thought a tiny bit about, and Michael St. Jules also brought that up in a comment on my draft.

First of all, it is important to note that UCC affects the neglectedness -- and potentially also the probability -- of "late s-risks" only (i.e., those that happen far enough away from now for the UCC selection to actually have time to occur). So let's consider only these late s-risks.

We might want to differentiate between three different cases:
1. Extreme UCC (where suffering is not just ignored but ends up being valued, as in the scenario I depict in this footnote). In this case, all kinds of late s-risks seem not only more neglected but also more likely.
2. Strong UCC (where agents end up being roughly indifferent to suffering; this is the case your comment assumes, I think). In this case, while all kinds of late s-risks seem more neglected, late s-risks from conflict do indeed seem less likely. However, this doesn't seem to apply to (at least) near-misses and incidental risks.
3. Weak UCC (where agents still care about suffering but much less than we do). In this case, same as above, except perhaps for the "late s-risks from conflict" part. I don't know how weak UCC would change conflict dynamics.

The more we expect #2 rather than #1 or #3, the more your point applies, I think (with the above caveat on near-misses and incidental risks). I might well have missed something, though. It's a bit complicated.

Thanks for the comment!

Right now, in rich countries, we seem to live in an unusual period Robin Hanson (2009) calls "the Dream Time". You can survive valuing pretty much whatever you want, which is why there isn't much selection pressure on values. This likely won't go on forever, especially if humanity starts colonizing space.

(Re religion. This is anecdotal, but since you brought up this example: in the past, I think religious people would have been much less successful at spreading their values had they been more concerned about the suffering of the people they were trying to convert. The growth of religion was far from being a harm-free process.)
