mlsbt

122 karma · Joined Aug 2021

Posts (1)

49 · mlsbt · 3y ago · 11m read

Comments (25)

By the most recent World Bank and FAO data, as well as the 2017 FAO data you link to, Greece isn't close to being the largest producer of fish in the EU, nor the 15th-largest producer in the world. Correct me if I'm wrong, but I think the correct claim is that Greece farms the greatest number of fish in the EU. Fish production statistics are generally reported by total weight rather than by number of fish, and I see how the latter is more relevant to welfare concerns. However, I think your phrasing is a bit misleading, as Greece's fish industry is quite unusual for the EU: it farms a huge number of low-weight fish and has a relatively small wild-catch industry. For most (all?) other European countries, total national fish production (by weight and by number) is still dominated by fishing-fleet capture rather than aquaculture. I'd be curious to know how your model weights the welfare impact of adopting humane slaughter methods against that of improving living conditions on farms. If the latter is a bigger deal, I see how Greece could be a high-leverage country to start with, especially considering the growing share of aquaculture in fish production worldwide.

Great post! Quick note: clicking on the carets takes me back to the same section rather than to the longer intervention descriptions under 'List of prioritized interventions'.

In my post I said there's an apparent symmetry between M and D, so I'm not arguing for choosing D but instead that we are confused and should be uncertain.

You're right, I misrepresented your point here. This doesn't affect the broader idea that the apparent symmetry only exists if you have strange ethical intuitions, which are left undefended.

Also, historically, people imagined all kinds of different utopias, based on their religions or ideologies. So I'm not sure we can derive strong conclusions about human values based on these imaginations anyway.

I stand by my claim that 'loving non-kin' is a stable and fundamental human value, that over history almost all humans would have included it (at least directionally) in their personal utopias, and that it only grows stronger upon reflection. Of course there's variation, but when ~all of religion and literature have been saying one thing, you can look past the outliers.

Considering your own argument, I don't see a reason to care how altruistic other people are (including people in imagined utopias), except as a means to an end. That is, if being more altruistic helps people avoid prisoners' dilemmas and tragedy of the commons, or increases overall welfare in other ways, then I'm all for that, but ultimately my own altruism values people's welfare, not their values, so if they were not very altruistic, but say there was a superintelligent AI in the utopia that made it so that they had the same quality of life, then why should I care either way? Why should or do others care, if they do? (If it's just raw unexplained intuitions, then I'm not sure we should put much stock in them.)

I'm not explaining myself well. What I'm trying to say is that the symmetry between dividing and multiplying is superficial - both are consistent, but one also fulfills a deep human value (which I'm trying to argue for with the utopia example), whereas the other ethically 'allows' the circumvention of this value. I'm not saying that this value of loving strangers, or being altruistic in and of itself, is fundamental to the project of doing good - on that we agree.

I think most people would choose S because brain modification is weird and scary. That's an intuition that's irrelevant to the purpose of the hypothetical but strong enough to make the whole scenario less helpful. I'm very confident that ~0/100 people would choose D, which is what you're arguing for! Furthermore, if you added a weaker M that changed your emotions so that you simply cared much more about random strangers than you currently do, I think many (if not most) people - especially among EAs - would choose that. Doubly so for idealized versions of themselves, the people they would want to be making the choice. So again, you are arguing for quite strange intuitions, and I think the brain modification scenario reinforces rather than undermines that claim.

To your second point, we're lucky that EA cause areas are not prisoner's dilemmas! Everyday acts of altruism aren't prisoner's dilemmas either. By arguing that most people's imagined inhabitants of utopia 'shut up and multiply' rather than divide, I'm just saying that these utopians care *a lot* about strangers, and therefore that caring about strangers is something that regular people hold dear as an important human value, even though they often fail at it. Introducing the dynamics of an adversarial game to this broad truth is a disanalogy.

When I say “be consistent and care about individual strangers”, I mean shut up and multiply. There’s no contradiction. It’s caring about individual strangers taken to the extreme where you care about everyone equally. If you care about logical consistency, that works just as well as shut up and divide.

“Shut Up and Divide” boils down to “actually, maybe you shouldn’t care about individual strangers, because that’s more logically consistent (unless you multiply, in which case it’s equally consistent)”. But caring is a higher and more human virtue than being consistent, especially since there are two options here: be consistent and care about individual strangers, or just be consistent. You only get symmetry if adopting ‘can now ethically ignore the suffering of strangers’ as a moral principle is considered a win for the divide side. That’s the argument that would really shake the foundations of EA.

Why should we derive our values from our native emotional responses to seeing individual suffering, and not from the equally human paucity of response at seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?

So actually we have three choices: divide, multiply, or stay scope insensitive. In an ideal world populated by good and rational people, those people would probably still care relatively more about their families, but no one would be indifferent to the suffering of the far away. Loving and empathizing with strangers is widely agreed to be a vital and beautiful part of what makes us human, despite our imperfections. The fact that we have this particular cognitive bias of scope insensitivity may be fundamentally human in some sense, but it’s not really part of what makes us human. Nobody’s calling scope-sensitive people sociopaths. Nobody’s personal idea of utopia elevates scope insensitivity to the level of ‘love others’.

Likewise, very few would imagine or prefer this idealized world to be filled with ‘divide’ people rather than ‘multiply’ people. Because:

The weird thing is that both of these emotional self-modification strategies seem to have worked, at least to a great extent. Eliezer has devoted his life to improving the lot of humanity, and I've managed to pass up news and discussions about Amanda Knox without a second thought.

Most people’s imagined inhabitants of utopia fit the former profile much more closely. So I think that “Shut Up and Divide” only challenges the Drowning Child argument insofar as you have very strange ethical intuitions, not shared by many. To really attack this foundation, you’d have to argue for why these common intuitions about good and bad are wrong, not just that they’re prone to inconsistencies when held by normal humans (as every set of ethical principles is).

I’m using ‘friend group’ to mean something like a relatively small community with tight social ties and a large and diverse set of semi-reliable identifiers.

EA attracts people who want to do large amounts of good. Weighted by engagement, the EA community is made up of people for whom this initial interest in EA was reinforced socially or financially, often both. Many EAs believe that AI alignment is an extremely difficult technical problem, on the scale of questions motivating major research programs in math and physics. My claim is that such a problem won’t be directly solved by this relatively tiny subset of technically-inclined do-gooders, nice people who like meet-ups and have suspiciously convergent interests outside of AI stuff.

EA is a friend group; algebraic geometers are not. Importantly, even if you don’t believe alignment is that difficult, we’d still solve it more quickly without tacking on this whole social framework. It worries me that alignment research isn’t catching on in mainstream academia (the way climate change research did); this seems to indicate that some factor in the post above (like groupthink) is preventing EAs from either constructing a widely compelling argument for AI safety or making it compelling to outsiders who aren’t into the whole EA thing.

Basically, we shouldn’t tie causes to the EA community - which is a great community - unless we have a really good reason.

This type of piece is what the Criticism contest was designed for, and I hope it gets a lot of attention and discussion. EA should have the courage of its convictions; global poverty and AI alignment aren't going to be solved by a friend group, let alone the same friend group.

I think the wording of your options is a bit misleading. It's valuable to publish criticism of any topic that's taking up non-trivial EA resources, regardless of the topic's true worth - otherwise we might be wasting bednet money. The important question is whether infinite ethics fits this category (I'm unsure, but my best guess is no right now and maybe yes in a few years). Whether or not something is a "serious problem" or "deserves criticism", at least for me, seems to point to a substantively different claim - more like "I agree/disagree with the people who think infinite ethics is a valuable research field". That's not the relevant question.

That makes sense! I was interpreting your post and comment as a bit more categorical than was probably intended. Looking forward to your post.
