All of mlsbt's Comments + Replies

By the most recent World Bank and FAO data, as well as the 2017 FAO data you link to, Greece isn't close to being the largest producer of fish in the EU, nor the 15th largest producer in the world. Correct me if I'm wrong, but I think the correct claim is that Greece farms the greatest number of fish in the EU. Fish production statistics generally go by total weight rather than fish number, and I see how the latter is more relevant to welfare concerns. However, I think your phrasing is a bit misleading, as Greece's fish industry is quite unusual for the EU. It f... (read more)

8
vicky_cox
8mo
Hi – thanks for your comment.

You're correct that we are prioritizing the greatest number of fish on farms rather than the total tonnage produced. Our claim that Greece is the largest producer in the EU (and the 15th largest in the world) is based on estimates (using the FAO data linked) of the number of fish alive on farms in each country at any given time. This depends on the total number farmed as well as their lifespans and expected mortality rates on farms pre-slaughter.

We generally prioritize farmed fish, as you can impact their whole lives, whereas for wild-caught fish you can only impact the end of their lives. However, if the new charity chooses to focus on humane slaughter, then there could be scope to focus on both farmed and wild-caught fish. Note, however, that protections for wild-caught fish, to the best of my knowledge, don't yet exist in legislation anywhere, so we might not expect this to be very tractable. Overall, at least in the short term, we would recommend that a new org focus on farmed fish and try to push for the inclusion of stocking density and water quality parameters, as well as humane slaughter, in any ask where possible. Still, we could see the new org choosing to focus only on slaughter, at least at first, as this seems to be what the rest of the movement is likely to focus on, and there could be benefits from all being on the same page and asking for the same thing.
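A rough steady-state sketch of the kind of estimate described above (the comment doesn't spell out the formula, so this particular form is an assumption rather than the commenter's actual method):

$$N_{\text{alive}} \approx N_{\text{stocked per year}} \times \bar{T}_{\text{on farm}}$$

where $\bar{T}_{\text{on farm}}$ is the average time, in years, a fish spends on the farm before slaughter or pre-slaughter death. Holding tonnage fixed, species that are smaller-bodied (more individuals per tonne) or longer-lived (larger $\bar{T}$) will have far more individuals alive at any given moment, which is how a country can rank first by fish numbers without ranking first by production weight.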

Great post! Quick note: clicking on the carets takes me to that same section rather than the longer intervention descriptions under 'List of prioritized interventions'.

1
taoburga
1y
Thanks for the feedback! I don’t have access to my computer right now but I’ll try to fix that when I do.

In my post I said there's an apparent symmetry between M and D, so I'm not arguing for choosing D but instead that we are confused and should be uncertain.

You're right, I misrepresented your point here. This doesn't affect the broader idea that the apparent symmetry only exists if you have strange ethical intuitions, which are left undefended.

Also, historically, people imagined all kinds of different utopias, based on their religions or ideologies. So I'm not sure we can derive strong conclusions about human values based on these imaginations anyway.

I stan... (read more)

I think most people would choose S because brain modification is weird and scary. This is an intuition that's irrelevant to the purpose of the hypothetical but is strong enough to make the whole scenario less helpful. I'm very confident that ~0/100 people would choose D, which is what you're arguing for! Furthermore, if you added a weaker M that changed your emotions so that you simply care much more about random strangers than you currently do, I think many (if not most) people - especially among EAs - would choose that. Doubly so for idealized versions of t... (read more)

2
Wei Dai
2y
In my post I said there's an apparent symmetry between M and D, so I'm not arguing for choosing D but instead that we are confused and should be uncertain.

Ok, I was confused because I wasn't expecting how you're using ‘shut up and multiply’. At this point I think you have a different argument for caring a lot about strangers which is different from Peter Singer's. Considering your own argument, I don't see a reason to care how altruistic other people are (including people in imagined utopias), except as a means to an end. That is, if being more altruistic helps people avoid prisoners' dilemmas and tragedy of the commons, or increases overall welfare in other ways, then I'm all for that, but ultimately my own altruism values people's welfare, not their values, so if they were not very altruistic, but say there was a superintelligent AI in the utopia that made it so that they had the same quality of life, then why should I care either way? Why should or do others care, if they do? (If it's just raw unexplained intuitions, then I'm not sure we should put much stock in them.)

Also, historically, people imagined all kinds of different utopias, based on their religions or ideologies. So I'm not sure we can derive strong conclusions about human values based on these imaginations anyway.

When I say “be consistent and care about individual strangers”, I mean shut up and multiply. There’s no contradiction. It’s caring about individual strangers taken to the extreme where you care about everyone equally. If you care about logical consistency, that works as well as shut up and divide.

“Shut Up and Divide” boils down to “actually, you maybe shouldn’t care about individual strangers, because that’s more logically consistent (unless you multiply, in which case it’s equally consistent)”. But caring is a higher and more human virtue than being consistent, especially since there are two options here: be consistent and care about individual strangers, or just be consistent. You only get symmetry if the adoption of ‘can now ethically ignore suffering of strangers’ as a moral principle is considered a win for the divide side. That’s the argument... (read more)

3
Wei Dai
2y
Suppose I invented a brain modification machine and asked 100 random people to choose between:

* M(ultiply): change your emotions so that you care much more in aggregate about humanity than your friends, family, and self
* D(ivide): change your emotions so that you care much less about random strangers that you happen to come across than you currently do
* S(cope insensitive): don't change anything.

Would most of them "intuitively" really choose M? From this, it seems that you're approaching the question differently, analogous to asking someone if they would modify everyone's brain so that everyone cares much more in aggregate about humanity (thereby establishing this utopia). But this is like the difference between unilaterally playing Cooperate in Prisoners' Dilemma, versus somehow forcing both players to play Cooperate. Asking EAs or potential EAs to care much more about humanity than they used to, and not conditional on everyone else doing the same, based on your argument, is like asking someone to unilaterally play Cooperate, while using the argument, "Wouldn't you like to live in a utopia where everyone plays Cooperate?"
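(For readers less familiar with the analogy: a standard Prisoners' Dilemma payoff table, with illustrative numbers not taken from the comment, looks like this:

$$\begin{array}{c|cc} & \text{Cooperate} & \text{Defect} \\ \hline \text{Cooperate} & (3,3) & (0,5) \\ \text{Defect} & (5,0) & (1,1) \end{array}$$

Unilaterally cooperating against a defector yields the worst individual payoff, 0, even though mutual cooperation, 3 each, beats mutual defection, 1 each; hence the force of the distinction between asking one player to cooperate and somehow binding both.)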
1
Daniel Kirmani
2y
This reasoning seems confused. Caring more about certain individuals than others is a totally valid utility function that you can have. You can't especially care about individual people while simultaneously caring about everyone equally. You just can't. "Logically consistent" means that you don't claim to do both of these mutually exclusive things at once.

I’m using ‘friend group’ as something like a relatively small community with tight social ties and a large and diverse set of semi-reliable identifiers.

EA attracts people who want to do large amounts of good. Weighted by engagement, the EA community is made up of people for whom this initial interest in EA was reinforced socially or financially, often both. Many EAs believe that AI alignment is an extremely difficult technical problem, on the scale of questions motivating major research programs in math and physics. My claim is that such a problem won’t be d... (read more)

This type of piece is what the Criticism contest was designed for, and I hope it gets a lot of attention and discussion. EA should have the courage of its convictions; global poverty and AI alignment aren't going to be solved by a friend group, let alone the same friend group.

1
Coafos
2y
Could you describe in other words what you mean by "friend group"? A group formed around hiking, tabletop games, or some fanfic may not solve AI (ok, the fanfic part might), but friends with a common interest in ships and trains probably have an above-average shot at solving global logistics problems.

I think the wording of your options is a bit misleading. It's valuable to publish your criticism of any topic that's taking up non-trivial EA resources, regardless of its true worth as a topic - otherwise we might be wasting bednet money. The important question is whether or not infinite ethics fits this category (I'm unsure, but my best guess is no right now and maybe yes in a few years). Whether or not something is a "serious problem" or "deserves criticism", at least for me, seems to point to a substantively different claim. More like, "I agree/disagree with the people who think infinite ethics is a valuable research field". That's not the relevant question.

That makes sense! I was interpreting your post and comment as a bit more categorical than was probably intended. Looking forward to your post.

I agree that your (excellent) analysis shows that the welfare increase is dominated by lifting the bottom half of the income distribution. I agree that this welfare effect is what we want. Pritchett's argument is linked to yours because he claims the only (and therefore best) way to cause this effect is national development. He writes: "all plausible, general, measures of the basics of human material wellbeing [including headcount poverty] will have a strong, non-linear, empirically sufficient and empirically necessary relationship to GDPPC." (Here non-lin... (read more)

8
Karthik Tadepalli
2y
I'm not really interested in dismissing growth as a cause area. (I am annoyed at how little EAs mechanize it beyond "advocate for policies --> ??? --> growth", but I'm going to write that up soon!) I wrote this because I think people who advocate for growth largely ignore inequality and should discount growth heavily because of inequality. If growth still beats targeted interventions after that heavy discounting, then so be it.

I'm confused how this squares with Lant Pritchett's observation that variation in headcount poverty rates across nations, regardless of where you set the poverty line, is completely accounted for by variation in the median of the distribution of consumption expenditures.

0
[comment deleted]
2y
3
Karthik Tadepalli
2y
Pritchett's argument is about the correlation between average income and poverty rates. My argument is about the welfare that people experience from any given level of growth. I'm claiming that conventional evaluations of growth overestimate the value of growth because they weight income growth of middle-income and rich people too heavily. Once you adjust for that, the population welfare from economic growth is now driven mostly by increases in incomes for poor people, and it is much lower than before (90% lower).

If you wanted to value growth solely based on its ability to reduce poverty, an isoelastic utility function does that as well. In the spreadsheet calculations I did, the isoelastic utility penalizes inequality less (24% vs 36%) because the bottom 50%'s income growth of 50% is almost as good on its own as the whole population's income growing 90%.

Separately, I don't interpret Pritchett's observation as meaning "and therefore the best way to minimize poverty is to maximize median consumption". That doesn't follow at all from a cross-country correlation. For one thing, correlation is not causation, and this correlation does not prove that increasing median consumption will decrease poverty. For another thing, we have to consider the costs as well: increasing median consumption through growth could be much more expensive than giving all that money to poor people directly.
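For reference, the isoelastic (CRRA) utility function mentioned here has the standard textbook form (the comment doesn't specify the exact parameterization used in the spreadsheet, so take this as the generic version, not the commenter's own):

$$u(c) = \begin{cases} \dfrac{c^{1-\eta}-1}{1-\eta}, & \eta \neq 1 \\ \ln c, & \eta = 1 \end{cases}$$

Higher values of the curvature parameter $\eta$ make marginal utility fall off faster with income, so income gains to the poor are weighted more heavily relative to identical gains for the rich; that is the sense in which such a function "penalizes inequality."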

All ethical arguments are based on intuition, and here this one is doing a lot of work: "we tend to underestimate the quality of lives barely worth living". To me this is the important crux because the rest of the argument is well-trodden. Yes, moral philosophy is hard and there are no obvious unproblematic answers, and yes, small numbers add up. Tännsjö, Zapffe, Metzinger, and Benatar play this weird trick where they introspectively set an arbitrary line that separates net-negative and net-positive experience, extrapolate it to the rest of humanity, and b... (read more)

I didn't call for a ton more analysis; I pointed out that the post largely relies on vibes. There's a difference.

I don’t think asymmetric burden of proof applies when one side is making a positive claim against the current weight of evidence. But I fully agree that more research would be worthwhile.

This is a great post and the most passionate defense I've seen of something like 'improving institutional decision-making', but broader, being an underrated cause area. I'm sympathetic to your ideas on the importance of good leadership, and on the lack of it (along with low-trust, low-coordination environments more generally) as a plausible root cause behind many of the problems EAs care about most. However, I don't think this post has the evidence to support your key conclusions, beyond the general intuition that leadership is important.

Some of your thoughts:

  • If
... (read more)
-51
Dem0sthenes
2y
3
Peter Elam
2y
Thank you for the reply Martin!! And I completely agree that I made some large claims without sufficient evidence. That's primarily because I got feedback that the post was very long as-is, and I made a decision not to flesh out the leadership part (which could be a very long post of its own).

I just want to be clear that I actually don't want EA to make any significant pivot. I do think that leadership/governance is not discussed by the community to the level of its importance, but I don't know if corruption/poor governance is a tractable problem for EA (maybe it is, I genuinely don't know). My main recommendation, and what I'm fundamentally arguing for, is that EA become a bigger-tent organization that builds ecosystems of altruistic leaders and builders, and that engages with the larger nonprofit community in order to systematically improve it.

Totally agree. Which is why I recommend a series of (relatively) small-scale ecosystem-building experiments to learn from. As I say, this could be done at low cost, but it does represent a shift from the current strategies that I've seen. I think a lot of these experiments would fail, but the ones that didn't could be quite impactful and could yield some very important insights. But I'm not suggesting a fundamental EA pivot in funding priorities or anything like that.

In terms of the governance/corruption stuff, I would just say that the link between good governance and desirable outcomes is a very strong one, and that counterexamples are more an exception to the rule (typically places that are extraordinarily gifted with natural resources, like Kuwait). There is of course a lot of evidence to back that up, but here is one piece (Human Development Index vs. Corruption Perception Index). I've heard many people say that the Chinese economic miracle is the largest poverty reduction program in history. That was set in motion (I think pretty much uncontroversially) by a change in leadership from Mao to Deng. Singapore's
1
james.lucassen
2y
Agree that the impactfulness of working on better government is an important claim, and one you don't provide much evidence for. In the interest of avoiding an asymmetric burden of proof, I want to note that I personally don't have strong evidence against this claim either. I would love to see it further investigated and/or tried out more.

I think it’s usually okay for an issue-based analysis of the medium-term future to disregard relatively unlikely (though still relevant!) AI / x-risk scenarios. By relatively unlikely, I just mean significantly less likely than business-as-usual, within the particular time frame we're thinking about. As you said, if the world becomes unrecognizably different in this time frame, factory farming probably stops being a major issue and this analysis is less important. But if it doesn’t, or in the potentially very long time before it does, we won’t gain very mu... (read more)

3
saulius
2y
Hmm, maybe you are right. Maybe we can only predict the business-as-usual scenario of humanity, where there is economic stagnation, with enough clarity to make useful conclusions from those predictions. I guess my only point then is that medium-term strategy like this is a bit less important because the future will probably not be business-as-usual for very long. Well, we could also think about which scenarios lead to the most moral circle expansion for people who might be making decisions impacting the far future. So e.g., maybe expansion of animal advocacy to developing countries is less important because of this consideration? I don't know how strong this consideration is, though, because I don't know how decision-making might look in the future, but maybe nobody does. I guess doing many different things (which is what the author suggests) can also be good to prepare for future scenarios we can’t predict.

That’s a good point, at my level thinking about the details of lifetime impact between two good paths might be almost completely intractable. I don’t remember where I first saw that specific idea, it seems like a pretty natural endpoint to the whole EA mindset. And I’ll check out that book, it’s been recommended to me before.

This is a great post and I think this type of thinking is useful for someone who’s specifically debating between working at / founding a small EA organization (that doesn’t have high status outside EA) vs a non-EA organization (or like, Open Phil) early in their career. Ultimately I don’t think it’s that relevant (though still valuable for other reasons) when making career decisions outside this scope, because I don’t think that conflating the EA mission and community is valid. The EA mission is just to do the most good possible; whether or not the communi... (read more)

4
SebastianSchmidt
2y
Excellent comment. I'm mainly considering the first set of options that you're pointing to, which means that the mission and community are pretty closely connected. I'm curious, where did you get the "lifetime impact mindset" from? It seemed original to a small group of people, so I'm happy that it's used more widely. With that said, very early on I think it's more useful to think in terms of experiments, heuristics, and (maybe) a decade hence, because early on most have a lot of experience to gather about themselves and the world (although this can still be done within the larger frame of lifetime impact). But I'm starting to move away from early career and have more data and conviction in personal fit, so I can make stronger decisions. I can also recommend the podcast with Holden Karnofsky and the book Range by David Epstein.

That Wired article is fantastic. I see this threshold of 5 microns all over the place and it turns out to be completely false and based on a historical accident. It's crazy how once a couple authorities define the official knowledge (in this case, the first few scientists and public health bodies to look at Ward's paper), it can last for generations with zero critical engagement and cause maybe thousands of deaths.

I'm confused about the distinction between fomite and droplet transmission. Is droplet transmission a term reserved for all non-inhalation respi... (read more)

-2
Florin
2y
Don't you mean millions of deaths? From what I've read, fomite transmission must involve surface touching, whereas droplet transmission must involve droplets, which are expelled by coughing or sneezing, directly landing (like a bullet) in your mouth, nose, or eyes without any extra contact or touching. These methods of transmission seem so implausible as major causes of spread (how many people actually sneeze or cough directly in someone's face?) that it's hard to believe that no one seems to have performed definitive experiments to test these ideas for many decades. On the other hand, even seemingly definitive experiments (like the rhinovirus study) don't seem able to shift expert opinion. In the case of rhinovirus, maybe one experiment isn't enough, but then the question is why no one seems to have been interested in replicating it.

They contradict each other in the sense that your full theory, since it includes the particular consequence that vaporization is chill, is, I think, not something anyone but a small minority would be fine to live with. Quantum mechanics and atheism impose no such demands. Calling this idea "fine to live with" only works when you're just going about your daily life, ignoring the vaporization part. "Fine to live with" has to include every consequence, not just the ones that are indeed fine to live with. I interpreted the second quote as arguing ... (read more)

Great post. #9 is interesting because the inverse might also be true, making your idea even stronger: maybe a great thing you can do for the short term is to make the long term go well. X-risk interventions naturally overlap with maintaining societal stability, because 1) a rational global order founded in peace and mutual understanding, which relatively speaking we have today more than ever before, reduces the probability of global catastrophes; and less convincingly 2) a catastrophe that nevertheless doesn’t kill everyone would indefinitely set the remai... (read more)

I think this is pretty strong evidence that Holden and Parfit are p-zombies :)

If you vaporized me and created a copy of me somewhere else, that would just be totally fine. I would think of it as teleporting. It'd be chill. 

...

If that's right, "constant replacement" could join a number of other ideas that feel so radically alien (for many) that they must be "impossible to live with," but actually are just fine to live with. (E.g., atheism; physicalism; weird things about physics. I think many proponents of these views would characterize them as having fairly normal day-to-day implications while handling some otherwise confusing

... (read more)
2
Holden Karnofsky
2y
Both parts you quoted are saying that the notion of personal identity I'm describing is (or at least can be) "fine to live with." You might disagree with this, but I'm not following where the contradiction is between the two.

What I meant was to try imagining that you disappear every second and are replaced by someone similar, and try imagining that over the course of a full week. (I think getting shot is adding distraction here - I don't think anyone wants someone they care about to experience getting shot.)

I don't find it obvious that there's something meaningful or important about the "connected conscious experience." If I imagine a future person with my personality and memories, it's not clear to me that this person lacks anything that "Holden a moment from now" has.

I don't think death is like sleeping forever, I think it's like simply not existing at all. In a particular, important sense, I think the person I am at this moment will no longer exist after it.

Yea, WBE risk seems relatively neglected, maybe because of the really high expectations for AI research in this community. The only article I know of that talks about it is this paper by Anders Sandberg from FHI. He makes the interesting point that incentives similar to those that allow animal testing in today's world could easily lead to WBE suffering. In terms of preventing suffering, his main takeaway is:

Principle of assuming the most (PAM): Assume that any emulated system could have the same mental properties as the original system and treat it correspondingly.

T... (read more)