All of Vidur Kapur's Comments + Replies

Hi Stephen. I’m also lacto-vegetarian. I take Vitamin D supplements (mainly for the reasons that they’re recommended for everyone) and an occasional Vitamin B complex or B3 supplement. I’ve considered taking algae-based Omega-3 supplements (in the form of DHA and EPA) but I don’t think the evidence is strong enough to justify the expense. My iron levels have consistently been fine without supplementation. I’ve found VeganHealth.org to be useful (I’d vouch for the quality of their evidence reviews). Ginny Messina is also worth reading (https://www.theveganr... (read more)

In addition to Fin's considerations and the excellent post by Jacy Anthis, I find Michael Dickens' analysis to be useful and instructive. What We Owe The Future also contains a discussion of these issues. 

1
jackchang110
1y
Thanks very much for sharing. When I read this, I feel it's a little unnatural and weird. If we discuss the long-term future, humans might face aliens, another civilization, superintelligence... Humans' personalities may change through evolution. I feel like the prediction is too subjective.

Agree that not all EAs are utilitarians (though a majority of EAs who answer community surveys do appear to be utilitarian). I was just describing why it is that people who (as you said in many of your comments) think some capacities (like the capacity to suffer) are morally relevant still, despite this, also describe themselves as philosophically committed to some form of impartiality. I think Amber’s comment also covers this nicely.

5
Habryka
1y
Just to clarify, I am a utilitarian, approximately, just not a hedonic utilitarian.

That’s a good question, and is part of what Rethink Priorities are working on in their moral weight project! A hedonistic utilitarian would say that if fulfilment of the fish’s desire brings them greater pleasure (even after correcting for the intensity of pleasure perhaps generally being lower in fish) than the fulfilment of the human’s desire, then satisfying the fish’s desire should be prioritised. The key thing is that one unit of pleasure matters equally, regardless of the species of the being experiencing it.

6
Habryka
1y
Yeah, I think there are a bunch of different ways to answer this question, and active research on it, but I feel like the answer here does indeed depend on empirical details and there is no central guiding principle that we are confident in that gives us one specific answer.  Like, I think the correct defense is to just be straightforward and say "look, I think different people are basically worth the same, since cognitive variance just isn't that high". I just don't think there is a core principle of EA that would prevent someone from believing that people who have a substantially different cognitive makeup would also deserve less or more moral consideration (though the game-theory here also often makes it so that you should still trade with them in a way that evens stuff out, though it's not guaranteed). I personally don't find hedonic utilitarianism very compelling (and I think this is true for a lot of EA), so am not super interested in valence-based approaches to answering this question, though I am still glad about the work Rethink is doing since I still think it helps me think about how to answer this question in general. 

Bentham’s view was that the ability to suffer means that we ought to give at least some moral weight to a being (their capacity to suffer determining how much weight they are given). Singer’s view, when he was a preference utilitarian, was that we should equally consider the comparable interests of all sentient beings. Every classical utilitarian will give equal weight to one unit of pleasure or one unit of suffering (taken on their own), regardless of the species, gender or race of the being experiencing the pleasure or suffering. This is a pretty mainstr... (read more)

I’m sorry to hear that you’ve been feeling this way, Linch. I’ve also been facing some of the difficulties that you describe. I’ll try to do the best I can but would welcome the input of people who are more knowledgeable than me!

In the professional work of the English Utilitarians, what stands out to me is perseverance. When Bentham’s Panopticon project (which was meant to be an improvement on the often cruel treatment of prisoners) failed to get off the ground, he moved on to other things such as education reform (advocating for an end to corporal punishm... (read more)

There has historically been some overlap between the charities that Open Phil and the Animal Welfare Fund have supported, and ACE's recommendations, which suggests that there is a degree of consensus. See also the discussion here, in which some endorse the changes that ACE has made to its methodology. 

(Crossposted from FB)

Some initial thoughts: hedonistic utilitarians, ultimately, wish to maximise pleasure. Concurrently, suffering will be eliminated. In the real world, things are a lot fuzzier, and we do have to consider pleasure/suffering tradeoffs. Because it's difficult to measure pleasure/suffering directly, preferences are used as a proxy.

But I aver that we're not very good at considering these tradeoffs. Most are framed as thought-experiments, in which we are asked to imagine two 'real-world' situations. Some people ma... (read more)

1
MichaelStJules
4y
So is the idea to ground these tradeoffs in preferences, but consider only conscious preferences about conscious experiences themselves? Furthermore, the degree of pleasantness or suffering would be determined by the strengths of these kinds of preferences (which we might hypothesize to fall on a cardinal scale). If I had just gotten out of an experience machine, I'd be extremely upset. I don't think I would actually get back into the machine, but even if I did, I think this would only be to relieve my suffering. It seems like this framing introduces a different kind of bias. If my experiences in the outside world were really horrible, I'd be motivated to leave it. If the outside world were not so horrible as to drive me to chronic depression or I could accomplish more good outside than inside, I'd stay out.

Thank you for this piece. I enjoyed reading it and I'm glad that we're seeing more people being explicit about their cause-prioritization decisions and opening up discussion on this crucially important issue.

I know that it's a weak consideration, but I hadn't, before I read this, considered the argument for the scale of values spreading being larger than the scale of AI alignment (perhaps because, as you pointed out, the numbers involved in both are huge) so thanks for bringing that up.

I'm in agreement with Michael_S that hedonium and dolorium should be... (read more)

8
Jacy
6y
That makes sense. If I were convinced hedonium/dolorium dominated to a very large degree, and that hedonium was as good as dolorium is bad, I would probably think the far future was at least moderately +EV.

Thank you for the interesting post; you provide some strong arguments for moral inclusivity.

I'm less confident that the marketing gap, if it exists, is a problem, but there may be ways to sell the more 'weird' cause areas, as you suggest. However, even when they are mentioned, people may still get the impression that EA is mostly about poverty. The other causes would have to be explained in the same depth as poverty (looking at specific charities in these cause areas as well as cost-effectiveness estimates where they exist, for instance) for the impre... (read more)

And, as Michael says, even the perception that EA is misrepresenting itself could potentially be harmful.

I agree with the characterization of EA here: it is, in my view, about doing the most good that you can do, and EA has generally defined "good" in terms of the well-being of sentient beings. It is cause-neutral.

People can disagree on whether potential beings (who would not exist if extinction occurred) have well-being (total vs. prior-existence), they can disagree on whether non-human animals have well-being, and can disagree on how much well-being a particular intervention will result in, but they don't arbitrarily discount the well-being of sen... (read more)

I don't think this gets us very far. You're making a utilitarian argument (or certainly an argument consistent with utilitarianism) in favour of not trying to be a perfect utilitarian. Paradoxically, this is what a perfect utilitarian would do given the information that they have about their own limits - they're human, as you put it. For someone such as myself who believes that utilitarianism is likely to be objectively true, therefore, I already know not to be a perfectionist.

Ultimately, Singer put it best: do the most good that you can do.

The main problem with this post, in my view, is that it's still in some places trying to re-run the election debate. The relevant question is no longer about who is a bigger risk or who will cause more net suffering out of Trump or Clinton, but about how bad Trump is on his own and what we can do to reduce the risks that arise from his Presidency.

I agree that Trump's views on Russia reduce global catastrophic risk (although his recent appointments seem to be fairly hawkish towards Russia.) However, he'll likely increase tensions in Asia, and his vie... (read more)

0
DavidNash
7y
Is it that the far right is on the rise, or that the views they held have been dropped by the centre right, and so now they have their own parties that seem larger than they used to be, but the positions they hold don't have as much public support as they did in the past?

Just a few thoughts.

Firstly, Trump's agricultural advisors seem to be very hostile to animal welfare. This may mean that we need more people working on farmed animal welfare, not less.

In terms of going into politics, the prospect of having a group of EAs, and perhaps even an EA-associated organization, doing regular, everyday politics may turn some people off from the movement (depending on your view on whether EA is net-positive or negative overall, this may be bad or good.)

While Sentience Politics, the Open Philanthropy Project and some others I may h... (read more)

Yeah, I'd say Parfit is probably the leading figure when it comes to trying to find convergence. If I understand his work correctly, he initially tried to find convergence when it came to normative ethical theories, and opted for a more zero-sum approach when it came to meta-ethics, but in the upcoming Volume Three I think he's trying to find convergence when it comes to meta-ethics too.

In terms of normative theories, I've heard that he's trying to resolve the differences between his Triple Theory (which is essentially Rule Utilitarianism) and the other th... (read more)

It's also possible that people don't even want to consider the notion that preventing human extinction is bad, or they may conflate it with negative utilitarianism when it could also be a consequence of classical utilitarianism.

For the record, I've thought about writing something about it, but I basically came to the same conclusions that you did in your blog post (I also subscribe to total, hedonistic utilitarianism and its implications, i.e. anti-speciesism, concern for wild animals, etc.).

If everyone has similar perspectives, it could be a sign that we're on the right track, but it could be that we're missing some important considerations as you say, which is why I also think more discussion of this would be useful.

I disagree that biting the bullet is "almost always a mistake". In my view, it often occurs after people have reflected on their moral intuitions more closely than they otherwise would have. Our moral intuitions can be flawed. Cognitive biases can get in the way of thinking clearly about an issue.

Scientists have shown, for instance, that for many people, their intuitive rejection of entering the Experience Machine is due to the status quo bias. If people's current lives were being lived inside an Experience Machine, 50% of people would want to st... (read more)

0
kokotajlod
8y
I completely agree with you about all the flaws and biases in our moral intuitions. And I agree that when people bite the bullet, they've usually thought about the situation more carefully than people who just go with their intuition. I'm not saying people should just go with their intuition. I'm saying that we don't have to choose between going with our initial intuitions and biting the bullet. We can keep looking for a better, more nuanced theory, which is free from bias and yet which also doesn't lead us to make dangerous simplifications and generalizations. The main thing that holds us back from this is an irrational bias in favor of simple, elegant theories. It works in physics, but we have reason to believe it won't work in ethics. (Caveat: for people who are hardcore moral realists, not just naturalists but the kind of people who think that there are extra, ontologically special moral facts--this bias is not irrational.)

I'm very interested in this sort of stuff, though a bit of the maths is beyond me at the moment!

I have a probably silly question about the EuroSPARC program: what if you're in the no man's land between high school and university, i.e. you've just left high school before the program starts?

I know of a couple of mathematically talented people who might be interested (and who would still be in high school), so I'll certainly try and contact them!

3
AnnaSalamon
8y
Folks who haven't started college yet and who are no more than 19 years old are eligible for EuroSPARC; so, yes, your person (you?) should apply :)

This essay by Brian Tomasik addresses this question further, looking at the overall impact of human activities on wild-animal suffering, and includes the effect of factory-farming in the analysis too. Whilst human impact on the environment may lead to a net reduction in wild-animal suffering (if you think that the lives of wild-animals are significantly net-negative), the people whose lives are saved by the Against Malaria Foundation also have little impact on the environment, so also have little impact on the reduction of wild-animal suffering.

2
Brian_Tomasik
8y
Thanks for the link! It's plausible that those saved from malaria have lower-than-average environmental impact, but their impact is not trivial. This section mentions some ways in which poverty might actually increase a person's environmental impact. This section discusses AMF as a potential way to reduce insect suffering. I added a paragraph specifically about Malawi because Buck mentioned that country. I'm interested in finding someone to research the net impact of AMF on insect suffering more thoroughly. :)

Thanks for the post. It has made me somewhat less confident that the meat-eater problem is a problem, though perhaps for different reasons. I still think that it is a problem overall, however. I'll just put my initial thoughts below.

It’s also plausible that interventions that raise incomes, like deworming, have a lower impact on meat consumption because they don’t raise the overall number of humans that would be eating meat over their entire lifetime.

The effect of raising income itself will still tend to increase meat consumption, though. There w... (read more)

1
scottweathers
8y
Thanks, Vidur! I didn't make it totally clear, but I don't think that individuals should split their donations. The main argument that I'm trying to make is that the distribution of our donations across the EA movement is heavily human-centered, and that's a mistake based on expected value. I didn't want to dive too deep into this but that's the claim I was trying to make. Broadly speaking, I'd like to see a much higher proportion of our dollars go to animal organizations. I could see this being fixed by a decent-sized group of people moving their donations over or a major organization like OPP fixing it.

I agree - it would be bizarre to selectively criticise EA on this basis when our entire healthcare system is predicated on ethical assumptions.

Similarly, we could ask "why satisfy my own preferences?", but seeing as we just do, we have to take it as a given. I think that the argument outlined in this post takes a similar position: we just do value certain things, and EA is simply the logical extension of our valuing these things.

0
AlexGab
8y
You don't really have a choice but to satisfy your own preferences. Suppose you decide to stop satisfying your preferences. Well, you've just satisfied your preference to stop satisfying your preferences. So the answer to the question is that it's logically impossible not to. Sometimes your preferences will include helping others, and sometimes they won't. In either case, you're satisfying your preference when you act on it.
0
RobertFarq
8y
"why satisfy my own preferences?" That's the linchpin. You don't have to. You can be utterly incapable of actually following through on what you've deemed is a logical behaviour, yet still comment on what is objectively right or wrong. (this goes back to your original comment too) There are millions of obese people failing to immediately start and follow through on diets and exercise regimes today. This is failing to satisfy their preferences - they have an interest in not dying early, which being obese reliably correlates with. It ostensibly looks like they don't value health and longevity on the basis of their outward behaviour. This doesn't make the objectivity of health science any less real. If you do want to avoid premature death and if you do value bodily nourishment, then their approach is wrong. You can absolutely fail to satisfy your own preferences. Asking the further questions of, "why satisfy my own preferences?", or "why act in a logically consistent fashion?", just drifts us into the realm of radical scepticism. This is an utterly unhelpful position to hold - you can go nowhere from there. "Why trust my sense data are sometimes veridical?" ...you don't have to, but you'd be mad not to.

I agree with Squark - it's only when we've already decided that, say, saving lives is important that we create health systems to do just that.

But, I agree with the point that EA is not doing anything different to society as a whole - particularly healthcare - in terms of its philosophical assumptions. It would be fairly inconsistent to selectively look for the philosophical assumptions that underlie EA and not healthcare systems.

More generally, I approach morality in a similar way: sentient beings aim to satisfy their own preferences. I can't suddenly dec... (read more)

0
MichaelDello
8y
"it's only when we've already decided that, say, saving lives is important that we create health systems to do just that." But no one pays any credence to the few who argue that we shouldn't value saving lives, we don't even shrug and say 'that's their opinion, who am I to say that's wrong?', we just say that they are wrong. Why should ethics be any different?

I'm in agreement with you on the meat consumption issue: morality doesn't begin and end with meat consumption, but it's better to donate lots to effective animal charities and be vegan, as opposed to offsetting one's meat consumption or having fancy vegan meals and being vegan. This seems to be the standard utilitarian stance. That's without taking into account the benefits of being vegan in terms of flow-through effects too, which have been discussed on this forum before. Personally, after having become essentially vegan, my family has had to reduce its m... (read more)

I approach utilitarianism more from a framework that, logically, I should be maximising the preference-satisfaction of others who exist or will exist, if I am doing the same for myself (which it is impossible not to do). So, in a sense, I don't believe that preference-satisfaction is good in itself, meaning that there's no obligation to make satisfied preferrers, just preferrers satisfied. I still assign some weight to the total view, though.

Interesting piece. I too reject the average view, but I'm currently in favour of prior-existence preference utilitarianism (the preferences of currently existing beings and beings who will exist in the future matter, but extinction, say, isn't bad because it prevents satisfied people from coming into existence) over the total view. I find it to be quite implausible that people can be harmed by not coming into existence, although I'm aware that this leads to an asymmetry, namely that we're not obligated to bring satisfied beings into existence but we're obl... (read more)

2
Chris Leong
8y
You aren't harmed by not being brought into existence, but there is an opportunity cost, that is, if you would have lived a life worth living, that utility is lost.

Thank you for this - I found it to be very useful. While I recognise the PR issue, I think it's also very important to explore all areas when it comes to cause-prioritization.

Could it not plausibly be the case that supporting rigorous research explicitly into how best to reduce wild-animal suffering is robustly net positive? I say this because whenever I'm making cause-prioritization considerations, the concern that always dominates seems to be wild-animal suffering and the effect that intervention x (whether it's global poverty or domesticated animal welfare) will have on it.

General promotion of anti-speciesism, with equal emphasis put on wild-animals, would also seem to be robustly net positive, although this general promotio... (read more)

1
MichaelDickens
8y
Suppose we invest more into researching wild animal suffering. We might become somewhat confident that an intervention is valuable and then implement it, but this intervention turns out to be extremely harmful. WAS is sufficiently muddy that interventions might often have the opposite of the desired effect. Or perhaps research leads us to conclude that we need to halt space exploration to prevent people from spreading WAS throughout the galaxy, but in fact it would be beneficial to have more wild animals, or we would terraform new planets in a way that doesn't cause WAS. Or, more likely, the research will just accomplish nothing.

I think this is an excellent post. The point about unnecessary terminology from philosophy and economics is certainly one I've thought about before, and I like the suggestion about following Orwell's rules.

On the use of the term rational, I think it can be used in different ways. If we're celebrating Effective Altruism as a movement which proceeds from the assumption that reason and evidence should be used to put our moral beliefs into action, then I think the use of the term is fine, and indeed is one of the movement's strong points which will attract peo... (read more)

Interesting test. I scored quite low in terms of political bias, but there's certainly a temptation to correct or over-correct for your biases when you're finding it very hard to choose between the options.

A discussion of moral philosophy may be important not only because morality is integral to EA in general, but because it illustrates how the movement is suitable for people with wildly different views on morality, from utilitarians/consequentialists to deontologists to those who take a religious view.

I'd say that this video of Peter Singer is quite a good, short overview of cause prioritization.

Very detailed!

I'm currently in between lacto-ovo vegetarianism and veganism in that I'm a lacto-vegetarian. This is only because I don't currently have a regular income (I'm still in high school), and attempting to replace dairy in particular has been quite an inconvenience.

So, my experience is that it is a lot less inconvenient to give up eggs than to give up dairy products, so perhaps you could try lacto-vegetarianism, but seeing as you are willing to go "95% vegan" and potentially "100% vegan", those options are probably better in consequentialist terms overall.

2
Jess_Whittlestone
9y
Yeah, I think lacto-vegetarianism is probably 95% of the way in terms of impact on animal suffering anyway (or even more.) As I said above, for me the main reason for cutting out dairy too is that I think if I eat dairy I might be more likely to slip into eating eggs too down the line. But it's possible I could just protect against that by setting more solid rules in place etc.

I've seen criticisms of effective altruism in which effective altruists have been criticised for supposedly donating a large proportion of their income simply to improve their image and make themselves look better. On that basis, it could be argued that EA should have a closer relationship to Maximum Selflessness, but even then, people could still accuse EAs of "being selfless" in order to improve their image.

On the other hand, if EA were centred around the concept of Maximum Selflessness, it could be perceived as too demanding. But, if the self... (read more)

Thank you for giving a realistic account of what it's like to be a doctor.

I'm considering studying medicine, so this was very helpful!