All of Vidur_Kapur's Comments + Replies

What are the best arguments for an exclusively hedonistic view of value?

(Crossposted from FB)

Some initial thoughts: hedonistic utilitarians, ultimately, wish to maximise pleasure. Concurrently, suffering will be eliminated. In the real world, things are a lot fuzzier, and we do have to consider pleasure/suffering tradeoffs. Because it's difficult to measure pleasure/suffering directly, preferences are used as a proxy.

But I aver that we're not very good at considering these tradeoffs. Most are framed as thought-experiments, in which we are asked to imagine two 'real-world' situations. Some people ma... (read more)

[1] MichaelStJules (2y): So is the idea to ground these tradeoffs in preferences, but consider only conscious preferences about conscious experiences themselves? Furthermore, the degree of pleasantness or suffering would be determined by the strengths of these kinds of preferences (which we might hypothesize to fall on a cardinal scale). If I had just gotten out of an experience machine, I'd be extremely upset. I don't think I would actually get back into the machine, but even if I did, I think this would only be to relieve my suffering. It seems like this framing introduces a different kind of bias. If my experiences in the outside world were really horrible, I'd be motivated to leave it. If the outside world were not so horrible as to drive me to chronic depression or I could accomplish more good outside than inside, I'd stay out.
Why I prioritize moral circle expansion over artificial intelligence alignment

Thank you for this piece. I enjoyed reading it and I'm glad that we're seeing more people being explicit about their cause-prioritization decisions and opening up discussion on this crucially important issue.

I know that it's a weak consideration, but I hadn't, before I read this, considered the argument for the scale of values spreading being larger than the scale of AI alignment (perhaps because, as you pointed out, the numbers involved in both are huge) so thanks for bringing that up.

I'm in agreement with Michael_S that hedonium and dolorium should be... (read more)

[3] Jacy_Reese (4y): That makes sense. If I were convinced hedonium/dolorium dominated to a very large degree, and that hedonium was as good as dolorium is bad, I would probably think the far future was at least moderately +EV.
The marketing gap and a plea for moral inclusivity

Thank you for the interesting post; you provide some strong arguments for moral inclusivity.

I'm less confident that the marketing gap, if it exists, is a problem, but there may be ways to sell the more 'weird' cause areas, as you suggest. However, even when they are mentioned, people may still get the impression that EA is mostly about poverty. The other causes would have to be explained in the same depth as poverty (looking at specific charities in these cause areas as well as cost-effectiveness estimates where they exist, for instance) for the impre... (read more)

The marketing gap and a plea for moral inclusivity

And, as Michael says, even the perception that EA is misrepresenting itself could potentially be harmful.

Why I left EA

I agree with the characterization of EA here: it is, in my view, about doing the most good that you can do, and EA has generally defined "good" in terms of the well-being of sentient beings. It is cause-neutral.

People can disagree on whether potential beings (who would not exist if extinction occurred) have well-being (total vs. prior-existence), they can disagree on whether non-human animals have well-being, and can disagree on how much well-being a particular intervention will result in, but they don't arbitrarily discount the well-being of sen... (read more)

EAs are not perfect utilitarians

I don't think this gets us very far. You're making a utilitarian argument (or certainly an argument consistent with utilitarianism) in favour of not trying to be a perfect utilitarian. Paradoxically, this is what a perfect utilitarian would do given the information that they have about their own limits - they're human, as you put it. Therefore, as someone who believes that utilitarianism is likely to be objectively true, I already know not to be a perfectionist.

Ultimately, Singer put it best: do the most good that you can do.

A Different Take on President Trump

The main problem with this post, in my view, is that in places it's still trying to re-run the election debate. The relevant question is no longer which of Trump or Clinton poses the bigger risk or would cause more net suffering, but how bad Trump is on his own and what we can do to reduce the risks that arise from his Presidency.

I agree that Trump's views on Russia reduce global catastrophic risk (although his recent appointments seem to be fairly hawkish towards Russia). However, he'll likely increase tensions in Asia, and his vie... (read more)

[0] DavidNash (5y): Is it that the far right is on the rise, or that the views they held have been dropped by the centre right, so that they now have their own parties which seem larger than they used to be, while the positions they hold don't have as much public support as they did in the past?
What does Trump mean for EA?

Just a few thoughts.

Firstly, Trump's agricultural advisors seem to be very hostile to animal welfare. This may mean that we need more people working on farmed animal welfare, not fewer.

In terms of going into politics, the prospect of having a group of EAs, and perhaps even an EA-associated organization, doing regular, everyday politics may turn some people off from the movement (depending on your view on whether EA is net-positive or net-negative overall, this may be bad or good).

While Sentience Politics, the Open Philanthropy Project and some others I may h... (read more)

The need for convergence on an ethical theory

Yeah, I'd say Parfit is probably the leading figure when it comes to trying to find convergence. If I understand his work correctly, he initially tried to find convergence when it came to normative ethical theories, and opted for a more zero-sum approach when it came to meta-ethics, but in the upcoming Volume Three I think he's trying to find convergence when it comes to meta-ethics too.

In terms of normative theories, I've heard that he's trying to resolve the differences between his Triple Theory (which is essentially Rule Utilitarianism) and the other th... (read more)

Is not giving to X-risk or far future orgs for reasons of risk aversion selfish?

It's also possible that people don't even want to consider the notion that preventing human extinction is bad, or they may conflate it with negative utilitarianism when it could also be a consequence of classical utilitarianism.

For the record, I've thought about writing something about it, but I basically came to the same conclusions that you did in your blog post (I also subscribe to total, hedonistic utilitarianism and its implications i.e. anti-speciesism, concern for wild-animals etc).

If everyone has similar perspectives, it could be a sign that we're on the right track, but it could be that we're missing some important considerations as you say, which is why I also think more discussion of this would be useful.

EA != minimize suffering

I disagree that biting the bullet is "almost always a mistake". In my view, it often occurs after people have reflected on their moral intuitions more closely than they otherwise would have. Our moral intuitions can be flawed. Cognitive biases can get in the way of thinking clearly about an issue.

Scientists have shown, for instance, that for many people, their intuitive rejection of entering the Experience Machine is due to the status quo bias. If people's current lives were being lived inside an Experience Machine, 50% of people would want to st... (read more)

[0] kokotajlod (5y): I completely agree with you about all the flaws and biases in our moral intuitions. And I agree that when people bite the bullet, they've usually thought about the situation more carefully than people who just go with their intuition. I'm not saying people should just go with their intuition. I'm saying that we don't have to choose between going with our initial intuitions and biting the bullet. We can keep looking for a better, more nuanced theory, which is free from bias and yet which also doesn't lead us to make dangerous simplifications and generalizations. The main thing that holds us back from this is an irrational bias in favor of simple, elegant theories. It works in physics, but we have reason to believe it won't work in ethics. (Caveat: for people who are hardcore moral realists, not just naturalists but the kind of people who think that there are extra, ontologically special moral facts, this bias is not irrational.)
On Priors

I'm very interested in this sort of stuff, though a bit of the maths is beyond me at the moment!

Four free CFAR programs on applied rationality and AI safety

I have a probably silly question about the EuroSPARC program: what if you're in the no man's land between high school and university, i.e. you've just left high school before the program starts?

I know of a couple of mathematically talented people who might be interested (and who would still be in high school), so I'll certainly try and contact them!

[3] AnnaSalamon (6y): Folks who haven't started college yet and who are no more than 19 years old are eligible for EuroSPARC; so, yes, your person (you?) should apply :)

This essay by Brian Tomasik addresses this question further, looking at the overall impact of human activities on wild-animal suffering, and including the effect of factory farming in the analysis. Whilst human impact on the environment may lead to a net reduction in wild-animal suffering (if you think that the lives of wild animals are significantly net-negative), the people whose lives are saved by the Against Malaria Foundation have little impact on the environment, and so do little to reduce wild-animal suffering.

[2] Brian_Tomasik (6y): Thanks for the link! It's plausible that those saved from malaria have lower-than-average environmental impact, but their impact is not trivial. This section [] mentions some ways in which poverty might actually increase a person's environmental impact. This section [] discusses AMF as a potential way to reduce insect suffering. I added a paragraph [] specifically about Malawi because Buck mentioned that country. I'm interested in finding someone to research the net impact of AMF on insect suffering more thoroughly. :)

Thanks for the post. I'm somewhat less confident in the meat-eater problem being a problem as a result of it, maybe for different reasons though. I still think that it is overall a problem, however. I'll just put my initial thoughts below.

"It's also plausible that interventions that raise incomes, like deworming, have a lower impact on meat consumption because they don't raise the overall number of humans that would be eating meat over their entire lifetime."

The effect of raising income itself will still tend to increase meat consumption, though. There w... (read more)

[1] scottweathers (6y): Thanks, Vidur! I didn't make it totally clear, but I don't think that individuals should split their donations. The main argument I'm trying to make is that the distribution of our donations across the EA movement is heavily human-centered, and that's a mistake based on expected value. I didn't want to dive too deep into this, but that's the claim I was trying to make. Broadly speaking, I'd like to see a much higher proportion of our dollars go to animal organizations. I could see this being fixed by a decent-sized group of people moving their donations over, or by a major organization like OPP fixing it.
Effective Altruism and ethical science

I agree - it would be bizarre to selectively criticise EA on this basis when our entire healthcare system is predicated on ethical assumptions.

Similarly, we could ask "why satisfy my own preferences?", but seeing as we just do, we have to take it as a given. I think that the argument outlined in this post takes a similar position: we just do value certain things, and EA is simply the logical extension of our valuing these things.

[0] AlexGab (6y): You don't really have a choice but to satisfy your own preferences. Suppose you decide to stop satisfying your preferences. Well, you've just satisfied your preference to stop satisfying your preferences. So the answer to the question is that it's logically impossible not to. Sometimes your preferences will include helping others, and sometimes they won't. In either case, you're satisfying your preference when you act on it.
[0] RobertFarq (6y): "Why satisfy my own preferences?" That's the linchpin. You don't have to. You can be utterly incapable of actually following through on what you've deemed is a logical behaviour, yet still comment on what is objectively right or wrong. (This goes back to your original comment too.) There are millions of obese people failing to immediately start and follow through on diets and exercise regimes today. This is failing to satisfy their preferences: they have an interest in not dying early, which being obese reliably correlates with. It ostensibly looks like they don't value health and longevity on the basis of their outward behaviour. This doesn't make the objectivity of health science any less real. If you do want to avoid premature death and if you do value bodily nourishment, then their approach is wrong. You can absolutely fail to satisfy your own preferences. Asking the further questions of "why satisfy my own preferences?" or "why act in a logically consistent fashion?" just drifts us into the realm of radical scepticism. This is an utterly unhelpful position to hold; you can go nowhere from there. "Why trust that my sense data are sometimes veridical?" You don't have to, but you'd be mad not to.
Effective Altruism and ethical science

I agree with Squark - it's only when we've already decided that, say, saving lives is important that we create health systems to do just that.

But, I agree with the point that EA is not doing anything different to society as a whole - particularly healthcare - in terms of its philosophical assumptions. It would be fairly inconsistent to selectively look for the philosophical assumptions that underlie EA and not healthcare systems.

More generally, I approach morality in a similar way: sentient beings aim to satisfy their own preferences. I can't suddenly dec... (read more)

[0] MichaelDello (6y): "it's only when we've already decided that, say, saving lives is important that we create health systems to do just that." But no one pays any credence to the few who argue that we shouldn't value saving lives; we don't even shrug and say 'that's their opinion, who am I to say that's wrong?', we just say that they are wrong. Why should ethics be any different?
Doing Good Better - Book review and comments

I'm in agreement with you on the meat consumption issue: morality doesn't begin and end with meat consumption, but it's better to donate lots to effective animal charities and be vegan, as opposed to offsetting one's meat consumption or having fancy vegan meals and being vegan. This seems to be the standard utilitarian stance. That's without taking into account the benefits of being vegan in terms of flow-through effects too, which have been discussed on this forum before. Personally, after having become essentially vegan, my family has had to reduce its m... (read more)

Population ethics: In favour of total utilitarianism over average

I approach utilitarianism more from a framework that, logically, I should be maximising the preference-satisfaction of others who exist or will exist, if I am doing the same for myself (which it is impossible not to do). So, in a sense, I don't believe that preference-satisfaction is good in itself, meaning that there's no obligation to make satisfied preferrers, just preferrers satisfied. I still assign some weight to the total view, though.

Population ethics: In favour of total utilitarianism over average

Interesting piece. I too reject the average view, but I'm currently in favour of prior-existence preference utilitarianism (the preferences of currently existing beings and beings who will exist in the future matter, but extinction, say, isn't bad because it prevents satisfied people from coming into existence) over the total view. I find it to be quite implausible that people can be harmed by not coming into existence, although I'm aware that this leads to an asymmetry, namely that we're not obligated to bring satisfied beings into existence but we're obl... (read more)

[2] casebash (6y): You aren't harmed by not being brought into existence, but there is an opportunity cost, that is, if you would have lived a life worth living, that utility is lost.
Quantifying the Impact of Economic Growth on Meat Consumption

Thank you for this - I found it to be very useful. While I recognise the PR issue, I think it's also very important to explore all areas when it comes to cause-prioritization.

Are GiveWell Top Charities Too Speculative?

Could it not plausibly be the case that supporting rigorous research explicitly into how best to reduce wild-animal suffering is robustly net positive? I say this because whenever I'm making cause-prioritization considerations, the concern that always dominates seems to be wild-animal suffering and the effect that intervention x (whether it's global poverty or domesticated animal welfare) will have on it.

General promotion of anti-speciesism, with equal emphasis put on wild-animals, would also seem to be robustly net positive, although this general promotio... (read more)

[1] MichaelDickens (6y): Suppose we invest more into researching wild animal suffering. We might become somewhat confident that an intervention is valuable and then implement it, but this intervention turns out to be extremely harmful. WAS is sufficiently muddy that interventions might often have the opposite of the desired effect. Or perhaps research leads us to conclude that we need to halt space exploration to prevent people from spreading WAS throughout the galaxy, but in fact it would be beneficial to have more wild animals, or we would terraform new planets in a way that doesn't cause WAS. Or, more likely, the research will just accomplish nothing.
EA's Image Problem

I think this is an excellent post. The point about unnecessary terminology from philosophy and economics is certainly one I've thought about before, and I like the suggestion about following Orwell's rules.

On the use of the term rational, I think it can be used in different ways. If we're celebrating Effective Altruism as a movement which proceeds from the assumption that reason and evidence should be used to put our moral beliefs into action, then I think the use of the term is fine, and indeed is one of the movement's strong points which will attract peo... (read more)

Political Debiasing and the Political Bias Test

Interesting test. I scored quite low in terms of political bias, but there's certainly a temptation to correct or over-correct for your biases when you're finding it very hard to choose between the options.

EA introduction course and YouTube playlists

A discussion of moral philosophy may be important not only because morality is integral to EA in general, but because it illustrates how the movement is suitable for people with wildly different views on morality, from utilitarians/consequentialists to deontologists to those who take a religious view.

I'd say that this video of Peter Singer is quite a good, short overview of cause prioritization.

Should I be vegan?

Very detailed!

I'm currently in between lacto-ovo vegetarianism and veganism in that I'm a lacto-vegetarian. This is only because I don't currently have a regular income (I'm still in high school), and attempting to replace dairy in particular has been quite an inconvenience.

So, my experience is that it is a lot less inconvenient to give up eggs than to give up dairy products, so perhaps you could try lacto-vegetarianism, but seeing as you are willing to go "95% vegan" and potentially "100% vegan", they're probably better in consequentialist terms overall.

[2] Jess_Whittlestone (6y): Yeah, I think lacto-vegetarianism is probably 95% of the way in terms of impact on animal suffering anyway (or even more). As I said above, for me the main reason for cutting out dairy too is that I think if I eat dairy I might be more likely to slip into eating eggs too down the line. But it's possible I could just protect against that by setting more solid rules in place, etc.
Should altruism be selfless?

I've seen criticisms of effective altruism claiming that effective altruists donate a large proportion of their income simply to improve their image and make themselves look better. On that basis, it could be argued that EA should have a closer relationship to Maximum Selflessness, but even then, people could still accuse EAs of "being selfless" in order to improve their image.

On the other hand, if EA were centred around the concept of Maximum Selflessness, it could be perceived as too demanding. But, if the self... (read more)

Saving the World, and Healing the Sick

Thank you for giving a realistic account of what it's like to be a doctor.

I'm considering studying medicine, so this was very helpful!