Vidur_Kapur

Vidur_Kapur's Comments

What are the best arguments for an exclusively hedonistic view of value?

(Crossposted from FB)

Some initial thoughts: hedonistic utilitarians ultimately wish to maximise pleasure; in the ideal case, suffering would be eliminated along the way. In the real world, things are a lot fuzzier, and we do have to consider pleasure/suffering tradeoffs. Because it's difficult to measure pleasure and suffering directly, preferences are used as a proxy.

But I aver that we're not very good at considering these tradeoffs. Most are framed as thought-experiments in which we are asked to imagine two 'real-world' situations. Some people may be willing to take five minutes of a dust-speck in the eye for ten minutes of eating delicious food, whereas others may only be willing to take 30 seconds of the dust-speck. It's likely that, when asked to do this, we aren't weighing the pleasure and suffering on their own, but taking other things into account too (perhaps our memories of similar situations in the past). The variance may also arise because a speck of dust in the eye *will* cause some people to suffer more than others.

Ideally, we'd be able to just consider the pleasure and the suffering on their own. That's very difficult to do, though. I think there are right answers to these tradeoff questions, but that our brains aren't able to answer the questions precisely enough. But in extreme cases, the hedonistic utilitarian could argue that anyone who would rather not have a blissful life at all, if it comes at the cost of being pricked by a pin, is simply wrong. It is the pleasure and the suffering that matter, no matter what people *say* they prefer. (See the 'Future Tuesday Indifference' argument promulgated by Parfit and Singer).

Sidgwick's definition of pleasure is after all "a feeling which the sentient individual at the time of feeling it implicitly or explicitly apprehends to be desirable – desirable, that is, when considered merely as feeling." The feeling, as it were, cannot be unfelt, even if an individual makes certain claims about the desirability (or lack thereof) of the feeling later on.

On that note, have you read Derek Parfit's 'On What Matters' (particularly Parts 1 and 6, in Volumes One and Two respectively)? In my view, he makes some convincing arguments against preference-based theories. Singer and de Lazari-Radek, in 'The Point of View of the Universe', build on his arguments to mount a defence of hedonistic utilitarianism against other normative theories, including preference utilitarianism.

Moral realists who endorse hedonistic utilitarianism, such as Singer, posit that the very nature of what Sidgwick describes as pleasure gives us reason to increase it, and that nothing else in the universe gives us similar reasons.

The experience machine is another case in which hedonistic utilitarians would argue that people's preferences are plagued by bias. Joshua Greene and Peter Singer, for instance, have both argued that people's objections to entering the experience machine are the result of status quo bias.

See: https://www.tandfonline.com/doi/abs/10.1080/09515089.2012.757889?journalCode=cphp20 and https://en.wikipedia.org/wiki/Experience_machine#Counterarguments

Why I prioritize moral circle expansion over artificial intelligence alignment

Thank you for this piece. I enjoyed reading it and I'm glad that we're seeing more people being explicit about their cause-prioritization decisions and opening up discussion on this crucially important issue.

I know that it's a weak consideration, but before reading this I hadn't considered the argument that the scale of values spreading is larger than the scale of AI alignment (perhaps because, as you pointed out, the numbers involved in both are huge), so thanks for bringing that up.

I'm in agreement with Michael_S that hedonium and dolorium should be the most important considerations when we're estimating the value of the far future, and from my perspective the higher probability of hedonium likely does make the far future robustly positive, despite the valid points you bring up. This doesn't necessarily mean that we should focus on AIA over MCE (I don't), but it does make it more likely that we should.

Another useful contribution, though others may disagree, was the biases section: the biases that could potentially favour AIA did resonate with me, and they are useful to keep in mind.

The marketing gap and a plea for moral inclusivity

Thank you for the interesting post; you provide some strong arguments for moral inclusivity.

I'm less confident that the marketing gap, if it exists, is a problem, but there may be ways to sell the more 'weird' cause areas, as you suggest. However, even when they are mentioned, people may still get the impression that EA is mostly about poverty. The other causes would have to be explained in the same depth as poverty (looking at specific charities in these cause areas as well as cost-effectiveness estimates where they exist, for instance) for the impression to fade, it seems to me.

While I do agree that it's likely that a marketing gap is perceived by a good number of newcomers (based solely on my intuition), do we have any solid evidence that such a marketing gap is perceived by newcomers in particular?

Or is it mainly perceived by more 'experienced' EAs (many of whom may prioritise causes other than global poverty) who feel as if sufficient weight isn't being given to other causes, or who feel guilty for giving a misleading impression relative to their own impressions (which are formed from being around others who think like them)? If the latter, then the marketing gap may be less problematic, and will be less likely to blow up in our faces.

The marketing gap and a plea for moral inclusivity

And, as Michael says, even the perception that EA is misrepresenting itself could potentially be harmful.

Why I left EA

I agree with the characterization of EA here: it is, in my view, about doing the most good that you can do, and EA has generally defined "good" in terms of the well-being of sentient beings. It is cause-neutral.

People can disagree on whether potential beings (who would not exist if extinction occurred) have well-being (total vs. prior-existence), they can disagree on whether non-human animals have well-being, and can disagree on how much well-being a particular intervention will result in, but they don't arbitrarily discount the well-being of sentient beings in a speciesist manner or in a manner which discriminates against potential future beings. At least, that's the strong form of EA. This doesn't require one to be a moral realist, though it is very close to utilitarianism.

If I'm understanding this post correctly, the "weak form" of EA - donating more and donating more effectively to causes you already care about, or even just donating more effectively given the resources you're willing to commit - is not unique enough for Lila to stay. I suspect, though, that many EAs (particularly those who are only familiar with the global poverty aspect of EA) only endorse this weak form, and that the more vocal EAs are the ones who endorse the strong form.

EAs are not perfect utilitarians

I don't think this gets us very far. You're making a utilitarian argument (or certainly an argument consistent with utilitarianism) in favour of not trying to be a perfect utilitarian. Paradoxically, this is what a perfect utilitarian would do given the information that they have about their own limits - they're human, as you put it. As someone who believes that utilitarianism is likely to be objectively true, therefore, I already know not to be a perfectionist.

Ultimately, Singer put it best: do the most good that you can do.

A Different Take on President Trump

The main problem with this post, in my view, is that in places it's still trying to re-run the election debate. The relevant question is no longer who is the bigger risk or who will cause more net suffering out of Trump or Clinton, but how bad Trump is on his own and what we can do to reduce the risks that arise from his Presidency.

I agree that Trump's views on Russia reduce global catastrophic risk (although his recent appointments seem to be fairly hawkish towards Russia). However, he'll likely increase tensions in Asia, and his views on climate change seem to me to be a major risk.

In terms of values, opinion polls suggest that immigrants to Western nations have better attitudes than people in their countries of origin. Furthermore, when immigrants return to their native countries, they often take back the values and norms of their host countries. I'm not saying this to make a judgement on whether immigration on this scale is good or bad, just to make the point that our aim is to make the world a better place, not to decrease crime rates in Europe.

That said, far-right extremists are on the rise in both the United States and Europe (thanks in part to irrational overreactions and hyperbolic claims that 'law and order is breaking down', which is patently false, as others have said, and in part to a number of false beliefs about immigration and immigrants themselves, Muslim or not). I think that one way to stop them from taking power in elections and from attacking immigrants, refugees and others is to give them the sense that they have control over 'their' borders; in other words, tactically retreating on the issue of immigration may well be a good thing. Did we need to elect Trump, with all of the risks that come with his Presidency, in order to do that?

I don't know. But Trump has been elected now, and many of his stated policies are terrible. If individual EAs think that trying to change the policies of the Trump administration from the inside would be an effective thing to do (as Peter Singer has suggested), then I'd say that's plausibly true for a small number of EAs.

I think it's true, in general, that a small number of EAs going into party politics would be an effective thing to do, over and above the policy-change focus that already exists in the EA community and some of its organisations, but that this should be done on an individual basis: EA-affiliated groups and organisations should not get involved in party politics.

What does Trump mean for EA?

Just a few thoughts.

Firstly, Trump's agricultural advisors seem to be very hostile to animal welfare. This may mean that we need more people working on farmed animal welfare, not fewer.

In terms of going into politics, the prospect of having a group of EAs, and perhaps even an EA-associated organisation, doing regular, everyday politics may turn some people off from the movement (depending on whether you view EA as net positive or net negative overall, this may be bad or good).

While Sentience Politics, the Open Philanthropy Project and some others I may have missed do take part in political activities, they focus on specific policies, and I suspect that what some people are talking about would involve a systematic attempt to engage in party-politics.

I think that, even without Trump, the idea of having a very small number of individual EAs (maybe 1 in 1,000) going into politics and trying to influence administrations or even become politicians was a good one.

But a systematic attempt to engage in party politics would not be a good idea. This is partly because, even in the EA community, focusing on party politics or even on controversial policies seems to lead to less willingness to consider other points of view.

And partly because influencing administrations or becoming a politician on one's own is more likely to make a difference than regular party-political campaigning, even though it is harder to do.

Finally, I think that politics is very important: through it, you could potentially reduce existential risks as well as spread good values and ensure that humanity is on the right course in the future. There's therefore no necessary tension between reducing existential risks and values-spreading.

However, in order for any politicians or political advisors to be able to steer humanity in a positive direction, you need public and corporate support for it, which is why I believe that spreading anti-speciesism, working on farmed animal suffering, and so on, remains highly important too.

Overall, Trump's election has not influenced my beliefs significantly.

The need for convergence on an ethical theory

Yeah, I'd say Parfit is probably the leading figure when it comes to trying to find convergence. If I understand his work correctly, he initially tried to find convergence when it came to normative ethical theories, and opted for a more zero-sum approach when it came to meta-ethics, but in the upcoming Volume Three I think he's trying to find convergence when it comes to meta-ethics too.

In terms of normative theories, I've heard that he's trying to resolve the differences between his Triple Theory (which is essentially Rule Utilitarianism) and the other theory he finds most plausible, the Act Utilitarianism of Singer and de Lazari-Radek.

Anyone trying to work on convergence should probably follow the fruitful debate surrounding 'On What Matters'.

Is not giving to X-risk or far future orgs for reasons of risk aversion selfish?

It's also possible that people don't even want to consider the notion that preventing human extinction is bad, or they may conflate it with negative utilitarianism when it could also be a consequence of classical utilitarianism.

For the record, I've thought about writing something about it, but I basically came to the same conclusions that you did in your blog post (I also subscribe to total hedonistic utilitarianism and its implications, i.e. anti-speciesism, concern for wild animals, etc.).

If everyone has similar perspectives, it could be a sign that we're on the right track, but it could be that we're missing some important considerations as you say, which is why I also think more discussion of this would be useful.
