Toby_Ord

1496 karma · Joined Aug 2014

Comments (80)

Toby_Ord · 2mo

Thanks Vasco,

Interesting analysis. Here are a few points in response:

  • It is best to take my piece as an input into a calculation of whether voting is morally justified on account of changing the outcome — it is an input in the form of helping work out the probability the outcome gets changed. More analysis would be needed to make the overall moral case — especially in the many voting systems that have multiple levels, where it may be much more important to vote in marginal seats and much less in safe seats, so taking the average may be inappropriate.
  • You make a good point that the value depends on who it is and their counterfactuals. Most people looking at this are trying to work out the average value to defend against claims that voting is not typically morally justified, rather than trying to work out the case for particular groups such as EAs — though that is a relevant group for this forum.
  • In such empirical arguments, I'd be cautious about claims that $1 to the LTFF (or similar) is literally worth the same as $30,000 distributed across US citizens. Once the ratios get this extreme, you do need to worry more about issues like 0.1% of the $30,000 flowing through to extremely high value things and then outweighing the small targeted donation (see the sketch after this list).
  • While you were trying to be very conservative by allocating a very large financial benefit to the better of the two parties, it is also relevant that who is in power at the time of the development of transformative AI capabilities could be directly relevant to existential risk, so even your generous accounting may be too small. (This factor will only apply in a small number of elections, but US presidential elections are likely one of them.)
  • I have a general presumption in favour of EAs acting as most people think morally responsible people should. In part because there is a good chance that the common-sense approach is tracking something important that our calculations may have lost sight of, in part because I don't think we should be trying to optimise all aspects of our behaviour, and in part because it is a legible sign of moral earnestness (i.e. it is reasonable for people to trust you less if you don't do the things those people see as basic moral responsibilities).
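As a concrete illustration of the leakage worry in the third bullet, here is a minimal sketch. All of the figures are assumptions chosen for illustration, not claims from the discussion itself:

```python
# Hypothetical numbers, for illustration only.
# Claim under scrutiny: $1 to a targeted fund is worth the same as $30,000
# spread across US citizens, i.e. targeted dollars are ~30,000x more valuable.
diffuse_total = 30_000.0   # dollars spread across citizens
targeted_total = 1.0       # dollars given to the targeted fund
leakage = 0.001            # assume 0.1% of the diffuse money ends up funding
                           # things roughly as valuable as the targeted donation

# Value of the diffuse gift in targeted-dollar equivalents,
# counting only the leaked fraction:
leaked_value = diffuse_total * leakage   # = 30 targeted-dollar equivalents
print(leaked_value > targeted_total)     # True: the leakage alone outweighs the $1
```

At a 30,000:1 ratio, even a 0.1% leak through to high-value uses swamps the targeted dollar, which is why such extreme equivalence claims need extra care.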

Something like that. Geoffrey Brennan and Lomasky indeed present the binomial formula and suggest using it in their earlier work, but I haven't found a case of them applying it in any particular way (which could get results like Jason Brennan's or results like Banzhaf's), so I didn't want to pin this on them. So I cited Jason Brennan, who uses it to produce these crazily low probabilities in his book. It is possible that Jason Brennan didn't do the calculations himself and that someone else did (either Geoffrey Brennan and Lomasky or others), but I don't know and haven't found an earlier source for the crazy numbers.
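To make the contrast concrete, here is a rough sketch of the binomial model in question, assuming a round electorate of 100 million (the specific numbers are mine, for illustration only):

```python
import math

def log10_tie_prob(n, p):
    """Log10-probability of an exact tie among n = 2m voters under a binomial
    model where each voter independently votes for candidate A with prob p."""
    m = n // 2
    log_comb = math.lgamma(n + 1) - 2 * math.lgamma(m + 1)  # ln C(n, m)
    return (log_comb + m * math.log(p) + m * math.log(1 - p)) / math.log(10)

n = 100_000_000  # assumed round electorate size, roughly presidential scale
print(log10_tie_prob(n, 0.5))    # ~ -4.1   => a tie about 1 in 12,500
print(log10_tie_prob(n, 0.505))  # ~ -2177  => astronomically small
```

The jump from roughly 1 in 10,000 to around 1 in 10^2177 when p moves from 0.500 to 0.505 is the mechanism behind the crazily low figures: the binomial model treats the expected vote share as known with certainty, so any deviation from an exact 50/50 split gets compounded across millions of voters.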

Toby_Ord · 2mo

A caution re interpreting my argument in two-level elections:

One might read the above piece as an argument that voting is generally worthwhile. But note that the two-level structure of many elections (at least in countries without PR) does dampen the value of voting for many voters. e.g. if you are in the 10%+ of the US population who live in California, then not only are you very unlikely to cast a decisive vote to win the state's electoral college votes (since the probability that the underdog wins is very low), but it is very likely that in the situation where California comes down to a single vote, the rest of the country has skewed overwhelmingly to the Republicans, making the Californian electoral college votes irrelevant. Similar situations hold for safe seats in the lower house in the US, UK or Australia. 

It might still be that in some sense two-level elections function on average like a single level election, but even if so, that could be because there are some people in marginal seats/states with disproportionate chances of changing the outcome, while many or most people have very little.

So while my adjusted formula above does apply in two-level elections, the intuitive interpretation that it supports a moral case for voting for the superior candidate may not apply.
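To make the two-level structure explicit, here is a minimal sketch of the decomposition. Every probability in it is an assumption chosen for illustration, not an estimate:

```python
# P(your vote decides the presidency) =
#   P(your state is exactly tied) * P(state's electoral votes are pivotal | state tied)
# All figures below are assumed, for illustration only.

p_tied_safe = 1e-9           # a safe state like California (assumed)
p_pivotal_given_tied_safe = 1e-4  # if CA is tied, the rest of the country has
                                  # likely skewed Republican, so CA's electoral
                                  # votes rarely decide anything (assumed)

p_tied_swing = 1e-7          # a close swing state (assumed)
p_pivotal_given_tied_swing = 0.3  # a tied swing state is often decisive (assumed)

print(p_tied_safe * p_pivotal_given_tied_safe)    # ~1e-13: near-negligible
print(p_tied_swing * p_pivotal_given_tied_swing)  # ~3e-8: many orders larger
```

The point is that both factors are depressed at once for safe-seat voters, so an average taken over all voters can conceal enormous variation between them.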

I've just seen your comment further down:

What we’re arguing for is a criterion: governments should fund all those catastrophe-preventing interventions that clear the bar set by cost-benefit analysis and altruistic willingness to pay. One justification for funding these interventions is the justification provided by CBA itself, but it need not be the only one. If longtermist justifications help us get to the place where all the catastrophe-preventing interventions that clear the CBA-plus-AWTP bar are funded, then there’s a case for employing those justifications too.

which answers my final paragraph in the parent comment, and suggests that we are not too far apart.

we chose ‘unacceptable’ because we also think there would be something normatively problematic about it.

I'm not so sure about that. I agree with you that it would be normatively problematic in the paradigm case of a policy that imposed extreme costs on current society for a very slight reduction in total existential risk — let's say, reducing incomes by 50% in order to lower risk by 1 part in 1 million.
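For scale, a rough back-of-the-envelope version of that paradigm case (the income figure is an assumption of mine, of the order of US GDP):

```python
# Assumed figures, for illustration only.
us_income = 25e12        # ~$25 trillion/year, roughly US GDP (assumed)
cost = 0.5 * us_income   # halving incomes for one year: ~$12.5 trillion
risk_reduction = 1e-6    # 1 part in 1 million of total existential risk

breakeven_value = cost / risk_reduction
print(f"{breakeven_value:.2e}")  # ~1.25e+19: the dollar value one would need to
                                 # place on avoiding existential catastrophe for
                                 # this policy to pass a standard CBA
```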

But I don't know that it is true in general.

First, consider a policy that was inefficient but small — e.g. one that cost $10 million to the US govt but reduced the number of statistical lives lost in the US by only 0.1. I don't think I'd say that this was democratically unacceptable. Policies like this are enacted all the time in safety contexts and are often inefficient and ill-thought-out, and while I'm not generally in favour of them, I don't find them to be undemocratic. I suppose one could argue that all US policy that doesn't pass a CBA is undemocratic (or democratically unacceptable), but that seems a stretch to me. So I wonder whether our intuitions on the extreme example count against all policies that are inefficient in traditional CBA terms, or just against those that impose severe costs.
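For concreteness, the arithmetic behind calling that example inefficient (the value-of-a-statistical-life figure is an assumption of mine, roughly in line with common US agency values):

```python
# Assumed figures, for illustration only.
cost = 10e6          # $10 million policy cost
lives_saved = 0.1    # statistical lives saved
vsl = 10e6           # assumed value of a statistical life, ~$10M

cost_per_life = cost / lives_saved   # $100M per statistical life
benefit = lives_saved * vsl          # $1M of monetised benefit
print(cost_per_life, benefit / cost) # fails a standard CBA by a factor of ~10
```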

Thanks Elliott,

I guess this shows that the case won't get through with the conservative rounding off that you applied here, so future developments of this CBA would want to go straight for the more precise approximations in order to secure a higher evaluation.

Re the possibility of international agreements, I agree that they can make it easier to meet various CBA thresholds, but I also note that they are notoriously hard to achieve, even when in the interests of both parties. That doesn't mean that we shouldn't try, but if the CBA case relies on them then the claim that one doesn't need to go beyond it (or beyond CBA-plus-AWTP) becomes weaker.

That said, I think some of our residual disagreement may be down to me still not quite understanding what your paper is claiming. One of my concerns is that CBA-plus-AWTP is a weak style of argument — especially with elected politicians. That is, arguing for new policies (or treaties) on grounds of CBA-plus-AWTP has some sway over fairly routine choices made by civil servants who need to apply government cost-effectiveness tests, but little sway with voters or politicians. Indeed, many people who would benefit from such cost-effectiveness tests are either bored by — or actively repelled by — the methodology. But if you are instead arguing that we should only campaign for policies that would pass such a test, then I'm more sympathetic. In that case, we could still make the case for them in terms that will resonate more broadly.

Toby_Ord · 2mo

Ah, that's what I meant by the value your candidate would bring. There isn't any kind of neutral outcome to compare them against, so I thought it clear that it meant in comparison to the other candidate. Evidently not so clear!

Toby_Ord · 2mo

I should note that I don’t see a stronger focus on character as the only thing we should be doing to improve effective altruism! Indeed, I don’t even think it is the most important improvement. There have been many other suggestions for improving institutions, governance, funding, and culture in EA that I’m really excited about. I focused on character (and decision procedures) in my talk because it was a topic I hadn’t seen much in online discussions about what to improve, because I have some distinctive expertise to impart, and because it is something that everyone in EA can work on.

Toby_Ord · 3mo

I think you have put your finger on a key aspect with the coldness requirement. 

When ice cream is melted or coke is lukewarm, they both taste far too sweet. I've long had a hypothesis that we evolved some kind of rejection of foods that taste too sweet (at least in large quantities), and that by cooling them down they taste less sweet (overcoming that rejection mechanism) while we still get the increased reward when the sugar enters our bloodstream. I feel that carbonation is similar (flat coke tastes too sweet), so the cold and carbonation could be hacks we've discovered to get around the 'tastes too sweet' defence mechanism while still enjoying extremely high blood-sugar-based rewards. (Other forms of bitterness or saltiness added to sweet foods could be similar.)

This is more speculative and still requires a few sentences to explain, though, so a different example may be best.
