I'm somewhat less optimistic; even if most would say that they endorse this view, I think many "dedicated EAs" are in practice still biased against nonhumans, if only subconsciously. I think we should expect speciesist biases to be pervasive, and they won't go away entirely just by endorsing an abstract philosophical argument. (And I'm not sure if "most" endorse that argument to begin with.)
Fair point - the "we" was something like "people in general".
This makes IRV a really bad choice. IRV results in a two-party system just like plurality voting does.
I agree that having a multi-party system might be most important, but I don't think IRV necessarily leads to a two-party system. For instance, French presidential elections feature far more than two parties (though they're using a two-round system rather than IRV).
Everything is subject to tactical voting (except maybe SODA? but I don't understand that argument). So I don't see this as a point against approval voting in particular.
I think that approval voting has significantly more serious tactical voting problems than IRV. Sure, they all violate some criteria, but the question is how serious the resulting issues are in practice. IRV seems to be fine based on e.g. Australia's experience. (Of course, we don't really know how good or bad approval voting would be, because it is rarely used in competitive elections.)
Great post - thanks a lot for writing this up!
It's quite remarkable how we hold ideas to different standards in different contexts. Imagine, for instance, a politician who openly endorses classical utilitarianism (CU). Her opponents would immediately attack the worst implications: "So you would torture a child in order to create ten new brains that experience extremely intense orgasms?" The politician, being honest, says yes, and that's the end of her career.
By contrast, EA discourse and philosophical discourse are strikingly lenient when it comes to counterintuitive implications of such theories. (I'm not saying anything about which standards are better, and of course this does not only apply to CU.)
The key thing is that the way I’m setting priors is as a function from populations to credences: for any property F, your prior should be such that if there are n people in a population, the probability that you are in the m most F people in that population is m/n.
The fact that I consider a certain property F should itself update me, though. The very fact that I am thinking about F suggests that F is something I am particularly interested in, or that F is salient to me, which presumably makes it more likely that I am an outlier on F.
Also, this principle can have pretty strange implications depending on how you apply it. For instance, if I look at the population of all beings on Earth, it is extremely surprising (10^-12 or so) that I am a human rather than an insect.
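The prior principle quoted above can be illustrated with a toy calculation (a hypothetical sketch; the population figures below are made-up round numbers for illustration, not estimates from the post):

```python
# Uniform prior over ranks: in a population of n individuals, the prior
# probability of being among the m most-F individuals is m / n.
def prior_top_m(m: int, n: int) -> float:
    """P(you are among the m most-F people in a population of n)."""
    assert 0 <= m <= n
    return m / n

# E.g. the prior probability of being among the 100 most influential
# people in a population of 1 billion (illustrative numbers):
p = prior_top_m(100, 10**9)
print(p)  # 1e-07
```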
I’m at a period of unusually high economic growth and technological progress
I think it's not clear whether higher economic growth or technological progress implies more influence. This claim seems plausible, but you could also argue that it might be easier to have an influence in a stable society (with little economic or technological change), e.g. simply because of higher predictability.
So, as I say in the original post and the comments, I update (dramatically) on my estimate of my influentialness, on the basis of these considerations. But by how much? Is it a big enough update to conclude that I should be spending my philanthropy this year rather than next, or this century rather than next century? I say: no.
I'm very sympathetic to patient philanthropy, but this seems to overstate the amount of evidence required. Taking into account that each era has donors (and other resources) of its own, and that there are diminishing returns to spending, you don't need extreme beliefs about your elevated influentialness to think that spending now is better. Moreover, the arguments you gave are not very specific to 2020; presumably they will still hold in 2100, so it stands to reason that we should invest at least over such timeframes (until we expect the period of elevated influentialness to end).
One reason for thinking that the update, on the basis of earliness, is not enough, is related to the inductive argument: that it would suggest that hunter-gatherers, or Medieval agriculturalists, could do even more direct good than we can. But that seems wrong. Imagine you can give an altruistic person at one of these times a bag of oats, or sell that bag today at market prices. Where would you do more good?
A bag of oats presumably represents much more relative wealth in those earlier times than it does now. The current price of oats is about GBP 120 per ton, so a 50 kg bag is worth just GBP 6.
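For reference, the bag's present-day market value works out as follows (simple arithmetic on the figures above; the 50 kg bag size is an assumption):

```python
price_per_ton_gbp = 120   # current oats price, GBP per metric ton (from the comment)
bag_kg = 50               # assumed bag size in kg

value_gbp = price_per_ton_gbp * bag_kg / 1000  # 1 metric ton = 1000 kg
print(value_gbp)  # 6.0
```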
People in earlier times also have less 'competition'. Presumably the medieval person could have been the first to write up arguments for antispeciesism or animal welfare; or perhaps they could have a significant impact on establishing science, increasing rationality, improving governance, etc.
(All things considered, I think it's not clear if earlier times are more or less influential.)
I was just talking about 30 years because those are the farthest-out US bonds. I agree that the horizon of patient philanthropists can be much longer.
Yeah, but even 30 year interest rates are low (1-2% at the moment). There is an Austrian 100 year bond paying 0.88%. I think that is significant evidence that something about the "patient vs impatient actors" story does not add up.
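To put that yield in perspective, here is a rough compounding calculation (illustrative only; it ignores inflation, taxes, and default risk):

```python
rate = 0.0088   # Austrian 100-year bond yield (from the comment)
years = 100

# Nominal growth factor from holding the bond to maturity:
growth = (1 + rate) ** years
print(round(growth, 2))  # ~2.4, i.e. a patient investor only ~2.4x-es
                         # their nominal capital over a full century
```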
It is fair to say that some suffering-focused views have highly counterintuitive implications, such as the one you mention. The misconception is just that this holds for all suffering-focused views. For instance, there are plenty of possible suffering-focused views that do not imply that happy humans would be better off committing suicide. In addition to preference-based views, one could value happiness but endorse the procreative asymmetry (so that lives above a certain threshold of welfare are considered OK even if there is some severe suffering), or one could be prioritarian or egalitarian in interpersonal contexts, which also avoids problematic conclusions about such tradeoffs. (Of course, those views may be considered unattractive for other reasons.)
I think views along these lines are actually fairly widespread among philosophers. It just so happens that suffering-focused EAs have often promoted other variants of SFE that do arguably have implications for intrapersonal tradeoffs that you consider counterintuitive (and I mostly agree that those implications are problematic, at least when taken to extremes), thus giving the impression that all or most suffering-focused views have said implications.
Re: 1., there would be many more (thoughtful) people who share our concern about reducing suffering and s-risks (not necessarily with strongly suffering-focused values, but at least giving considerable weight to it). That would result in an ongoing research project on s-risks that goes beyond a few EAs (e.g., one that is also established in academia or other social movements).

Re: 2., a possible scenario is that suffering-focused ideas just never gain much traction, and consequently efforts to reduce s-risks fizzle out. However, I think there is significant evidence that at least an extreme version of this is not happening.

Re: 3., I think the levels of engagement and feedback we have received so far are encouraging. However, we do not currently have any procedures in place to measure impact, which is (as you say) incredibly hard for what we do. But of course, we are constantly thinking about what kind of work is most impactful!