bezurli

Comments

Open Thread: Winter 2021

Thanks for the comment. I really hadn't considered colonizing the stars and bringing animals along.

Open Thread: Winter 2021

Hey, everyone. I don't post here often and I'm not particularly knowledgeable about strong longtermism, but I've been thinking about it lately and wanted to share a thought I haven't seen addressed, and ask whether it's reasonable. I'm not sure this is the right place, but here goes.

It seems to me that strong longtermism is extremely biased towards human beings.

In most catastrophic risks I can imagine (climate change, AI misalignment, and maybe even nuclear war* or pandemics**), it seems unlikely that Earth would become uninhabitable for a long period or that all life on Earth would be disrupted.

Some of these events (e.g. climate change) could have significant short- to medium-term effects on all life on Earth, but in the long run (after several million years?), I'd argue the impact on non-human animals would likely be negligible, since evolution would eventually find its way. So if this is right, and you consider the very long term and value all lives (humans and other animals) equally, wouldn't strong longtermism imply not doing anything?

Although I'm definitely somewhat biased towards human beings and think existential risk is a very important cause, I wonder if this critique makes sense.

 

*Regarding nuclear war, I guess it would depend on the duration and intensity of the radioactive fallout, which is not a subject I'm familiar with.

**From what I've learned in the last year and a half, it wouldn't be easy for viruses (not sure about bacteria) to infect lots of different species (covid-19 doesn't seem to be a problem for other species).

Growth and the case against randomista development

But should we make people want pro-growth policies? I'm rather sceptical that influencing domestic politics in this way has a positive expected outcome. In the end, founding a think tank that lobbies in favor of development policies amounts to believing we know better than developing-country voters themselves what is best for them (assuming we're talking about functional democracies).

Although that line of argument may be attractive for a few reasons already mentioned on the forum (because people don't trust institutions, because they lack basic education, because their education is leftist-biased, etc.), I'd argue that's a very strong and probably wrong assumption.

Given that growth economics is a controversial subject, for the sake of argument let's assume that, after thorough research, we could be 80% sure that Party X would be better for GDP growth than Party Y. Are we really sure, with only 80% confidence, that voters don't know what's best for them?

Even if that were true, I'm not sure a pro-growth think tank would be the best course of action. Maybe voters were "wrong" because of malfunctioning elections or low voter turnout. In that case, I think it would be better to advocate for better-functioning elections and higher voter turnout.

In my opinion, if we disagree with voters about what's best for them, it's far more likely that we're the ones who are wrong. In a sense, that's also the argument behind providing cash transfers: should we oblige people to spend money on what we think is right for them, or simply give them the cash and trust they'll know its best use?

This may be interpreted as a general critique of politicisation, but I don't think it applies to some of the other topics the EA community has been involved in (animals can't vote, and I'd argue this critique doesn't apply to trade liberalization either, but this isn't the place for that discussion).