Daniel Kirmani

208 · Joined Jun 2022

Comments (22)

Re: "fear that falling birth rates [...] collapse of civilization."

No, this is not one of the things that scares me. Also, birth rates decline predictably once a nation is developed, so if this were a significant concern, it would end up hitting China and India just as hard as it is currently hitting the US and Europe.

Re: "worry that the overlap [...] could ultimately disappear."

No. Adoption of Progressive ideology is a memetic phenomenon, with mild to no genetic influence.

Do you think focusing on birth rates in "Western Civilization" is a good way of creating 'intergenerationally, durable cultures that will lead to our species being a diverse, thriving, innovative interplanetary empire one day that isn't at risk from, you know, a single asteroid strike or a single huge disease?', and do you think it's something that longtermists should focus on?

I guess this intervention would be better than nothing, strictly speaking. The mechanism of action here is "people have kids" -> {"people feel like they have a stake in the future", "people want to protect their descendants"} -> "people become more aligned with longtermism". I don't think this is a particularly effective intervention.

Do you consider yourself a longtermist?

Yes.

Do you consider yourself an EA?

Eh, maybe.

Hi! I strongly endorse pronatalism, and I will readily admit to wanting to reduce x-risk in order to keep my family safe.

What is "Effective Altruism" effective with respect to?

I'd be curious to know why people downvoted this.

Strengthening the association between "rationalist" and "furry" decreases the probability that AI research organizations will adopt AI safety proposals proposed by "rationalists".

The EA consensus is roughly that being blunt about AI risks in the broader public would cause social havoc.

Social havoc isn't bad by default. It's possible that a publicity campaign would result in regulations that choke the life out of AI capabilities progress, just like the FDA choked the life out of biomedical innovation.

As Wei Dai mentioned, tribes in the EEA weren't particularly fond of other tribes. Why should people's ingroup-compassion scale up, but their outgroup-contempt shouldn't? Your argument supports both conclusions.

“Shut Up and Divide” boils down to “actually, you maybe shouldn’t care about individual strangers, because that’s more logically consistent (unless you multiply, in which case it’s equally consistent)”. But caring is a higher and more human virtue than being consistent, especially since there are two options here: be consistent and care about individual strangers, or just be consistent.

This reasoning seems confused. Caring more about certain individuals than others is a totally valid utility function that you can have. You can't especially care about individual people while simultaneously caring about everyone equally. You just can't. "Logically consistent" means that you don't claim to do both of these mutually exclusive things at once.

I think you should be in favor of caring more (shut up and multiply) over caring less (shut up and divide) because your intuitive sense of caring evolved when your sphere of influence was small.

Your argument proves too much:

  • My sex drive evolved before condoms existed. I should extend it to my new circumstances by reproducing as much as possible.
  • My subconscious bias against those who don't look like me evolved before there was a globalized economy with opportunities for positive-sum trade. Therefore, I should generalize to my new circumstances by becoming a neonazi.
  • My love of sweet foods evolved before mechanized agriculture. Therefore, I should extend my default behavior to my modern circumstances by drinking as much high-fructose corn syrup as I can.

I don't like this post. It feels like a step down a purity spiral. An Effective Altruist is anyone who wants to increase net utility, not one who has no other goals.

Curing aging also fixes the demographic collapse.
