All of Daniel Kirmani's Comments + Replies

Re: "fear that falling birth rates [...] collapse of civilization."

No, this is not one of the things that scares me. Also, birth rates decline predictably once a nation is developed, so if this were a significant concern, it would end up hitting China and India just as hard as it is currently hitting the US and Europe.

Re: "worry that the overlap [...] could ultimately disappear."

No. Adoption of Progressive ideology is a memetic phenomenon, with mild to no genetic influence. (Update, 2023-04-03: I don't endorse this claim, actually. I also don't endo... (read more)

1
JasMaguire
1y
Your claim that political ideology is not heritable is false: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4038932/#:~:text=Almost%20forty%20years%20ago%2C%20evidence,be%20explained%20by%20genetic%20influences.
5
pseudonym
1y
Then it sounds like your idea of pronatalism and the Collinses' idea of pronatalism look quite different. If the article had been written about the set of views you've expressed, I probably wouldn't be sharing it.

Hi! I strongly endorse pronatalism, and I will readily admit to wanting to reduce x-risk in order to keep my family safe.

Great! I also want to reduce x-risk to keep my family safe. But do you also strongly endorse the claims listed in the article that are attributed to pronatalism, and do you consider yourself an EA / a longtermist?

i.e.
"fear that falling birth rates in certain developed countries like the United States and most of Europe will lead to the extinction of cultures, the breakdown of economies, and, ultimately, the collapse of civilization."

"worry that the overlap between the types of people deciding not to have children with the part of the population that values... (read more)

What is "Effective Altruism" effective with respect to?

I'd be curious to know why people downvoted this.

Strengthening the association between "rationalist" and "furry" decreases the probability that AI research organizations will adopt AI safety proposals proposed by "rationalists".

6
Emrik
1y
Strengthening the association may enable a larger slice of the rationalists to think and communicate clearly without being bogged down by professional constraints. I suspect professionalism is much more lethal than most people think, so that might be a crux. If we lighten the pressure towards professionalism, people have more slack and are less likely to end up optimising for proxies such as impressiveness, technicality, relevancy-to-other-literature, "comprehensiveness", "hard work", etc.

The EA consensus is roughly that being blunt about AI risks in the broader public would cause social havoc.

Social havoc isn't bad by default. It's possible that a publicity campaign would result in regulations that choke the life out of AI capabilities progress, just like the FDA choked the life out of biomedical innovation.

As Wei Dai mentioned, tribes in the EEA weren't particularly fond of other tribes. Why should people's ingroup-compassion scale up, but their outgroup-contempt shouldn't? Your argument supports both conclusions.

1
Ezra Newman
2y
This is a good point, I guess.

“Shut Up and Divide” boils down to “actually, you maybe shouldn’t care about individual strangers, because that’s more logically consistent (unless you multiply, in which case it’s equally consistent)”. But caring is a higher and more human virtue than being consistent, especially since there are two options here: be consistent and care about individual strangers, or just be consistent.

This reasoning seems confused. Caring more about certain individuals than others is a totally valid utility function that you can have. You can't especially care about i... (read more)

1
mlsbt
2y
When I say “be consistent and care about individual strangers”, I mean shut up and multiply. There’s no contradiction. It’s caring about individual strangers, taken to the extreme where you care about everyone equally. If you care about logical consistency, that works as well as shut up and divide.

I think you should be in favor of caring more (shut up and multiply) over caring less (shut up and divide) because your intuitive sense of caring evolved when your sphere of influence was small.

Your argument proves too much:

  • My sex drive evolved before condoms existed. I should extend it to my new circumstances by reproducing as much as possible.
  • My subconscious bias against those who don't look like me evolved before there was a globalized economy with opportunities for positive-sum trade. Therefore, I should generalize to my new circumstances by beco... (read more)
1
Ezra Newman
2y
I’m advocating for updating in the general direction of trusting your small-scale intuition when you notice a conflict between your large-scale intuition and your small-scale intuition. Specifically:
  • Have as much sex as you want (with a consenting adult, etc). Have as many children as you can reasonably care for. But even if you disagree with that, I don’t think this is a good counterexample. It’s not a conflict between small-scale beliefs and large-scale beliefs.
  • This is new information, not a small-large conflict.
  • Same as above.

I don't like this post. It feels like a step down a purity spiral. An Effective Altruist is anyone who wants to increase net utility, not one who has no other goals.

Curing aging also fixes the demographic collapse.

8
Malcolm Collins
2y
What is your estimate of the timeline for a person of average income to afford said cure? What year do you estimate it would be available? (I ask because, while I agree with you, even basic medical care is not available to most people in the world right now. I suppose it depends on the mechanism of action of the aging cure - a viral vector might be inexpensive to mass-produce.) Note: Adding your suggestion to the document.

TSMC, a Taiwanese firm, is currently the global semiconductor linchpin. What would be the implications of Chinese invasion for AGI timelines?

Edit: Kinda-answered here by Wei Dai, and in this very comment thread. My takeaways: Chinese invasion would push AI timelines into the future, but only a little. It would also disadvantage Chinese AI capabilities research relative to that of NATO.

3
Simon Zhang
2y
Heh, amazing brain-twister! So, consequentially speaking, a Chinese invasion of Taiwan would be (maximally) good, actually?

Insects are more likely to be copies of each other and thus have less moral value.

There are two city-states, Heteropolis and Homograd, with equal populations, equal average happiness, equal average lifespan, and equal GDP.

Heteropolis is multi-ethnic, ideologically-diverse, and hosts a flourishing artistic community. Homograd's inhabitants belong to one ethnic group, and are thoroughly indoctrinated into the state ideology from infancy. Pursuits that aren't materially productive, such as the arts, are regarded as decadent in Homograd, and are therefore v... (read more)

2
turchin
2y
If everyone in Homograd were an absolute copy of everyone else, the city would have much less moral value for me. If Homograd contained just one pair of exact copies, and all the other people were different, that would mean its real population is N-1, so I would choose to nuke Homograd. But! I'm not judging diversity here aesthetically, only as the chance that there will be more or fewer exact copies.

While EA calls itself “effective”, we rarely see its effects, because the biggest effects are supposed to happen in the remote future, in remote countries, and to be statistical.

EA pumps resources from near to far: to distant countries, to a distant future, to other beings. At the same time, the volume of the “far” is always greater than the volume of the near, which means the pumping will never stop, and so the good of the “neighbours” will never come. And this causes a muted protest from the general public, which already feels that it has been robbed by... (read more)
1
Jeffrey Kursonis
2y
true altruism vs. ulterior motive for social gain as you mention here, as well as legible vs. illegible above...I am less cynical than some people...I often receive from people only imagining they seek my good...and I do for others truly only seeking their good...usually...the side benefits that accrue occasionally in a community are an echo of goodness coming back to you...of course people have a spectrum of motivations, some seek the good, some the echo...but both are beneficial so who cares?  Good doing shouldn't get hung up on motivations, they are trivial...I think they are mostly a personal internal transaction...you may be happier inside if you are less self seeking at your core...but we all have our needs and are evolving. 
2
turchin
2y
When I tell people that I once reminded a driver to look at a pedestrian ahead, probably saving that pedestrian, they generally react negatively, saying something like: the driver would have seen the pedestrian eventually anyway, but my crazy reaction could have distracted him. Also, I once pulled a girl back to safety from a street where an SUV was about to hit her - and she doesn't even call me on my birthdays! So helping neighbours doesn't confer status, in my experience.

I might've slightly decreased nuclear risk. I worked on an Air Force contract where I trained neural networks to distinguish between earthquakes and clandestine nuclear tests given readings from seismometers.

The point of this contract was to aid in the detection (by the Air Force and the UN) of secret nuclear weapon development by signatories to the Comprehensive Nuclear-Test-Ban Treaty and the Nuclear Non-Proliferation Treaty. (So basically, Iran.) The existence of such monitoring was intended to discourage "rogue nations" (Iran) from developing nukes.
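A minimal sketch of the kind of classifier this describes, purely illustrative: the architecture, window length, and preprocessing here are my assumptions, not details of the actual contract work.

```python
# Illustrative sketch only: binary classification of seismic waveform windows
# as "earthquake" vs. "explosion" (clandestine nuclear test). All shapes and
# layers are assumptions for illustration, not the contract's architecture.
import torch
import torch.nn as nn

class SeismicEventClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2),  # 1-channel raw waveform in
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # average over time
            nn.Flatten(),
            nn.Linear(32, 2),                            # logits: [earthquake, explosion]
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, samples) of normalized seismometer readings
        return self.net(waveform)

model = SeismicEventClassifier()
logits = model(torch.randn(8, 1, 4096))  # 8 dummy 4096-sample windows
print(logits.shape)                      # torch.Size([8, 2])
```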

That b... (read more)

3
Misha_Yagudin
2y
Thank you for your work!
3
MichaelDickens
2y
Don't sell yourself short. IMO, any nuclear exchange would dramatically increase the probability of a global nuclear war, even if that probability is still small by non-x-risk standards.

If you spend a lot of time in deep thought trying to reconcile "I did X, and I want to do Y" with the implicit assumption "I am a virtuous and pure-hearted person", then you're going to end up getting way better at generating prosocial excuses via motivated reasoning.

If, instead, you're willing to consider less-virtuous hypotheses, you might get a better model of your own actions. Such a hypothesis would be "I did X in order to impress my friends, and I chose career path Y in order to make my internal model of my parents proud".

Realizing such uncomfort... (read more)

3
ThomasW
2y
This is a good point which I don't think I considered enough. This post describes this somewhat. I do think the signal for which actions are best to take has to come from somewhere. You seem to be suggesting the signal can't come from the decisionmaker at all, since people make decisions before thinking about them. I think that's possible, but I still think there's at least some component of people thinking clearly about their decision, even if what they're actually doing is trying to emulate what those around them would think. We do want to generate actual signal for what is best, and maybe we can do this somewhat by seriously thinking about things, even if there is certainly a component of motivated reasoning no matter what.

If this estimate is based on social evaluations, won't the people making those evaluations have the same problem with motivated reasoning? It's not clear this is a better source of signal for which actions are best for individuals. If signal can never truly come from subjective evaluation, it seems like it wouldn't be solved by moving to social evaluation.

One thing that might help would be concrete, measurable metrics, but this seems way harder in some fields than others.

Reminder that split-brain experiments indicate that the part of the brain that makes decisions is not the part of the brain that explains decisions. The evolutionary purpose of the brain's explaining-module is to generate plausible-sounding rationalizations for the brain's decision-modules' actions. These explanations also have to adhere to the social norms of the tribe, in order to avoid being shunned and starving.

Humans are literally built to generate prosocial-sounding rationalizations for their behavior. They rationalize things to themselves even when... (read more)

4
ThomasW
2y
Yes, people will always have motivated reasoning, for essentially every explanation of their actions they give. That being said, I expect it to be weaker for the small set of things people actually think about deeply, rather than things they're asked to explain after the fact that they didn't think about at all. Though I could be wrong about this expectation.

The books thing is a real problem. There's probably a lot of potential impact in translating the Sequences into YouTube video-essays.

1
Guy Raveh
2y
People mention "The Sequences" all over - which sequences are they specifically referring to?

Your chosen method - refuting a rule with a counterexample - throws out all moral rules, since every moral theory has counterexamples.

This sounds a lot like: "every hypothesis can eventually be falsified with evidence; therefore, trying to falsify hypotheses rules out every hypothesis, so we shouldn't try to falsify hypotheses."

But we are Bayesians, are we not? If we are, we should update away from ethical principles when novel counterexamples are brought to our attention, with the magnitude of the update proportional to the unpleasantness of the counterexample.
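As a sketch of what that update could look like (my formalization; the comment itself states no formula), treat the rule's correctness R as a hypothesis and the counterexample C as evidence:

```latex
% Sketch (my formalization, not from the comment): Bayesian update on an
% ethical principle R after encountering a counterexample C.
% If R were correct, it should rarely produce verdicts we find repugnant,
% so P(C|R) is small -- and the more repugnant C is, the smaller P(C|R),
% hence the larger the downward update on R.
P(R \mid C) = \frac{P(C \mid R)\,P(R)}
                   {P(C \mid R)\,P(R) + P(C \mid \neg R)\,P(\neg R)}
```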

4
Gavin
2y
Agreed

If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA.

What're the costs/benefits of reversing this shame? By "reversing shame" I mean explicitly pitching EA to people as an opportunity for them to pursue their non-utilitarian desires.

I made my account to upvote this. EA would do well to think more clearly about the practical nature of altruism and self-deception.