All of Daniel Kirmani's Comments + Replies

Is it morally permissible for Effective Altruists to travel (far) for pleasure?

I don't like this post. It feels like a step down a purity spiral. An Effective Altruist is anyone who wants to increase net utility, not someone who has no other goals.

New Cause Area: Demographic Collapse

Curing aging also fixes the demographic collapse.

6 · Malcolm Collins · 1mo
What is your estimate of the timeline for a person of average income to afford said cure? In what year do you estimate it would be available? (I ask because, while I agree with you, even basic medical care is not available to most people in the world right now. I suppose it depends on the mechanism of action of the aging cure; a viral vector might be inexpensive to mass-produce.) Note: Adding your suggestion to the document
Preventing a US-China war as a policy priority

TSMC, a Taiwanese firm, is currently the global semiconductor linchpin. What would be the implications of Chinese invasion for AGI timelines?

Edit: Kinda-answered here by Wei Dai, and in this very comment thread. My takeaways: Chinese invasion would push AI timelines into the future, but only a little. It would also disadvantage Chinese AI capabilities research relative to that of NATO.

2 · Simon Zhang · 1mo
Heh, an amazing brain-twister! So, consequentially speaking, a Chinese invasion of Taiwan would be (maximally) good, actually?
Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry]

Insects are more likely to be copies of each other and thus have less moral value.

There are two city-states, Heteropolis and Homograd, with equal populations, equal average happiness, equal average lifespan, and equal GDP.

Heteropolis is multi-ethnic, ideologically-diverse, and hosts a flourishing artistic community. Homograd's inhabitants belong to one ethnic group, and are thoroughly indoctrinated into the state ideology from infancy. Pursuits that aren't materially productive, such as the arts, are regarded as decadent in Homograd, and are therefore v... (read more)

2 · turchin · 2mo
If everyone in Homograd were an exact copy of everyone else, the city would have much less moral value for me. If Homograd held only two people who were exact copies of each other, and all the other people were different, that would mean its real population is N−1, so I would choose to nuke Homograd. But! I don't judge diversity here aesthetically, only as the chance that there will be more or fewer exact copies.
Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry]

While EA calls itself “effective”, we rarely see its effects, because the biggest effects are supposed to happen in the remote future and in remote countries, and to be statistical.

EA pumps resources from near to far: to distant countries, to a distant future, to other beings. At the same time, the volume of the “far” is always greater than the volume of the near; that is, the pumping will never stop, and therefore the good of the “neighbours” will never come. And this provokes a muted protest from the general public, which already feels that it has been robbed by

... (read more)
1 · JeffreyK · 2mo
True altruism vs. ulterior motive for social gain, as you mention here, as well as legible vs. illegible above... I am less cynical than some people... I often receive from people only imagining they seek my good... and I do for others truly seeking only their good... usually... the side benefits that occasionally accrue in a community are an echo of goodness coming back to you... of course people have a spectrum of motivations; some seek the good, some the echo... but both are beneficial, so who cares? Good-doing shouldn't get hung up on motivations; they are trivial... I think they are mostly a personal internal transaction... you may be happier inside if you are less self-seeking at your core... but we all have our needs and are evolving.
2 · turchin · 2mo
When I tell people that I once reminded a driver to look at a pedestrian ahead, probably saving that pedestrian, they generally react negatively, saying something like: the driver would have seen the pedestrian eventually anyway, and my crazy reaction could have distracted him. Also, I once almost pulled a girl back to safety from a street where an SUV was about to hit her, and she doesn't even call me on my birthdays! So helping neighbours doesn't confer status, in my experience.
What are EA's biggest legible achievements in x-risk?

I might've slightly decreased nuclear risk. I worked on an Air Force contract where I trained neural networks to distinguish between earthquakes and clandestine nuclear tests given readings from seismometers.

The point of this contract was to aid in the detection (by the Air Force and the UN) of secret nuclear weapon development by signatories to the Comprehensive Nuclear-Test-Ban Treaty and the Nuclear Non-Proliferation Treaty. (So basically, Iran.) The existence of such monitoring was intended to discourage "rogue nations" (Iran) from developing nukes.
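For concreteness, here's a minimal sketch of the kind of model involved. The architecture, input shapes, and training data below are illustrative stand-ins I'm making up for this comment, not the contract's actual code:

```python
# Illustrative sketch only: a binary classifier over seismic waveforms,
# in the spirit of the earthquake-vs-test discrimination described above.
import torch
import torch.nn as nn

class SeismicDiscriminator(nn.Module):
    """1-D CNN mapping a single-channel seismogram to a logit for 'explosion-like'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=15, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_samples) raw or band-passed seismometer trace
        z = self.features(x).squeeze(-1)   # (batch, 32) pooled features
        return self.head(z).squeeze(-1)    # (batch,) logits

# Toy training loop on random stand-in data; real work would use labelled
# waveforms (e.g. catalogued earthquakes vs. announced test explosions).
model = SeismicDiscriminator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
waveforms = torch.randn(64, 1, 4096)          # fake seismograms
labels = torch.randint(0, 2, (64,)).float()   # 1 = explosion-like event
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(model(waveforms), labels)
    loss.backward()
    opt.step()
```

This is only meant to make the shape of the task concrete; as I understand it, real monitoring work also leans heavily on physics-informed discriminants such as P-wave to S-wave amplitude ratios rather than raw waveforms alone.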

That b... (read more)

3 · Misha_Yagudin · 2mo
Thank you for your work!
3 · MichaelDickens · 2mo
I wouldn't sell yourself short. IMO, any nuclear exchange would dramatically increase the probability of a global nuclear war, even if the probability is still small by non-xrisk standards.
2 · acylhalide · 2mo
Thanks for this anecdote! Given the scarcity of such successes, I think people here would be interested in hearing a longer-form version of it. Just wanted to suggest that!
You Don't Need To Justify Everything

If you spend a lot of time in deep thought trying to reconcile "I did X, and I want to do Y" with the implicit assumption "I am a virtuous and pure-hearted person", then you're going to end up getting way better at generating prosocial excuses via motivated reasoning.

If, instead, you're willing to consider less-virtuous hypotheses, you might get a better model of your own actions. Such a hypothesis would be "I did X in order to impress my friends, and I chose career path Y in order to make my internal model of my parents proud".

Realizing such uncomfort... (read more)

3 · ThomasWoodside · 2mo
This is a good point which I don't think I considered enough. This post [https://forum.effectivealtruism.org/posts/4Xv4pB8izmYZCXXhj/] describes this somewhat.

I do think the signal for which actions are best to take has to come from somewhere. You seem to be suggesting the signal can't come from the decisionmaker at all, since people make decisions before thinking about them. I think that's possible, but I still think there's at least some component of people thinking clearly about their decision, even if what they're actually doing is trying to emulate what those around them would think. We do want to generate actual signal for what is best, and maybe we can do this somewhat by seriously thinking about things, even if there is certainly a component of motivated reasoning no matter what.

If this estimate is based on social evaluations, won't the people making those evaluations have the same problem with motivated reasoning? It's not clear this is a better source of signal for which actions are best for individuals. If signal can never truly come from subjective evaluation, it seems like it wouldn't be solved by moving to social evaluation. One alternative would be concrete, measurable metrics, but this seems way harder in some fields than others.
You Don't Need To Justify Everything

Reminder that split-brain experiments indicate that the part of the brain that makes decisions is not the part of the brain that explains decisions. The evolutionary purpose of the brain's explaining-module is to generate plausible-sounding rationalizations for the brain's decision-modules' actions. These explanations also have to adhere to the social norms of the tribe, in order to avoid being shunned and starving.

Humans are literally built to generate prosocial-sounding rationalizations for their behavior. They rationalize things to themselves even when... (read more)

4 · ThomasWoodside · 2mo
Yes, people will always have motivated reasoning, for essentially every explanation of their actions they give. That being said, I expect it to be weaker for the small set of things people actually think about deeply, rather than things they're asked to explain after the fact that they didn't think about at all. Though I could be wrong about this expectation.
A Visit to the Idea Machine Fair

The books thing is a real problem. There's probably a lot of potential impact in translating the Sequences into YouTube video-essays.

1 · Guy Raveh · 2mo
People mention "The Sequences" all over - which sequences are they specifically referring to?
Just Say No to Utilitarianism

Your chosen method - refuting a rule with a counterexample - throws out all moral rules, since every moral theory has counterexamples.

This sounds a lot like: "every hypothesis can eventually be falsified by some evidence; therefore, trying to falsify hypotheses rules out every hypothesis; so we shouldn't try to falsify hypotheses."

But we are Bayesians, are we not? If we are, we should update away from ethical principles when novel counterexamples are brought to our attention, with the magnitude of the update proportional to the unpleasantness of the counterexample.
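To make that concrete, here's a rough sketch with made-up numbers; treat the ethical principle as a hypothesis and the counterexample as evidence against it:

```latex
% Hedged sketch, not from the original comment: let E be an ethical
% principle and C an observed counterexample. Bayes' theorem gives
\[
  P(E \mid C)
  = \frac{P(C \mid E)\,P(E)}{P(C \mid E)\,P(E) + P(C \mid \neg E)\,P(\neg E)}.
\]
% The more unpleasant the counterexample, the lower P(C | E) should be
% (a sound principle should rarely endorse awful verdicts), and so the
% larger the downward update. With hypothetical numbers P(E) = 0.8,
% P(C | E) = 0.1, P(C | not-E) = 0.5:
%   posterior = (0.1)(0.8) / [(0.1)(0.8) + (0.5)(0.2)]
%             = 0.08 / 0.18, roughly 0.44.
```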

4 · Gavin · 2mo
Agreed
Unflattering reasons why I'm attracted to EA

If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA.

What're the costs/benefits of reversing this shame? By "reversing shame" I mean explicitly pitching EA to people as an opportunity for them to pursue their non-utilitarian desires.

Unflattering reasons why I'm attracted to EA

I made my account to upvote this. EA would do well to think more clearly about the practical nature of altruism and self-deception.