Daniel Kirmani


Comments

Is it morally permissible for Effective Altruists to travel (far) for pleasure?

I don't like this post. It feels like a step down a purity spiral. An Effective Altruist is anyone who wants to increase net utility, not someone who has no other goals.

New Cause Area: Demographic Collapse

Curing aging also fixes the demographic collapse.

Preventing a US-China war as a policy priority

TSMC, a Taiwanese firm, is currently the global semiconductor linchpin. What would be the implications of Chinese invasion for AGI timelines?

Edit: Kinda-answered here by Wei Dai, and in this very comment thread. My takeaways: a Chinese invasion would push AI timelines back, but only slightly. It would also disadvantage Chinese AI capabilities research relative to NATO's.

Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry]

Insects are more likely to be copies of each other and thus have less moral value.

There are two city-states, Heteropolis and Homograd, with equal populations, equal average happiness, equal average lifespan, and equal GDP.

Heteropolis is multi-ethnic, ideologically-diverse, and hosts a flourishing artistic community. Homograd's inhabitants belong to one ethnic group, and are thoroughly indoctrinated into the state ideology from infancy. Pursuits that aren't materially productive, such as the arts, are regarded as decadent in Homograd, and are therefore virtually nonexistent.

Two questions for you:

  • Would it be more ethical to nuke Homograd than to nuke Heteropolis?
  • Imagine a trolley problem, with varying numbers of Homograders and Heteropolites tied to each track. Find a ratio that renders you indifferent as to which path the trolley takes. What is the moral exchange rate between Homograders and Heteropolites?
Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry]

While EA calls itself "effective", we rarely see its effects, because the biggest effects are supposed to occur in the remote future, in remote countries, and to be statistical.

EA pumps resources from near to far: to distant countries, to the distant future, to other beings. But the volume of the "far" always exceeds the volume of the near, so the pumping never stops, and the good of one's "neighbours" never arrives. This provokes a muted protest from the general public, which already feels it has been robbed by taxes and the like.

Generating legible utility is far more costly than generating illegible utility, because people compete to generate legible utility in order to jockey for status. If your goal is to generate utility, to hell with status, then the utility you generate will likely be illegible.

But sometimes helping a neighbour is cheaper than helping a distant person, because we have unique knowledge and opportunities in our inner circle.

If you help your neighbor, he is likely to feel grateful, elevating your status in the local community. Additionally, he would be more likely to help you out if you were ever down on your luck. I'm sure that nobody would ever try to rationalize this ulterior motive under the guise of altruism.

What are EA's biggest legible achievements in x-risk?

I might've slightly decreased nuclear risk. I worked on an Air Force contract where I trained neural networks to distinguish between earthquakes and clandestine nuclear tests given readings from seismometers.

The point of this contract was to aid in the detection (by the Air Force and the UN) of secret nuclear weapon development by signatories to the UN's Comprehensive Test Ban Treaty and the Nuclear Non-Proliferation Treaty. (So basically, Iran.) The existence of such monitoring was intended to discourage "rogue nations" (Iran) from developing nukes.
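The discrimination task described above can be sketched, very loosely, as a binary classifier over waveform features. The toy example below is entirely hypothetical: the synthetic "seismograms", the two hand-picked energy features, and the two-weight logistic model are all my inventions for illustration, not the actual contract's data, features, or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_waveform(is_explosion: bool, n: int = 1024) -> np.ndarray:
    """Toy seismogram: explosions get a sharp, fast-decaying envelope;
    earthquakes a slower decay with a longer coda. Purely illustrative."""
    t = np.arange(n)
    onset = 100
    decay = 0.02 if is_explosion else 0.005
    envelope = np.exp(-decay * np.clip(t - onset, 0, None)) * (t >= onset)
    return envelope * rng.normal(0.0, 1.0, n)

def waveform_features(w: np.ndarray) -> np.ndarray:
    """Two crude features: log energy early vs. late in the record."""
    early = np.sum(w[:256] ** 2)
    late = np.sum(w[256:] ** 2)
    return np.array([np.log1p(early), np.log1p(late)])

# Build a labeled training set: 1 = explosion, 0 = earthquake.
labels = rng.integers(0, 2, 400)
X = np.stack([waveform_features(synthetic_waveform(bool(y))) for y in labels])

# Train a minimal logistic-regression "network" by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(explosion)
    w -= 0.5 * X.T @ (p - labels) / len(labels)
    b -= 0.5 * np.mean(p - labels)

acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == labels)
print(f"training accuracy: {acc:.2f}")
```

Real explosion/earthquake discrimination relies on far richer signals (P/S amplitude ratios, spectral content, depth estimates, array data), but the shape of the problem is the same: labeled events in, a decision boundary out.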

That being said, I don't think an Iran-Israel exchange would constitute an existential risk, unless it then triggered a global nuclear war. Also, it's not clear that my contribution to the contract actually strengthened the deterrent against Iran. However, if (a descendant of) my model ends up being used by NATO, perhaps I helped out by decreasing the chance of a false positive.

Disclaimer: This was before I had ever heard of EA. Still, I've always been somewhat EA-minded, so maybe you can attribute this to proto-EA reasoning. When I was working on the project, I remember telling myself that even a very small reduction in the odds of a nuclear war happening meant a lot for the future of mankind.

You Don't Need To Justify Everything

If you spend a lot of time in deep thought trying to reconcile "I did X, and I want to do Y" with the implicit assumption "I am a virtuous and pure-hearted person", then you're going to end up getting way better at generating prosocial excuses via motivated reasoning.

If, instead, you're willing to consider less-virtuous hypotheses, you might get a better model of your own actions. Such a hypothesis would be "I did X in order to impress my friends, and I chose career path Y in order to make my internal model of my parents proud".

Realizing such uncomfortable truths bruises the ego, but can also bear fruit. For example: if many EAs' real reason for working on what they do is to impress others, then this fact can be leveraged to generate more utility. A leaderboard on the forum, ranking users by (some EA organization's estimate of) their personal impact, could give rise to a whole bunch of QALYs.

You Don't Need To Justify Everything

Reminder that split-brain experiments indicate that the part of the brain that makes decisions is not the part of the brain that explains decisions. The evolutionary purpose of the brain's explaining-module is to generate plausible-sounding rationalizations for the brain's decision-modules' actions. These explanations also have to adhere to the social norms of the tribe, in order to avoid being shunned and starving.

Humans are literally built to generate prosocial-sounding rationalizations for their behavior. They rationalize things to themselves even when they are not being interrogated, possibly because it's best to pre-compute and cache rationalizations that one is likely to need later. It has been postulated that this is the reason that people have internal monologues, or indeed, the reason that humans evolved big brains in the first place.

We were built to do motivated reasoning, so it's not a bad habit that you can simply drop after reading the right blog post. Instead, it's a fundamental flaw in our thought-processes, and must always be consciously corrected. Anytime you say "I did X because Y" without thinking about it, you are likely dead wrong.

The only way to figure out why you did anything is through empirical investigation of your past behavior (revealed preferences). This is not easy, it risks exposing your less-virtuous motivations, and almost nobody does it, so you will seem weird and untrustworthy if you always respond to "Why did you do X?" with "I don't know, let me think". People will instinctively want to trust and befriend the guy who always has a prosocial rationalization on the tip of his tongue. Honesty is hard.
