Hi! I strongly endorse pronatalism, and I will readily admit to wanting to reduce x-risk in order to keep my family safe.
Great! I also want to reduce x-risk to keep my family safe. But do you also strongly endorse the claims listed in the article that are attributed to pronatalism, and do you consider yourself an EA / a longtermist?
i.e.
"fear that falling birth rates in certain developed countries like the United States and most of Europe will lead to the extinction of cultures, the breakdown of economies, and, ultimately, the collapse of civilization."
"worry that the overlap between the types of people deciding not to have children with the part of the population that values...
I'd be curious to know why people downvoted this.
Strengthening the association between "rationalist" and "furry" decreases the probability that AI research organizations will adopt AI safety proposals proposed by "rationalists".
The EA consensus is roughly that being blunt about AI risk with the broader public would cause social havoc.
Social havoc isn't bad by default. It's possible that a publicity campaign would result in regulations that choke the life out of AI capabilities progress, just like the FDA choked the life out of biomedical innovation.
As Wei Dai mentioned, tribes in the EEA weren't particularly fond of other tribes. Why should people's ingroup-compassion scale up, but their outgroup-contempt shouldn't? Your argument supports both conclusions.
“Shut Up and Divide” boils down to “actually, you maybe shouldn’t care about individual strangers, because that’s more logically consistent (unless you multiply, in which case it’s equally consistent)”. But caring is a higher and more human virtue than being consistent, especially since there are two options here: be consistent and care about individual strangers, or just be consistent.
This reasoning seems confused. Caring more about certain individuals than others is a totally valid utility function that you can have. You can't
especially care about i...
I think you should be in favor of caring more (shut up and multiply) over caring less (shut up and divide) because your intuitive sense of caring evolved when your sphere of influence was small.
Your argument proves too much:
I don't like this post. It feels like a step down a purity spiral. An Effective Altruist is anyone who wants to increase net utility, not one who has no other goals.
TSMC, a Taiwanese firm, is currently the global semiconductor linchpin. What would be the implications of Chinese invasion for AGI timelines?
Edit: Kinda-answered here by Wei Dai, and in this very comment thread. My takeaways: Chinese invasion would push AI timelines into the future, but only a little. It would also disadvantage Chinese AI capabilities research relative to that of NATO.
Insects are more likely to be copies of each other and thus have less moral value.
There are two city-states, Heteropolis and Homograd, with equal populations, equal average happiness, equal average lifespan, and equal GDP.
Heteropolis is multi-ethnic, ideologically-diverse, and hosts a flourishing artistic community. Homograd's inhabitants belong to one ethnic group, and are thoroughly indoctrinated into the state ideology from infancy. Pursuits that aren't materially productive, such as the arts, are regarded as decadent in Homograd, and are therefore v...
While EA calls itself "effective", we rarely see its effects, because the biggest effects are supposed to happen in the remote future, in remote countries, and to be statistical.
...EA pumps resources from near to far: to distant countries, to a distant future, to other beings. At the same time, the volume of the "far" is always greater than the volume of the near, so the pumping will never stop, and the good of the "neighbours" will never come. This provokes a muted protest from the general public, which already feels that it has been robbed by
I might've slightly decreased nuclear risk. I worked on an Air Force contract where I trained neural networks to distinguish between earthquakes and clandestine nuclear tests given readings from seismometers.
The point of this contract was to aid in the detection (by the Air Force and the UN) of secret nuclear weapon development by signatories to the UN's Comprehensive Test Ban Treaty and the Nuclear Non-Proliferation Treaty. (So basically, Iran.) The existence of such monitoring was intended to discourage "rogue nations" (Iran) from developing nukes.
That b...
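For illustration, here's a minimal sketch of that kind of earthquake-vs-explosion discriminator. Everything below is a toy assumption, not the actual contract work: the features (P/S amplitude ratio and a corner-frequency proxy), the synthetic Gaussian data, and the tiny logistic-regression "network" are all stand-ins for real seismometer-derived features and a real neural net.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_features(n, explosion):
    # Two made-up discriminant features per event:
    # 1) P/S amplitude ratio (explosions tend to radiate relatively more P energy)
    # 2) a corner-frequency proxy (explosions tend to be more impulsive)
    # The means/stds here are illustrative, not calibrated to real seismology.
    if explosion:
        ps = rng.normal(2.0, 0.5, n)
        corner = rng.normal(8.0, 1.5, n)
    else:
        ps = rng.normal(0.8, 0.4, n)
        corner = rng.normal(4.0, 1.5, n)
    return np.column_stack([ps, corner])

# 200 synthetic earthquakes (label 0) and 200 synthetic explosions (label 1).
X = np.vstack([synth_features(200, False), synth_features(200, True)])
y = np.concatenate([np.zeros(200), np.ones(200)])

# A one-layer logistic-regression classifier trained by gradient descent,
# standing in for the neural network described above.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(explosion)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == y).mean()
print(f"train accuracy: {accuracy:.2f}")
```

On this well-separated synthetic data the classifier separates the two classes easily; the hard part of the real task is that genuine seismic features overlap far more than these toy Gaussians do.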
If you spend a lot of time in deep thought trying to reconcile "I did X, and I want to do Y" with the implicit assumption "I am a virtuous and pure-hearted person", then you're going to end up getting way better at generating prosocial excuses via motivated reasoning.
If, instead, you're willing to consider less-virtuous hypotheses, you might get a better model of your own actions. Such a hypothesis would be "I did X in order to impress my friends, and I chose career path Y in order to make my internal model of my parents proud".
Realizing such uncomfort...
Reminder that split-brain experiments indicate that the part of the brain that makes decisions is not the part of the brain that explains decisions. The evolutionary purpose of the brain's explaining-module is to generate plausible-sounding rationalizations for the brain's decision-modules' actions. These explanations also have to adhere to the social norms of the tribe, in order to avoid being shunned and starving.
Humans are literally built to generate prosocial-sounding rationalizations for their behavior. They rationalize things to themselves even when...
The books thing is a real problem. There's probably a lot of potential impact in translating the Sequences into YouTube video-essays.
Your chosen method - refuting a rule with a counterexample - throws out all moral rules, since every moral theory has counterexamples.
This sounds a lot like saying: "Every hypothesis can eventually be falsified by evidence; therefore, trying to falsify hypotheses rules out every hypothesis, so we shouldn't try to falsify hypotheses."
But we are Bayesians, are we not? If we are, we should update away from ethical principles when novel counterexamples are brought to our attention, with the magnitude of the update proportional to the unpleasantness of the counterexample.
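That update rule can be made concrete with a toy Bayes calculation. All the numbers below are illustrative assumptions: "unpleasantness" is modeled as how improbable the counterexample would be if the principle were really right.

```python
def posterior(prior, p_ce_if_true, p_ce_if_false=0.9):
    """Bayes' rule: P(principle holds | counterexample observed).

    p_ce_if_true: probability of seeing this counterexample if the
    principle is correct (lower = more damning counterexample).
    p_ce_if_false: probability of seeing it if the principle is wrong.
    """
    num = p_ce_if_true * prior
    return num / (num + p_ce_if_false * (1.0 - prior))

prior = 0.7  # made-up starting credence in some ethical principle
mild = posterior(prior, p_ce_if_true=0.5)     # mildly unpleasant counterexample
severe = posterior(prior, p_ce_if_true=0.05)  # severely unpleasant counterexample

print(f"after mild counterexample:   {mild:.2f}")
print(f"after severe counterexample: {severe:.2f}")
```

Both counterexamples push credence down from the prior, but the more unpleasant one (less likely under the principle) produces the larger update, which is the proportionality claimed above.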
If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA.
What're the costs/benefits of reversing this shame? By "reversing shame" I mean explicitly pitching EA to people as an opportunity for them to pursue their non-utilitarian desires.
I made my account to upvote this. EA would do well to think more clearly about the practical nature of altruism and self-deception.
No, this is not one of the things that scares me. Also, birth rates decline predictably once a nation is developed, so if this were a significant concern, it would end up hitting China and India just as hard as it is currently hitting the US and Europe.
No. Adoption of Progressive ideology is a memetic phenomenon, with mild to no genetic influence. (Update, 2023-04-03: I don't endorse this claim, actually. I also don't endo...