All of Question Mark's Comments + Replies

For alignment to be solved at all, it would have to be solvable with human-level intelligence. Even though IQ-augmented humans wouldn't be "superintelligent", they would have additional intelligence they could use to solve alignment. Additionally, it probably takes more intelligence to build an aligned superintelligence than to build an arbitrary one. Without alignment, chances are that the first superintelligence to exist will be whichever superintelligence is easiest to build.

1
Pato
2y
I don't agree with the first statement, nor do I understand what you are arguing for or against.

These aren't exactly memes, but here are a few images I generated in Craiyon involving EA-related topics.

6
MichaelDickens
2y
What I'm getting is we need to either clean up trash, or move Louisiana into southern Texas.
3
Nathan Young
2y
Hmmm. Thought provoking

Suffering risks have the potential to be far, far worse than the risk of extinction.  Negative utilitarians and EFILists may also argue that human extinction and biosphere destruction may be a good thing or at least morally neutral, since a world with no life would have a complete absence of suffering. Whether to prioritize extinction risk depends on the expected value of the far future. If the expected value of the far future is close to zero, it could be argued that improving the quality of the far future in the event we survive is more important th... (read more)

A P-zombie universe could be considered a good thing if one is a negative utilitarian. If a universe lacks any conscious experience, it would not contain any suffering.

1
tobytrem
2y
Thanks for flagging - yes, I have definitely taken more of the MacAskill-Ord-Greaves party line in this post. Personally, I'm pretty uncertain about total utilitarianism, so the post should reflect that a little more.

A lot of people will probably dismiss this due to it being written by a domestic terrorist, but Ted Kaczynski's book Anti-Tech Revolution: Why and How is worth reading. He goes into detail on why he thinks the technological system will destroy itself, and why he thinks it's impossible for society to be subject to rational control. He goes into detail on the nature of chaotic systems and self-propagating systems, and he heavily criticizes individuals like Ray Kurzweil. Robin Hanson critiqued Kaczynski's collapse theory a few years ago on Overcoming Bias. It... (read more)

I suspect there's a good chance that populations in Western nations could be significantly higher than predicted according to your link. The reason for this is that we should expect natural selection to select for whatever traits maximize fertility in the modern environment, such as higher religiosity. This will likely lead to fertility rates rebounding in the next several generations. The sorts of people who aren't reproducing in the modern environment are being weeded out of the gene pool, and we are likely undergoing selection pressure for "breeders" wi... (read more)
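As a toy illustration of the rebound logic this comment describes (made-up numbers and a perfectly heritable trait, not an empirical model):

```python
# Toy model: population share of a hypothetical, perfectly heritable
# high-fertility trait across generations. All parameters are illustrative.

def high_fertility_share(initial_share=0.05, tfr_high=3.0, tfr_low=1.5, generations=8):
    """Return the trait's population share after the given number of generations."""
    share = initial_share
    for g in range(generations):
        offspring_high = share * tfr_high        # children of high-fertility parents
        offspring_low = (1 - share) * tfr_low    # children of everyone else
        share = offspring_high / (offspring_high + offspring_low)
        print(f"generation {g + 1}: high-fertility share = {share:.1%}")
    return share

high_fertility_share()  # under these assumptions, a 5% minority becomes a large majority within ~8 generations
```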

3
Max_Daniel
2y
I think this is a relevant consideration, but murkier than it appears at first glance.

Have you tried any tryptamine research chemicals like 4-HO-MET or 4-HO-MiPT? If so, have they had any noticeable effect on your depression?

Do you know of any estimates of the impact of more funding for AI safety? For instance, how much would an additional $1,000 increase the odds of the AI control problem being solved?

7
Zach Stein-Perlman
2y
I don't know of particular estimates. I do know that different (smart, reasonable, well-informed) people would give very different answers -- at least one would even say that the marginal AI safety researcher has negative expected value. Personally, I'm optimistic that even if you're skeptical of AI safety research in general, you can get positive expected value by (as a lower bound) doing something like giving money to particular researchers whose judgment you trust, to support researchers they think are promising. My guess is that the typical AI-concerned community leader would put the chance at one in 10 billion or better per $1,000.
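To make the order of magnitude concrete, here is a minimal back-of-the-envelope sketch, taking that one-in-10-billion figure at face value and using a purely hypothetical count of future lives at stake:

```python
# Expected value of a marginal $1,000, using the one-in-10-billion
# risk-reduction figure from the comment above. The lives-at-stake
# number is a placeholder assumption, not an estimate from the thread.

risk_reduction_per_1000usd = 1e-10   # chance that $1,000 tips the outcome
future_lives_at_stake = 1e16         # hypothetical placeholder

expected_lives = risk_reduction_per_1000usd * future_lives_at_stake
print(f"expected lives saved per $1,000: {expected_lives:,.0f}")  # 1,000,000
```

On these (very contestable) assumptions, even a tiny probability shift yields a large expected value, which is why answers vary so widely with one's priors on the inputs.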

Here's a chart of the amount of suffering caused by different animal foods that Brian Tomasik created. Farmed fish may have even more negative utility than chicken, since they are small, and more animals are therefore required per unit of meat. The chart is based on suffering per unit of edible food produced rather than suffering throughout the total population, and I'm not sure what the population of farmed fish is relative to the population of chickens. Chicken probably has more negative utility than fish if the chicken population is substantially higher than ... (read more)

2
Vasco Grilo
2y
Thanks for commenting! I am aware of that article, but you have just nudged me to make the calculation. Based on the Weighted Animal Welfare Index of Charity Entrepreneurship:

* The "total welfare score" (WS) is:
  * Lower than 100 for humans (as that is the defined maximum).
  * -44 for "FF fish – traditional aquaculture".
  * -56 for "FF broiler chicken".
* The "estimated population size" (P) is:
  * 1 T for "FF fish – traditional aquaculture".
  * 22 G for "FF broiler chicken".
  * 8.0 G for humans.
* Consequently, the total welfare (= WS*P) is:
  * -44 T for "FF fish – traditional aquaculture".
  * -1.2 T for "FF broiler chicken".
  * Lower than 0.80 T (= 100 * 8.0 G) for humans.

This suggests the negative utility of FF fish is:

* About 36 times (= 44/1.232) as large as that of FF chickens.
* More than 55 times (= 44/0.80) as large as the total welfare of humans.
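As a quick check on the arithmetic above (with G = 10^9 and T = 10^12), a minimal sketch reproducing the totals:

```python
# Reproduces the Weighted Animal Welfare Index arithmetic from the comment above.
G, T = 1e9, 1e12

welfare_score = {"fish": -44, "chicken": -56, "humans": 100}  # humans: defined maximum
population = {"fish": 1 * T, "chicken": 22 * G, "humans": 8.0 * G}

total = {k: welfare_score[k] * population[k] for k in welfare_score}
print(abs(total["fish"]) / abs(total["chicken"]))  # ~35.7: fish vs chicken
print(abs(total["fish"]) / total["humans"])        # ~55: fish vs humans; more in practice,
                                                   # since 100 is an upper bound on the human score
```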

Vegetarians/vegans should consider promoting beef and dairy as the only animal products people consume, as a potential strategy for getting people to cause less suffering to livestock with a high retention rate. I suspect that the average person would be much more willing to give up most animal products while still consuming beef and dairy than to give up meat entirely. Since cows are big, fewer animals are needed to produce a single unit of meat compared to meat from smaller animals. Vitalik Buterin has argued that eating big animals as an an... (read more)

3
Guy Raveh
2y
While I'm sympathetic to the idea of supporting people in becoming not entirely vegan - I think it pays off, impact-wise - I find it hard to believe that telling them which specific animals to eat is going to be worth the effort.

1. How sure are we of any choice of numbers in that calculation? Maybe a cow is morally worth 1 chicken, maybe 10,000.
2. Different people like different foods. I suspect their choice for "the one product that, if I keep eating it, will allow me to be otherwise vegan" would be much more broadly distributed.
3. So this seems both to take extra effort and to miss a lot of people, while the marginal gain from it is not that big.

... which arguably gives circumcised males the benefit of longer sex ;-)

Not necessarily. Male circumcision may actually cause premature ejaculation in some men.

More seriously: FGM can cause severe bleeding and problems urinating, and later cysts, infections, as well as complications in childbirth and increased risk of newborn deaths (WHO).

Other than complications in childbirth, male circumcision can also cause all of these complications. According to Ayaan Hirsi Ali, who is herself a victim of FGM, boys being circumcised in Africa have a higher risk of com... (read more)

In the same vein, comparing female genital mutilation to forced circumcision is... let's say ignorant of the effects of FGM.

This lecture by Eric Clopper has a decent analysis of the differences between male circumcision and FGM. Male circumcision removes more erogenous tissue and more nerve endings than most forms of FGM.

6
Noga Aharony
2y
Not to say that I am against this cause but this is a false equivalency between physiology and pleasure. What ratio of total erogenous tissue is removed in each procedure, and what impact does it have on ability to achieve an orgasm, as well as pleasure? Is it still higher for male circumcision relative to FGM along these measurements?

While it's true that women are more likely to be victims of sexual violence, men are more likely to be victims of non-sexual violence, such as murder and aggravated assault.

Murder is not a global top-10 cause of death or suffering.  Sexual abuse could very much be a global top-10 cause of suffering based on Akhil's post. 

3
DukeGartzea
2y
Yes, men are more likely to be victims of non-sexual violence, but you are omitting a fact of vital relevance, which results in a biased opinion. While the majority of murdered men are killed by other men who are strangers to them, around 50% of murdered women worldwide are killed by a partner or family member each year (1) (2). There is also the fact that "while men are more likely than women to be victims of homicide, they are even more likely to be the perpetrators." Recognizing the gender disparities in the kinds of violence that occur is key to ending it. We must look not only at the victims but also at who the aggressors are, and seek solutions that always take into account the power dynamics embedded in the construction of gender and its oppression and discrimination.

How does this compare to violence against men and boys as a cause area? Worldwide, 78.7% of homicide victims are men. Female genital mutilation is also generally recognized as being a human rights violation, while forced circumcision of boys is still extremely prevalent worldwide. For various social reasons, violence against males seems to be a more neglected cause area compared to violence against females.

64
Lizka
2y · Moderator Comment

Hey everyone, the moderators want to point out that this topic is heated for several reasons:

  • abuse/violence is already a topic people understandably have strong feelings about
  • the discussion in this thread got into comparing two populations and asking which of them has it worse, which might make people feel like the issues are being trivialized or dismissed. I think it might be best to evaluate the issues separately and see if they are promising as cause areas (e.g. via the ITN framework).

We want to ask everyone to be especially careful when discussing topics this sensitive. 

How's this argument different from saying, for example, that we can't rule out God's existence so we should take him into consideration? Or that we can't rule out the possibility of the universe being suddenly magically replaced with a utilitarian-optimal one?

If you want to reduce the risk of going to some form of hell as much as possible, you ought to determine what sorts of “hells” have the highest probability of existing, and to what extent avoiding said hells is tractable. As far as I can tell, the “hells” that seem to be the most realistic are hells ... (read more)

the scope is surely not infinite. The heat death of the universe and the finite number of atoms in it pose a limit.

We can't say for certain that travel to other universes is impossible, so we can't rule it out as a theoretical possibility. As for the heat death of the universe, Alexey Turchin created this chart of theoretical ways that the heat death of the universe could be survivable by our descendants.

Unless you think unaligned AIs will somehow be inclined to not only ignore what people want, but actually keep them alive and torture them - which sounds

... (read more)
0
Guy Raveh
2y
How's this argument different from saying, for example, that we can't rule out God's existence so we should take him into consideration? Or that we can't rule out the possibility of the universe being suddenly magically replaced with a utilitarian-optimal one? The linked post is basically a definition of what "survival" means, without any argument on how any of it is at all plausible. I don't find either plausible.
4
Frank_R
2y
It should be mentioned that all (or at least most) ideas for surviving the heat death of the universe involve speculative physics. Moreover, you have to deal with infinities. If everyone is suffering but there is one sentient being that experiences a happy moment every million years, does this mean that there is an infinite amount of suffering and an infinite amount of happiness, and that the future is of neutral value? If any future with an infinite amount of suffering is bad, does this mean that it is good if sentient life does not exist forever? There is no obvious answer to these questions.
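To see why the totals come out undefined rather than merely negative, consider the sums under the simplest reading (constant per-period magnitudes, for illustration only):

```latex
% Suffering of magnitude s accrues every period; a happy moment of
% magnitude h occurs once every 10^6 periods. Then
\sum_{t=1}^{\infty} s = \infty
\qquad\text{and}\qquad
\sum_{k=1}^{\infty} h = \infty ,
% so "total suffering minus total happiness" is the indeterminate form
% \infty - \infty, even though suffering dominates every finite time
% window by a factor of roughly 10^6 s/h.
```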

Suffering risks. S-risks are arguably a far more serious issue than extinction risk, as the scope of the suffering could be infinite. The fact that there is a risk of a misaligned superintelligence creating a hellish dystopia on a cosmic scale, with more intense suffering than has ever existed in history, means that even if the risk of this happening is small, it is balanced by its extreme disutility. S-risks are also highly neglected relative to their potential extreme disutility. It could even be argued that it would be rational to complet... (read more)

5
MichaelStJules
2y
FWIW, infinities could go either way if you recognize moral goods that can aggregate by summing. I think infinities seem more likely for suffering than for goods if your views are ethically asymmetric and assign more weight to suffering: especially if some kinds of suffering are infinitely bad but no goods are infinitely good (or there are no goods at all), or if goods can only offset but never outweigh bads.
6
Frank_R
2y
Other S-risks that may or may not sound more plausible are suffering simulations (maybe an AI comes to the conclusion that a good way to study humans is to simulate earth at the time of the Black Death) or suffering subroutines (maybe reinforcement learners that are able to suffer enable faster or more efficient algorithms). 
1
Guy Raveh
2y
To preface my criticism, I'll say I think concrete ways that AI may cause great suffering do deserve attention. But:

1. The scope is surely not infinite. The heat death of the universe and the finite number of atoms in it pose a limit.
2. Unless you think unaligned AIs will somehow be inclined to not only ignore what people want, but actually keep them alive and torture them - which sounds implausible to me - how's this not Pascal's mugging?

80,000 Hours has this list of what they consider to be the most pressing world problems, and this list ranking different cause areas by importance, tractability, and uncrowdedness. As for lists of specific organizations, Nuño Sempere created this list of longtermist organizations and evaluations of them, and I also found this AI alignment literature review and charity comparison. Brian Tomasik also wrote this list of charities evaluated from a suffering-reduction perspective.

2
MAXIMUM
2y
Thank you! I was actually looking for collaborative databases, where people actively add/learn stuff related to those issues, but it seems like they don't exist yet. The reason I asked is that charities still ultimately need to raise money and usually do not solve problems systemically/from the root of the issue.

Brian Tomasik's essay "Why I Don't Focus on the Hedonistic Imperative" is worth reading. Since biological life will almost certainly be phased out in the long run and be replaced with machine intelligence, AI safety probably has far more longtermist impact compared to biotech-related suffering reduction. Still, it could be argued that having a better understanding of valence and consciousness could make future AIs safer.

An argument against advocating human extinction is that cosmic rescue missions might eventually be possible. If the future of posthuman civilization converges toward utilitarianism, and posthumanity becomes capable of expanding throughout and beyond the entire universe, it might be possible to intervene in far-flung regions of the multiverse and put an end to suffering there.

3
Anthony Fleming
2y
Excellent point. Playing devil's advocate, one might be skeptical that humanity is good enough to perform these "cosmic rescue missions", either out of cruelty/indifference or simply because we will never be advanced enough. Still, it's a good concept to keep in mind.

5. Argument from Deep Ecology

    This is similar to the Argument from D-Risks, albeit more down to Earth (pun intended), and is the main stance of groups like the Voluntary Human Extinction Movement. Human civilization has already caused immense harm to the natural environment, and will likely not stop anytime soon. To prevent further damage to the ecosystem, we must allow our problematic species to go extinct.

This seems inconsistent with anti-natalism and negative utilitarianism. If we ought to focus on preventing suffering, why shouldn't a... (read more)

6
Anthony Fleming
2y
Good point, some of these arguments do contradict one another. I suppose if human extinction really were a good thing, it would be because of one or a few of these arguments, not all of them.

Even if the Symmetry Theory of Valence turns out to be completely wrong, that doesn't mean that QRI will fail to gain any useful insight into the inner mechanics of consciousness. Andrew Zuckerman sent me this comment previously on QRI's pathway to impact, in response to Nuño Sempere's criticisms of QRI. The expected value of QRI's research may therefore have a very high degree of variance. It's possible that their research will amount to almost nothing, but it's also possible that their research could turn out to have a large impact. As far as I know, the... (read more)

The way I presented the problem also fails to account for the fact that there seems to be a good chance of a strong apocalyptic Fermi filter that will destroy humanity, as this could account for why we appear to be so early in cosmic history (cosmic history is unavoidably about to end). This should skew us more toward hedonism.

Anatoly Karlin's Katechon Hypothesis is one Fermi Paradox hypothesis that is similar to what you are describing. The basic idea is that if we live in a simulation, the simulation may have computational limits. Once advanced c... (read more)

If we choose longtermism, then we are almost definitely in a simulation, because that means other people like us would have also chosen longtermism, and then would create countless simulations of beings in special situations like ourselves. This seems exceedingly more likely than that we just happened to be at the crux of the entire universe by sheer dumb luck.

Andrés Gómez Emilsson discusses this sort of thing in this video. The fact that we may be uniquely positioned in history to influence the far future may be strong evidence that we liv... (read more)
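The observer-counting version of this argument can be sketched as follows (k is an assumed number of simulated pivotal-era histories run per real one, and N the number of real pivotal-era observers; both are free parameters, not figures from the comment):

```latex
P(\text{simulated} \mid \text{apparently at the crux})
  = \frac{kN}{kN + N}
  = \frac{k}{k+1} \;\longrightarrow\; 1 \quad (k \to \infty)
```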

Even though there are some EA-aligned organizations that have plenty of funding, not all EA organizations are that well funded. You should consider donating to the causes within EA that are the most neglected, such as cause prioritization research. The Center for Reducing Suffering, for example, had received only £82,864.99 in total funding as of late 2021. The Qualia Research Institute is another EA-aligned organization that is funding-constrained, and believes it could put significantly more funding to good use.

19
mic
2y

The Qualia Research Institute might be funding-constrained but it's questionable whether it's doing good work; for example, see this comment here about its Symmetry Theory of Valence.

This isn't specifically AI alignment-related, but I found this playlist on defending utilitarian ethics. It discusses things like utility monsters and the torture vs. dust specks thought experiment, and is still somewhat relevant to effective altruism.

2
jacquesthibs
2y
Saving for potential future use. Thanks!

My concern for reducing S-risks is based largely on self-interest. There was this LessWrong post on the implications of worse than death scenarios. As long as there is a >0% chance of eternal oblivion being false and there being a risk of experiencing something resembling eternal hell, it seems rational to try to avert this risk, simply because of its extreme disutility. If Open Individualism turns out to be the correct theory of personal identity, there is a convergence between self-interest and altruism, because I am everyone.

The dilemma is that it do

... (read more)

This partially falls under cognitive enhancement, but what about other forms of consciousness research besides increasing intelligence, such as what QRI is doing? Hedonic set-point enhancement, i.e. making the brain more suffering-resistant, and research into creating David Pearce's idea of "biohappiness" are arguably just as important as intelligence enhancement. Having a better understanding of valence could also potentially make future AIs safer. Magnus Vinding also wrote this post on personality traits that may be desirable from an effective altruist pe... (read more)

2
Leo
2y
Thanks for raising this point. I agree that such a category could include enhancements not strictly limited to "being smarter". I think this is a legitimate cause area, but I'm not sure I would include Magnus's excellent post; I just don't feel he is proposing this as a cause area. Anyway, the real reason I didn't include it was far more trivial: it was published in April, and this update is supposed to cover up to March. I'm thinking about ways of extending the limit and keeping this up to date on a regular basis.

Regarding the risk of Effective Evil, I found this article on ways to reduce the threat of malevolent actors creating these sorts of disasters.

There was this post listing EA-related organizations. The org update tag also has a list of EA organizations. Nuño Sempere also wrote this list of evaluations of various longtermist EA organizations. As for specific individuals, Wikipedia has a category for people associated with Effective Altruism.

1
david_reinstein
2y
Some good content there, but I was looking specifically for researchers/academics for this particular case.

Which leads to the question of how we can get more people to produce promising work in AI safety. There are plenty of highly intelligent people out there who are capable of doing work in AI safety, yet almost none of them do. Maybe trying to popularize AI safety would help to indirectly contribute to it, since it might help to convince geniuses with the potential to work in AI safety to start working on it. It could also be an incentive problem. Maybe potential AI safety researchers think they can make more money by working in other fields, or maybe there ... (read more)

It depends on what you mean by "neglected", since neglect is a spectrum. It's a lot less neglected than it was in the past, but it's still neglected compared to, say, cancer research or climate change. In terms of public opinion, the average person probably has little understanding of AI safety. I've encountered plenty of people saying things like "AI will never be a threat because AI can only do what it's programmed to do" and variants thereof.

What is neglected within AI safety is suffering-focused AI safety for preventing S-risks. Most AI safety research... (read more)

8
Steven Byrnes
2y
I disagree. I think if AGI safety researchers cared exclusively about s-risk, their research output would look substantially the same as it does today, e.g. see here and discussion thread. Ambitious value learning and CEV are not a particularly large share of what AGI safety researchers are working on on a day-to-day basis, AFAICT. And insofar as researchers are thinking about those things, a lot of that work is trying to figure out whether those things are good ideas in the first place, e.g. whether they would lead to religious hell.

A major reason why support for eugenically raising IQs through gene editing is low in Western countries could be a backlash against Nazism, since Nazism is associated with eugenics in the mind of the average person. The low level of support in East Asia is more uncertain. One possible explanation is that East Asians have a risk-averse culture.

Interestingly, Hindus and Buddhists also have some of the highest rates of support for evolution among any religious groups. There was a poll from 2009 that showed that 80% of Hindus and 81% of Buddhists in the United... (read more)

As a side note, I found this poll of public opinion of gene editing in different countries. India apparently has the highest rate of social acceptance of using gene editing to increase intelligence of any of the countries surveyed. This could have significant geopolitical implications, since the first country or countries to practice gene editing for higher intelligence could have an enormous first-mover advantage. Whatever countries start practicing gene editing for higher intelligence will have far more geniuses per capita, which will greatly increase le... (read more)

7
Ryan Beck
2y
That's really interesting, thanks! I wonder why India is so supportive of it in comparison to other countries.

What's the point of extending an infant's life by a single day? If the infant in question has some sort of terminal illness that will inevitably cause them to die in infancy, prolonging their life by a single day seems extremely cruel. It would do nothing but prolong the infant's suffering.

2
ethankennerly
2y
Mark, you're right; I had no intention of prolonging a miserable life. I intended to ask about extending an infant's healthy and pleasant life by one day.

There's also the psychedelics in problem-solving experiment. The experiment involved having groups of engineers solve engineering problems while on psychedelics, in order to see whether the psychedelics would enhance their performance.

I already posted this in the post about EAG sessions about AI, but I'm reposting it since I think it's extremely important.

What is the topic of the session?

Suffering risks, also known as S-risks

Who would you like to give the session?

Possible speakers could be Brian Tomasik, Tobias Baumann, Magnus Vinding, Daniel Kokotajlo, or Jesse Clifton, among others.

What is the format of the talk?

The speaker would discuss some of the different scenarios in which astronomical suffering on a cosmic scale could emerge, such as risks from malevolent actors, a near-miss in A... (read more)

What is the topic of the talk?

Suffering risks, also known as S-risks

Who would you like to give the talk?

Possible speakers could be Brian Tomasik, Tobias Baumann, Magnus Vinding, Daniel Kokotajlo, or Jesse Clifton, among others.

What is the format of the talk?

The speaker would discuss some of the different scenarios in which astronomical suffering on a cosmic scale could emerge, such as risks from malevolent actors, a near-miss in AI alignment, and suffering-spreading space colonization. They would then discuss possible strategies for reducing S-risks, and so... (read more)

Brian Tomasik wrote something similar about the risks of slightly misaligned artificial intelligence, although it is focused on suffering risks specifically rather than on existential risks in general.

2
Gavin
2y
I want a word which covers {x-risk, s-risk}, "Existential or worse".

Two Russians I know of who are affiliated with Effective Altruism are Alexey Turchin and Anatoly Karlin. You may want to try to contact them to see if you can convince them to emigrate. Alexey Turchin's email is available on his website and he can be messaged on Reddit, and Anatoly Karlin can be contacted via email, Reddit, Twitter, Discord, and Substack.

8
RyanCarey
2y
Anatoly Karlin is a pro-war Russian nationalist - it makes no sense to encourage him to leave.

Your epistemic maps seem like a useful idea, since it would make it easier to visualize the most important cause areas for where we should push. Alexey Turchin created a number of roadmaps related to existential risks and AI safety, which seem similar to what you're talking about creating. You should consider making an epistemic map of S-risks, or risks of astronomical suffering.  Tobias Baumann and Brian Tomasik have written a number of articles on S-risks, which might help you get started. I also found this LessWrong article on worse than death scen... (read more)

1
Harrison Durland
2y
Thanks for the suggestion and links, I'll be looking further into those! Is there some kind of specific question within the S-risk literature that you think would be good to focus on?

This article series on the Age of Malthusian Industrialism may provide some insight on what the next dark age might realistically look like. One possible way an upcoming dark age could be averted is through radical IQ augmentation via gene editing/embryo selection.

One animal welfare strategy EAs should consider promoting in the short term is getting meat eaters to eat meat from larger animals instead of smaller ones, i.e. beef instead of chicken and fish. With larger animals, it takes fewer animals to produce a unit of meat compared to smaller animals. Vitalik Buterin has argued that doing this may be 99% as good as veganism. Brian Tomasik compiled this chart of the amount of direct suffering that is caused by consuming various animal products, and beef and dairy are at the bottom.  For lacto-ovo vegetarians, t... (read more)
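A minimal sketch of the size logic, proxying direct suffering by days of farmed life per kg of edible meat (the lifespan and yield figures below are rough illustrations, not values from Tomasik's chart):

```python
# Why larger animals mean less direct suffering per unit of meat:
# fewer animal-days of farmed life go into each kg. Numbers are ballpark.

animals = {
    #               (days alive, edible kg per animal)
    "broiler chicken": (42, 1.5),
    "farmed fish":     (365, 0.5),
    "beef cattle":     (450, 250.0),
}

for name, (days, kg) in animals.items():
    print(f"{name}: {days / kg:.1f} animal-days per kg")
# beef cattle come out around 2 animal-days/kg, versus ~28 for chicken
# and ~730 for a small farmed fish
```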

4
Fai
2y
I actually told some people to do this kind of diet, even though I feel very uncertain about it. I was always baffled by the fact that in Asia, when a lot of people speak of "cutting meat consumption", they start by cutting the meat of cows. When I tried to convince them that they should do the reverse, they looked extremely surprised. It's kind of a cultural thing here that cutting cow's meat first is seen as standard; everyone kind of "knows it has to be the case".

I did a reverse image search on it, and I found a map that seems to have the same data for France and Germany that was posted in early 2014.

3
isabel
2y
Oh, good idea. Reverse image searching resulted in me finding the same version claiming that it's 2007 data. So maybe that's partially reflecting differences in how people responded to the financial crisis in particular? I decided to find some TFR data from Eurostat and recreate this map for some more recent years. The France-Germany gap has been decreasing in visual saliency: 2014 is still pretty visible, but 2019 is less so (though there is still some aggregate TFR difference between France and Germany). The data doesn't go far enough back for me to be able to check the original map, but it doesn't seem particularly implausible.

[Maps: 2014 TFR - Eurostat; 2019 TFR - Eurostat]

On the topic of the Amish, I found this article "Assortative Mating, Class, and Caste". In the article, Henry Harpending and Gregory Cochran argue that the Amish are undergoing selection pressure for increased "Amishness" which is essentially truncation selection.  The Amish have a practice known as "Rumspringa" in which Amish young adults get to experience the outside world, and some fraction of Amish youths choose to leave the Amish community and join the outside world every generation. The defection rate among the Amish has been decreasing over tim... (read more)

3
isabel
2y
I think your comment does a really good job of illustrating the difficulty in determining which groups and circumstances are selecting on what traits, as the two examples of unusually strong selection on fertility that you bring up are the Amish and the French, which have been on opposite ends of fertility behavior. It's not impossible that both of these groups are selecting more strongly on fertility than everyone else, but it is somewhat counterintuitive.

I agree that the Amish are selecting on something, but that something isn't necessarily a preference for having more children. The paper you linked also lists "affinity for work, perseverance, low status competition, respect for authority, conscientiousness, and community orientation" as other characteristics that may be being selected for among the Amish. If the Amish are being selected for ~conformity and community orientation rather than a desire to have more kids irrespective of circumstances, then if circumstances change at the community level (for example, if it becomes more difficult to purchase farmland, as is already happening to the community in Lancaster County, or if the Amish stop being exempt from the requirement that children stay in school until they are 16, which some people are pushing for), the Amish fertility rate could decline further than it already has.

The French case seems somewhat more compelling: because of contraception and norms around family sizes, the people who had larger families in France would be people who intrinsically valued larger families, and so selection would be in the direction of higher fertility preferences more directly, rather than high fertility being a result of conforming to local norms. That being said, two centuries of selection in the direction of people who want kids more than average hasn't been enough to bring French fertility above replacement, merely above average for Europe.

Do you know which year the map from Breeder's Revenge that you

I found this Facebook group "Effective Altruism Memes with Post-Darwinian Themes".

These aren't entirely EA-related, but I also found this subreddit with memes related to transhumanism.

3
Ramiro
2y
I'm very pleased w the meme on pleasure

Here are two comics about utilitarianism.

Happy by SMBC
Fate of Humanity by Merryweather Comics
2
Ramiro
2y
C'mon, SMBC is hors-concours