Geoffrey Miller

Psychology Professor @ University of New Mexico
6031 karma · Joined Jan 2017 · Working (15+ years) · Albuquerque, NM, USA



Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', & 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.

How others can help me

Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards longtermism, X risks and GCRs, and sentience, (2) insights for AI alignment & AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning, (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.

How I can help others

I have 30+ years of experience in behavioral sciences research, and have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.


'Micro-aggressions' aren't a thing. Psychology research debunked this strange, unempirical, activist concept years ago, e.g. this paper. The fact that the DEI industry continues to promote the concept of 'microaggressions' shows that they can profit from it, not that it is empirically grounded.

As for sexual advances being 'unique to EA', that would be news to any biologist who studies courtship in any of the 60,000 species of sexually reproducing vertebrates.


In hunter-gatherers, trust is generally established under conditions of scarcity, when people develop networks of social reciprocity for food-sharing, child-care, sickness-care, and catastrophe-avoidance. Only when people face starvation, illness, disasters, or warfare can they learn who they can really trust. And the trust functions mostly for risk-pooling.

By contrast, only under conditions of local abundance -- e.g. unusually high-productivity hunter-gatherer environments (such as Pacific Northwest salmon & shellfish areas), or farming with agricultural surpluses -- do we see a lot of top-down hierarchical coercion, with persistent inequality, divisions of labor, despotism, harems, large-scale warfare, etc.

SWK - thanks for sharing this. Good to see Santos talking about this. I think his message was somewhat undermined by three factors, which you alluded to: (1) commencement speeches are usually optimistic, uplifting platitude-dumps written to be as inoffensive as possible, (2) the Doomsday Clock just doesn't have much credibility or gravitas any more, since its meaning has been diluted from the original 'global nuclear war' focus to all sorts of other concerns, (3) mentioning climate change as if it's an 'existential risk' either betrays a misunderstanding of what 'existential risk' means, or vastly over-estimates the likely severity of climate change.

Amber - thanks for sharing these candid reactions to EAG. I suspect they're fairly common reactions among many attendees.

I would just contextualize your reactions by pointing out that these reactions are very common in many young people attending conferences in any scientific field! I don't think most of them are unique to EA. Most are generic to almost any kind of intellectually focused conference.

When I was in grad school and in my early academic career, I attended a wide variety of conferences in cognitive science, machine learning, genetics, decision theory, evolutionary psychology, primatology, etc. Across all these fields, many young researchers had similar reactions to the conferences, concerning the reactions you mentioned -- high career stakes, imposter syndrome, confronting one's limitations, everything being busy and frantic, having some bad interactions, etc. The social challenges of conferences are especially acute for people (like me) with Aspergers, introversion, and/or social awkwardness.

So I think EAs need to be careful about a couple of things.

First, it's important for EAs to attend a variety of non-EA conferences, so we realize that a lot of the challenges of EA conferences aren't unique to EA, but are just generic to what happens when you put hundreds of smart, motivated, ambitious young adults together in the same space and time, jostling for status, opportunities, recognition, and connections.

Second, it's important for EAs not to over-correct our conference structures in reaction to these fairly common 'conference blues'. If there were easy ways to make conferences less demanding, stressful, and exhausting, other sciences would probably have already discovered and implemented them. EAs aren't likely to solve conference-planning problems that have eluded the best other sciences for decades. (Maybe we can, but I'm being Bayesian here.)

Having said that, EA conferences do have one unusual challenge, as you mentioned: 'you spend a lot of time talking about depressing things'. That really is a unique part of EA. Many behavioral sciences conferences do spend a lot of time talking about 'social problems' that loom large within current political narratives, such as prejudice, discrimination, stereotyping, inequality, etc. But none of these are existential risks, so it's easy to act like they're Very Important Problems Indeed during the conference talks (e.g. at a typical social psychology conference), but to set them aside during evening socializing (since everybody knows they're not actually massive risks to our entire species and civilization). By contrast, EAs deliberately seek out large-scale, neglected, tractable problems, and this can impose unique emotional challenges during conferences.

I think if EAs are concerned about making EA conferences more pleasant and rewarding, we should focus a fair amount of attention on this last issue - how to stay positive, social, and motivated even when the intellectual and emotional content of EA talks and discussions is uniquely alarming, and/or uniquely likely to induce 'empathy fatigue'.

Chi - good points. 

Having attended over 100 science conferences in my 30+ years in academia, I've realized that attending live talks has some hidden advantages over just 'catching up later with the videos', or 'just doing 1-to-1 conversations'.

First, if a high enough proportion of people at a conference attend any given talk, they all have something to react to with friends, to discuss at social gatherings, and to serve as an ice-breaker when meeting strangers (e.g. 'Hey, what did you think of that talk by X about Y?'). This works best for plenary talks where there's only one talk happening at a time, so everybody's coordinated on that topic as something important to consider.

Second, live talks induce a level of collective emotional engagement, a kind of mass hypnosis, or a tribal ritualistic mind-set, that heightens the affective impact of the talk. Watching the talk later at 1.75x speed might be just as 'efficient' at a strictly cognitive level. But being there live can help the ideas sink deeper into one's heart and brain, as it were.

Third, attending talks incentivizes speakers to do a good job of prepping their talk, clarifying their ideas, simplifying their data, and polishing their narrative. If nobody shows up, it's disheartening. If lots of people show up, it's very encouraging -- and it sets up expectations that one must do one's best in future talks at the same conference. Obviously there's a game-theoretic problem here that people can do 'social loafing' or 'free-riding' on the talk attendance of others, without paying the costs oneself. But this can be offset if a research community has strong social norms that, if you attend a conference, you really should be attending lots of live talks. People can notice who's in the audience, pulling their weight, encouraging the speakers to excel.

Fourth, great live talks can be memorable events that can reinforce one's intellectual and ethical identity, and make one feel connected to an ongoing tradition of ideas, and to the 'life of the mind' in general. Decades later, I can remember seeing live conference talks by inspiring thinkers like Peter Singer, Richard Dawkins, Daniel Dennett, Steven Pinker, Leda Cosmides, William D. Hamilton, Margo Wilson, Geoffrey Hinton, etc. They're among the highlights of my intellectual life. And they're motivating in a way that reading their books isn't. Conversely, there have been plenty of talks by major thinkers and up-and-coming researchers that I failed to attend, and that I'll regret not seeing. 

So, in a very narrow, cognitivist sense, it might seem more 'efficient' to watch conference talks later, as if they're no different than any other YouTube video or podcast. But that misses out on at least four -- and probably many more -- hidden benefits of actually being there in person: being socially engaged, part of the tribe, incentivizing the speaker, and setting up precious memories that can keep inspiring one for decades afterwards.

Jason -- great example. A lot of it's in the framing!

Yonatan- thanks for sharing. It's a very good clip and makes good points. Respect to Naftali Bennett.

I found myself wishing that our American political leaders were this smart, and this articulate about understanding new technologies' risks.

Lenny - thanks for sharing this info.

China isn't in the G7, but it is in the G20. (Ditto for India.)

Hopefully whatever the G7 does with regard to AI policy will be considered more broadly by the G20, so that China and India, the world's two most populous countries, are included -- rather than any G7 initiatives being perceived as just the US and the West bullying other countries into AI submission.

Larks - yes, it is hard for any organization that has a strong political leaning to develop more self-awareness about that leaning, to understand why it might create some problems in cause area assessment and movement-building, and to develop a realistic strategy for outreach and reform that tries to balance out its partisanship.

I guess one strategy might be to frame this as a matter of understanding barriers to achieving more widespread adoption of EA values and priorities. The objections and concerns that conservatives might have about animal welfare, global public health, global poverty, and X risks might be quite different from those that liberals typically have.

For example, in terms of geopolitics, conservatives might often have a more positive-sum view of economic growth, but a more zero-sum view of nation-state rivalries (and a more negative view of 'global coordination' through institutions such as the United Nations). This might lead to a view of global poverty issues that prioritizes promoting the rule of law, efficient markets, and entrepreneurship in poor countries, rather than reallocation of existing resources (e.g. direct cash transfers). It might also lead to more concern about arms races between nation-states with regard to X risks (e.g. AI, nuclear weapons), and to a profound skepticism about the effectiveness of government regulation or global coordination.

Thus, if conservatives hear EAs making arguments about these issues, without understanding the conservative mind-set at all, they might be turned off from EA as talent, as donors, and as advocates -- when they might have actually contributed significantly.

OK! Thanks for the explanation.
