Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', & 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.
Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards longtermism, X risks and GCRs, and sentience; (2) insights for AI alignment & AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning; (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.
I have 30+ years experience in behavioral sciences research, have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.
Pablo - thanks for sharing the link to this excellent essay.
Apart from being a great intro to EA for people made semi-skeptical by the FTX and OpenAI situations, the essay showcases a writing style worthy of broader emulation among EAs. It's light, crisp, clear, funny, and fact-based -- everything that EA writing for the public should be.
What exactly are you referring to when you mention 'miniature cults of personality that lead to extremely serious bad outcomes'?
Do you mean actually 'extremely serious bad outcomes', in the scope-sensitive sense, like millions of people dying?
Jamie - thanks for sharing these helpful data.
Do you have any impressions about the role of social media in raising EA awareness among teens?
Or any thoughts about the potential of using the main social media platforms that teens tend to use (e.g. TikTok, Instagram, Snapchat, and YouTube Shorts), as distinct from those that older adults tend to use (e.g. Twitter/X, Facebook)?
On the one hand, we can stereotype TikTok users as shallow, superficial, unlikely to be EA material, etc; on the other hand, I can imagine a lot of key EA ideas could actually be conveyed very quickly and clearly in TikTok or YouTube short videos.
Julia - I appreciate this initiative, and just want to add a caveat.
I think with any policies and procedures for 'reporting concerns' or whistleblowing, it's important, as in any 'signal detection problem', to balance the risks and costs of false positives (e.g. false accusations, slander from disgruntled or mentally ill employees) against the risks and costs of false negatives (missing bad behavior or bad organizations).
My impression is that EA has suffered some important and salient false negatives (e.g. missing SBF's apparent sociopathy & FTX frauds). But some EA individuals and organizations, arguably, have also been subject to a wide range of false allegations -- especially by certain individuals who have a very long history of false allegations against many former associates and former employers.
It can be very easy to be taken in by a plausible, distressed, emotionally intense whistleblower - especially if one has little professional experience of handling HR-type disputes, or little training in relevant behavioral sciences (e.g. psychiatry, clinical psychology). This is an especially acute danger if the whistleblower has any of the Cluster B personality disorders (antisocial, narcissistic, borderline, histrionic disorders) that tend to be associated with multi-year histories of false allegations against multiple targets.
And these problems may be exacerbated if there are financial incentives for making false allegations (e.g. 'financial support for people reporting problems'), without many social or professional costs of doing so (e.g. if the false allegations are made from behind a cloak of anonymity, and their falseness is never reported to the EA community).
Thus, I would urge any EAs who set themselves up as adjudicators of whistleblowing cases to get some serious training in recognizing some of the red flags that may indicate false allegations -- especially in assessing any patterns of persistent false accusations, mental illness, or personality disorders.
It only takes one or two people with serious borderline personality disorder (for example), who are willing to make multiple false allegations, to ruin the reputations of multiple individuals and organizations -- especially if the people trying to investigate those allegations are too naive about what might be going on. The same caveat applies to any EAs who take it upon themselves to do any independent 'investigative reporting' of allegations against individuals or organizations.
Stan - those are legitimate concerns; there might be some circularity in judging general intelligence in relation to understanding of EA concepts in a classroom context.
I do have a pretty good sense of my university undergrads' overall intelligence distribution from teaching many other classes on many topics over the last 23 years, and knowing the SAT and ACT distributions of the undergrads.
Within each class, I guess I'm judging overall intelligence mostly from participation in class discussions and online discussion forums, and from term paper proposals, revisions, and final drafts.
As I mentioned, it would be nice to have some more quantitative, representative data on how IQ predicts capacity to understand EA concepts -- and whether having certain other traits (e.g. Aspy-style thinking, Openness, etc) might add some more predictive validity over and above IQ.
PS I should add that, when I taught EA concepts to my undergrads at Chinese University of Hong Kong - Shenzhen (CUHK-SZ) (c. 2020-2021), which is a much more cognitively selective university than the American state university where I usually teach, the Chinese undergrads had a much easier time understanding the EA ideas, despite having much lower familiarity with other aspects of Euro-American culture, the charity system, the Rationalist subculture, etc.
So I take this as (weak but suggestive) evidence that cognitive ability is a major driver of ability to understand EA principles.
Also, of course, if EA principles were easy to develop and master among ordinary people, EA principles would probably have been developed and mastered much earlier historically.
Stan - this is a legitimate and interesting question. I don't know of good, representative, quantitative data that's directly relevant.
However, I can share some experiences from teaching EA content that might be illuminating and semi-relevant. I've taught my 'Psychology of Effective Altruism' course (syllabus here) four times at a large American state university where the students show a very broad range of cognitive ability. This is an upper-level undergraduate seminar restricted mostly to juniors and seniors. I'd estimate the IQ range of the students taking the course to be about 100-140, with a mean around 115.
In my experience, the vast majority of the students really struggle with central EA concepts and rationality concepts like scope-sensitivity, neglectedness, tractability, steelmanning, recognizing and avoiding cognitive biases, and decoupling in general.
I try very hard to find readings and videos that explain all of these concepts as simply and clearly as possible. Many students kinda sorta get some glimpses into what it's like to see the world through EA eyes. But very few of them can really master EA thinking to a level that would allow them to contribute significantly to the EA mission.
I would estimate that, out of the 80 or so students who have taken my EA classes, only about 3-5 of them would really be competitive for EA research jobs, or good at doing EA public outreach. Most of those students probably have IQs above about 135. So this seems to be mostly a matter of raw general intelligence (IQ), partly a matter of personality traits such as Openness and Conscientiousness, and partly a matter of capacity for Aspy-style hyper-rationality and decoupling.
So, my impression from years of teaching EA to a wide distribution of students is that EA concepts are just intrinsically really, really difficult for ordinary human minds to understand, and that only a small percentage of people have the ability to really master them in an EA-useful way. So, cognitive elitism is mostly warranted for EA.
Having said that, I do think that EAs may under-estimate how many really bright people are out there in non-elitist institutions, jobs, and cities. The really elite universities are incredibly tiny in terms of student numbers. There might be more really smart people at large, high-quality state universities like U. Texas Austin (41,000 undergrads) or U. Michigan (33,000 undergrads) than there are at Harvard (7,000 undergrads) or Columbia (9,000 undergrads). Similar reasoning might apply in other countries. So, it would seem reasonable for EAs to consider broadening our search for EA-capable talent beyond super-elite institutions and 'cool' cities and tech careers, into other places where very smart people might be found.
Well from an AI safety viewpoint, the very worst teams to be leading the AGI rush would be those that (1) are very competent, well-funded, well-run, and full of idealistic talent, and (2) don't actually care about reducing extinction risk -- however much lip service they pay to AI safety.
From that perspective, OpenAI is the worst team, and they're in the lead.
Jamie - yes, I was thinking mostly about general outreach and EA education, rather than paid ads. I could imagine a series of short videos for TikTok explaining some basic EA concepts and insights, for example.