Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', & 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, and crypto.
Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards longtermism, X risks and GCRs, and sentience; (2) insights for AI alignment & AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning; and (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.
I have 30+ years' experience in behavioral sciences research and have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.
David - you make some excellent points here. I agree that being agreeable vs. disagreeable might be largely orthogonal to playing the 'inside game' vs. the 'outside game'. (Except that highly disagreeable people trying to play the inside game might get ostracized from inside-game organizations, e.g. fired from OpenAI.)
From my evolutionary psychology perspective, if agreeableness always worked for influencing others, we'd have all evolved to be highly agreeable; if disagreeableness always worked, we'd all have evolved to be highly disagreeable. The basic fact that people differ in the Big Five trait of Agreeableness (we psychologists tend to capitalize well-established personality traits) suggests that, at the trait level, there are mixed costs and benefits for being at any point along the Agreeableness spectrum. And of course, at the situation level, there are also mixed costs and benefits for pursuing agreeable vs. disagreeable strategies in any particular social context.
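To make that frequency-dependent logic concrete, here's a minimal replicator-dynamics sketch (a toy illustration of my own, with made-up payoff numbers) using the classic Hawk-Dove game as a rough stand-in for disagreeable vs. agreeable strategies. When conflict is costly, neither pure strategy can take over, and the population settles at a stable mix:

```python
import numpy as np

# Toy Hawk-Dove replicator dynamics (illustrative parameters only).
# 'Hawk' stands in for a disagreeable strategy, 'Dove' for an agreeable one.
V, C = 2.0, 6.0      # V = value of the contested resource, C = cost of escalated conflict (V < C)
BASELINE = 10.0      # background fitness, keeps payoffs positive for the discrete update

payoffs = np.array([
    [(V - C) / 2, V],      # Hawk vs. Hawk, Hawk vs. Dove
    [0.0,         V / 2],  # Dove vs. Hawk, Dove vs. Dove
])

p = 0.9  # initial share of Hawks in the population
for _ in range(2000):
    pop = np.array([p, 1.0 - p])
    fitness = BASELINE + payoffs @ pop   # expected fitness of each strategy
    mean_fitness = pop @ fitness
    p = p * fitness[0] / mean_fitness    # discrete replicator update

print(f"Equilibrium share of Hawks: {p:.3f} (analytic prediction V/C = {V / C:.3f})")
```

With these made-up numbers, Hawks stabilize at about a third of the population: each strategy does best when it's rare, which is exactly the kind of balanced costs and benefits that can maintain variation in a trait like Agreeableness.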
So, I think there are valid roles for people to use a variety of persuasion and influence tactics when doing advocacy work, and playing the outside game. On X/Twitter for example, I tend to be pretty disagreeable when I'm arguing with the 'e/acc' folks who dismiss AI safety concerns - partly because they often use highly disagreeable rhetoric when criticizing 'AI Doomers' like me. But I tend to be more agreeable when trying to persuade people I consider more open-minded, rational, and well-informed.
I guess EAs can do some self-reflection about their own personality traits and preferred social interaction styles, and adopt advocacy tactics that are the best fit, given who they are.
Zvi - FWIW, your refutation of the winning essay on AI, interest rates, and the efficient market hypothesis (EMH) seemed very compelling, and I'm surprised that essay was taken seriously by the judges.
Global capital markets don't even seem to have any idea how to value crypto protocols that might be moderately disruptive to fiat currencies and traditional finance institutions. Some traders think about these assets (or securities, or commodities, or whatever the SEC thinks they are, this week), but most don't pay any attention to them. And even if most traders thought hard about crypto, there's so much regulatory uncertainty about how they'll end up being handled that it's not even clear how traders could 'price in' issues such as how soon Gary Gensler will be replaced at the SEC.
Artificial Superintelligence seems vastly more disruptive than crypto, and much less salient (at least until this year) to most asset managers, bankers, traders, regulators, etc.
Jason - thanks for the news about the winning essays.
If appropriate, I would appreciate any reactions the judges had to my essay about a moral backlash against the AI industry slowing progress towards AGI. I'm working on refining the argument, so any feedback would be useful (even if only communicated privately, e.g. by email).
Parental effort can spur career effort.
I started working a lot harder after having my first kid (in 1996), to gain the financial and career security needed to raise a family.
For years before that, I'd procrastinated about turning my PhD into a popular science book. Then when the baby arrived, I knew I'd have to make enough for a down payment on a house, so I quickly secured a much better book deal and finished the manuscript soon thereafter (this was 'The Mating Mind', published in 2000, which ironically was about mating effort rather than parenting effort).
Many such cases. Of course, parenting takes a lot of time. But it can also motivate people to reallocate time and energy from leisure activities (e.g. watching TV, social media, gaming) to career activities.
I generally agree that, for relationships that mix professional and sexual roles, clear and specific conflicts of interest are a bigger problem than vague 'power differentials' (which could include virtually any difference between partners in wealth, status, prestige, fame, age, influence, intelligence, citation count, job seniority, etc.).
OK, that sounds somewhat plausible, in the abstract.
But what would you propose to slow down AI development and reduce extinction risk? Or do you think that risk is so low that it's not worth trying to manage it?
Tom - you raise some fascinating issues, and your Venn diagrams, however impressionistic they might be, are useful visualizations.
I do hope that AI safety remains an important part of EA -- not least because I think there is some important, under-explored overlap between AI safety and the other key cause areas: global health & development, and animal welfare.
For example, I'm working on an essay about the animal welfare implications of AGI. Ideally, advanced AI wouldn't just be 'aligned' with human interests, but also with the interests of the other 70,000 species of sentient vertebrates (and the sentient invertebrates). But very little has been written about this so far. So, AI safety has a serious anthropocentrism bias that needs challenging. The EAs who have worked on animal welfare could have a lot to say about AI safety issues in relation to other species.
Likewise, the 'e/acc' cult (which dismisses AI safety concerns and advocates developing AGI ASAP) often argues that there's a moral imperative to develop AGI in order to promote global health and development (e.g. 'solving longevity' and 'promoting economic growth'). EA people who have worked on global health and development could contribute a lot to the debate over whether AGI is strictly necessary to promote longevity and prosperity.
So, the Venn diagrams need to overlap even more!
This is a great idea, and I look forward to reading the diverse views on the wisdom of an AI pause.
I do hope that the authors contributing to this discussion take seriously the idea that an 'AI pause' doesn't need to be fully formalizable at a political, legal, or regulatory level. Rather, its main power can come from promoting an informal social consensus about the serious risks of AGI development, among the general public, journalists, politicians, and the more responsible people in the AI industry.
In other words, the 'Pause AI' campaign might get most of its actual power and influence from helping to morally stigmatize reckless AI development, as I argued here.
Thus, the people who argue that pausing AI isn't feasible, or realistic, or legal, or practical, may be missing the point. 'Pause AI' can function as a Schelling point, or focal point, or coordination mechanism, or whatever you want to call it, with respect to public discourse about the ethics of AI development.
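To unpack the Schelling-point framing (a toy example of my own, with made-up payoffs, not anything drawn from the Pause AI campaign itself): in a pure coordination game, several possible demands could each work if everyone rallied behind them, so salience is what breaks the tie.

```python
import numpy as np

# Toy coordination game (illustrative payoffs only). The two "asks" the public
# could rally behind are a simple 'Pause AI' demand vs. a detailed regulatory package.
asks = ["Pause AI", "Detailed regulatory package"]
payoffs = np.array([
    [3, 0],   # both coordinate on 'Pause AI'   | miscoordination
    [0, 3],   # miscoordination                 | both coordinate on the detailed package
])

# Both diagonal cells are Nash equilibria: if everyone is already coordinating on
# ask i, a lone deviator to the other ask earns less. Payoffs alone don't pick
# between the equilibria; salience (the Schelling/focal point) does.
for i, ask in enumerate(asks):
    deviation_payoff = payoffs[1 - i, i]
    is_equilibrium = payoffs[i, i] >= deviation_payoff
    print(f"Everyone coordinating on '{ask}': Nash equilibrium = {is_equilibrium}")
```

The simple, memorable ask is the one a dispersed public can actually converge on, even without any formal enforcement mechanism.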
There are universal human psychological adaptations associated with moral disgust, so it's not that hard for 'moral disgust' to explain broad moral consensus across very different cultures. For example, anthropological research finds that murder and rape within one's own society are considered morally disgusting across almost all cultures.
It's not that big a stretch to imagine developing a global consensus that leverages these moral-disgust instincts to stigmatize reckless AI development, as I argued here.
Holly - this is an excellent and thought-provoking piece, and I agree with most of it. I hope more people in EA and AI Safety take it seriously.
I might just add one point of emphasis: changing public opinion isn't just useful for smoothing the way towards effective regulation, or pressuring AI companies to change their behavior at the corporate policy level, or raising money for AI safety work.
Changing public opinion can have a much more direct impact in putting social pressure on anybody involved in AI research, AI funding, AI management, and AI regulation. This was a key point in my 2023 EA Forum essay on moral stigmatization of AI, and the potential benefits of promoting a moral backlash against the AI industry. Given strong enough public opinion for an AI Pause, or against runaway AGI development, the public can put direct pressure on people involved in AI to take AI safety more seriously, e.g. by socially, sexually, financially, or professionally stigmatizing reckless AI developers.