Geoffrey Miller

Psychology Professor @ University of New Mexico
Working (15+ years of experience)

Bio


Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', & 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.

How others can help me

Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards longtermism, X risks and GCRs, and sentience; (2) insights for AI alignment & AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning; (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.

How I can help others

I have 30+ years of experience in behavioral sciences research and have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.

Comments

Scope sensitivity, I guess, is the triumph of 'rational compassion' (as Paul Bloom talks about it in his book Against Empathy), quantitative thinking, and moral imagination, over human moral instincts that are much more focused on small-scope, tribal concerns. 

But this is an empirical question in human psychology, and I don't think there's much research on it yet. (I hope to do some in the next couple of years though).

Yanatan -- I like your homunculus-waking-up thought experiment. It might not resonate with all students, but everybody's seen The Matrix, so it'll probably resonate with many.

Kiu -- I agree. It reminds me of the old quote from Rabbi Nachman of Breslov (1772-1810): 

“If you won’t be better tomorrow than you were today, then what do you need tomorrow for?”

https://en.wikipedia.org/wiki/Nachman_of_Breslov

Jeff -- these examples, of whether to pass through puberty, and whether to become a parent, raise some profound issues (a la Derek Parfit) about the continuity of personal identity. They're basically decisions about whether to become a new person, and they're basically irreversible. So, yes, it's very hard to know whether such a profound change is 'the right choice'... because it's a choice that basically extinguishes the person making the choice, and creates a new person who's stuck with the choice.

Which can sound very scary, or very liberating and transformative, depending on one's risk tolerance.

There's a lot of interesting writing about the evolutionary biology and evolutionary psychology of genetic selfishness, nepotism, and tribalism, and why human values descriptively focus on the sentient beings that are more directly relevant to our survival and  reproductive fitness -- but that doesn't mean our normative or prescriptive values should follow whatever natural selection and sexual selection programmed us to value.

'If I take EA thinking, ethics, and cause areas more seriously from now on, how can I cope with the guilt and shame of having been so ethically misguided in my previous life?'

or, another way to put this:

'I worry that if I learn more about animal welfare, global poverty, and existential risks, then all of my previous meat-eating, consumerist status-seeking, and political virtue-signaling will make me feel like a bad person'

(This is a common 'pain point' among students when I teach my 'Psychology of Effective Altruism' class)

'Why don't EA's main cause areas overlap at all with the issues that dominate current political debates and news media?'

(This could be an occasion to explain that politically controversial topics tend not to be (politically) tractable or neglected (in terms of media coverage), and are often limited in scope, i.e. focused on domestic political squabbles and symbolic virtue-signaling.)

Yes, I think we're in agreement -- the Stuart Russell definition is much closer to my meaning (1) for 'intelligence' (i.e. a universal cognitive ability shared across individuals) than to my meaning (2) for 'intelligence' (i.e. the psychometric g factor).

The trouble comes mostly when the two are conflated, e.g. when we imagine that 'superintelligence' will basically be like an IQ 900 person (whatever that would mean), or when we confuse 'general intelligence' as indexed by the g factor with truly 'domain-general intelligence' that could help an agent achieve whatever it wants, in any domain, given any possible perceptual input.

There's a lot more to say about this issue; I should write a longer form post about it soon.

Vaidehi and Amber -- very helpful and insightful post, with good suggestions.

Another obstacle is that busy academics in EA-adjacent fields face several career disincentives to writing forum posts -- especially if they're tenure-track, teaching big courses, or running big lab groups.

Every hour we spend  writing EA Forum posts or comments is an hour that we're not writing a grant application, a journal paper, or a book. Those count for our tenure, promotion, and annual reviews. Forum posts don't really count for anything in our academic jobs.

How to overcome this? I'm not sure, but it might be good to brainstorm about how to lay down a smoother path from EA Forum posts to academic journal articles, e.g. academics writing posts could flag them with something explicit like 'This is a rough draft of some ideas I might turn into a journal article for journal X or Y; I'd especially welcome feedback that helps with that goal'.

Another option is to develop a couple of online academic journals called 'Effective Altruism' or 'Longtermism' or 'Existential Risk Review' or whatever, which would basically publish polished, referenced, peer-reviewed versions of EA Forum posts. The article selection criteria and review process could be quite streamlined; to most academics, if it looks like a journal, has a journal-style website and submission procedure, and is genuinely peer-reviewed to some reasonable degree, then it counts as a legit journal. Also, the editors of such journals could keep an eye on which EA Forum posts look interesting, upvoted, and much commented-upon, and could invite the writers of those posts to revise them into contributions to the journal.

Basically, if I write a 9,000 word post for EA Forum, I can't list it on my academic CV, and it counts for absolutely nothing in an academic career. But if I publish exactly  the same post as a peer-reviewed article in an EA journal, it counts for a lot.

The downside is that formal EA academic journals would be a departure from the usual EA ethos of very fast, effective, interactive, community-wide discussion, because traditional journals involve a huge amount of wasted time and effort (non-public reviews, slow review times, slow publication times, journals behind paywalls, little opportunity for visible feedback appended to the articles, etc). So we'd need to develop some new models for 'peer-reviewed academic articles' that combine the best of EA Forum communication style with the career-building credibility of traditional journal articles. 

There are probably some other downsides to this suggestion, e.g. it would require some pretty smart and dedicated EAs to devote a fair amount of time to being journal editors and reviewers. However, we do get academic credit for doing those jobs! And it would not be very expensive to top up an aspiring academic's pay with an editorship supplement. (I know lots of junior academics who would happily spend 30 hours a month editing a new journal if they could make an extra $30k a year doing so.)

Hi Charles, you seem to be putting a lot of weight on a short, quick note that I made as a comment on a comment on an EA Forum post, based on my personal experiences in an Econ department (I wasn't 'mentioning credentials'; I was offering observations based on experience).

(You also included some fairly vague criticisms of my previous posts and comments that could be construed as rather ad hominem.)

You are correct that there are many subfields within Econ, some of which challenge standard models, and that Econ has some virtues that other social sciences often don't have. Fair enough. 

The question remains: why is Econ largely ignoring the likely future impact of AI on the economy  (apart from some specific issues such as automation and technological unemployment), and treating most of the economy as likely to carry on more or less as it is today?

Matt and I offered some suggestions based on what we see as a few intellectual biases and blind spots in Econ. Do you have any other suggestions?
