Bio

Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', and 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, and crypto.

How others can help me

Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards longtermism, X risks and GCRs, and sentience; (2) insights for AI alignment and AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning; and (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.

How I can help others

I have 30+ years of experience in behavioral sciences research, and have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.

Comments

I'm not convinced quality has been declining, but I'm open to the possibility, and it's hard to judge.

Might be useful to ask EA Forum moderators if they can offer any data on metrics across time (e.g. the last few years), such as:

  1. overall number of EA Forum members
  2. number who participate at least once a month
  3. average ratio of upvotes to downvotes
  4. average number of upvoted comments per long post

We could also just run a short poll of EA Forum users to ask about perceptions of quality.

tl;dr: Another way to signal-boost this competition might be through prestige and not just money, by including some well-known people as judges, such as Elon Musk, Vitalik Buterin, or Steven Pinker.

One premise here is that big money prizes can be highly motivating, and can provoke a lot of attention, including from researchers/critics who might not normally take AI alignment very seriously. I agree.

But if the Future Fund really wants maximum excitement, appeal, and publicity (so that the maximum number of smart people work hard to write great stuff), then apart from the monetary prize, it might be helpful to maximize the prestige of the competition, e.g. by including a few 'STEM celebrities' as judges.

For example, this could entail recruiting a few judges like tech billionaires Elon Musk, Jeff Bezos, Sergey Brin, Tim Cook, Ma Huateng, Ding Lei, or Jack Ma; crypto leaders such as Vitalik Buterin or Charles Hoskinson; and/or well-known popular science writers, science fiction writers/directors, science-savvy political leaders, etc. And maybe, for an adversarial perspective, some well-known AI X-risk skeptics such as Steven Pinker or Gary Marcus.

Since these folks are mostly not EAs or AI alignment experts, they shouldn't have a strong influence over who wins, but their perspectives might be valuable, and their involvement would create a lot of buzz around the competition. 

I guess the ideal 'STEM celebrity' judge would be very smart, rational, open-minded, and highly respected among the kinds of people who could write good essays, but not necessarily super famous among the general public (so the competition doesn't get flooded by low-quality entries).

We should also try to maximize international appeal by including people well-known in China, India, Japan, etc. -- not just in the usual EA centers in the US, UK, EU, etc.

(This could also be a good tactic for getting these 'STEM celebrity' judges more involved in EA, whether as donors, influencers, or engineers.)

This might be a very silly idea, but I just thought I'd throw it out there...

Strongly endorse this comment.

If we really take infohazards seriously, we shouldn't just be imagining EAs casually reading draft essays, sharing them, and the ideas gradually percolating out to potential bad actors. 

Instead, we should take a fully adversarial, red-team mindset and ask: if a large, highly capable geopolitical power wanted to mine EA insights for potential applications of AI technology that could give it an advantage (even at some risk to humanity in general), how would we keep that from happening?

We would be naive to think that the intelligence agencies of various major countries interested in AI don't have at least a few analysts reading EA Forum, LessWrong, and the Alignment Forum, looking for tips that might be useful to them -- but that we might consider infohazards.

This is a pretty deep and important point. There may be psychological and cultural biases that make it hard to shift the expected likelihoods of worst-case AI scenarios much higher than they already are -- which could bias the essay contest against such arguments winning, even when they make a logically compelling case that catastrophe is more likely than currently estimated.

Maybe one way to reframe this is to treat the prediction "P(misalignment x-risk | AGI)" as also being contingent on us muddling along at the current level of AI alignment effort, without significant increases in funding, talent, insights, or breakthroughs. In other words, the probability of very bad things happening, given that AGI happens, but also given the status-quo level of effort on AI safety.
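To spell out that reframing in notation (my shorthand, not the Future Fund's official definition; 'E_0' is just a label for the status-quo level of alignment effort):

  P(misalignment x-risk | AGI)               -- the contest's stated quantity
  P(misalignment x-risk | AGI, effort = E_0) -- the same quantity, conditioned explicitly on status-quo safety effort

An argument that more funding, talent, or insights would help is then, implicitly, a claim that P(misalignment x-risk | AGI, effort >> E_0) is substantially lower than P(misalignment x-risk | AGI, effort = E_0).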

I'm partly sympathetic to the idea of allowing submissions in other forums or formats.

However, I think it's likely to be very valuable to the Future Fund and the prize judges, when sorting through potentially hundreds or thousands of submissions, to be able to see upvotes, comments, and criticisms from EA Forum, LessWrong, and the Alignment Forum, which is where many of the subject matter experts hang out. This will make it easier to identify essays that get a lot of people excited and that don't contain obvious flaws or oversights.

Yep, fair enough. I was trying to dramatize the most vehement anti-censorship sentiments in a US political context, from one side of the partisan spectrum. But you're right that there are plenty of other anti-censorship concerns from many sides, on many issues, in many countries.

These are helpful suggestions; thanks. 

They seem aimed mostly at young adults starting their careers -- which is fine, but limited to that age bracket.

It might also be helpful for an AI alignment expert to suggest some ways for mid-career or late-career researchers from other fields to learn more. That can be easier in some ways and harder in others -- we come to AI safety with our own 'insider view' of our field, and that view may entail very different foundational assumptions about human nature, human values, cognition, safety, likely X risks, etc. So, rather than learning from scratch, we may have to 'unlearn what we have learned' to some degree first.

For example, whereas young adults often start with the same few bad ideas about AI alignment, established researchers from particular fields might start with their own distinctive bad ideas about it -- and those are likely to be field-dependent: psych professors like me might have different failure modes in learning about AI safety than economics professors or moral philosophy professors.

It's hard to judge whether this bill will go anywhere (I hope it does!); it seems to have gotten very little press coverage.

If we can't get a strong bipartisan consensus on reducing GCRs, then our governance system is broken.

This is a very good post that identifies a big PR problem for AI safety research. 

Your key takeaway might be somewhat buried in the last half of the essay, so let me try to draw out the point more vividly (and maybe hyperbolically):

Tens (hundreds?) of millions of centrist, conservative, and libertarian people around the world don't trust Big Tech censorship, because it's politically biased in favor of the Left, and it exemplifies a 'coddling culture' that treats everyone as neurotic snowflakes and treats offensive language as a form of 'literal violence'. Such people see that a lot of these lefty, coddling Big Tech values have soaked into AI research, e.g. the moral panic about 'algorithmic bias' and the increased emphasis on 'diversity, equity, and inclusion' rhetoric at AI conferences.

This has created a potentially dangerous mismatch in public perception between what the more serious AI safety researchers think they're doing (e.g. reducing X risk from AGI) and what the public thinks AI safety is doing (e.g. developing methods to automate partisan censorship, to embed woke values into AI systems, and to create new methods for mass-customized propaganda).

I agree that AI alignment research that is focused on global, longtermist issues such as X risk should be careful to distance itself from 'AI safety' research that focuses on more transient, culture-bound, politically partisan issues, such as censoring 'offensive' images and ideas. 

And if we want to make benevolent AI censorship a new cause area for EA to pursue, we should be extremely careful about the political PR problems that this would raise for our movement.

This is a helpful comment; thanks. 

I'm also somewhat skeptical about whether Chainlink & other oracle protocols can really maximize reliability of data through their economic incentive models, but at least they seem to be taking the game theory issues somewhat seriously. 

But then, I'm also very skeptical about the reliability of a lot of real-world data from institutions that have their own incentives to misrepresent, overlook, or censor certain kinds of information (Google search results being a prime example).

I take your point about the difficulty of scaling any kind of data reliability checks that rely on a human judgment bottleneck, and the important role that AIs might play in helping with that.

Thanks for the suggestion about looking at data poisoning attacks!
