
I'm prepping a new upper-level undergraduate/graduate seminar on 'AI and Psychology', which I'm aiming to start teaching in Jan 2025. I'd appreciate any suggestions that people might have for readings and videos that address the overlap of current AI research (both capabilities and safety) and psychology (e.g. cognitive science, moral psychology, public opinion). The course will have a heavy emphasis on the psychology, politics, and policy issues around AI safety, and will focus more on AGI and ASI than on narrow AI systems. Content that focuses on the challenges of aligning AI systems with diverse human values, goals, ideologies, and cultures would be especially valuable. Ideal readings/videos would be short, clear, relatively non-technical, recent, and aligned with an EA perspective. Thanks in advance! 


3 Answers

Perplexity was recommended to me for finding course materials.

You can search academic databases, as well as perform broad searches on the web or YouTube.

Provide context, as you would with ChatGPT: mention that you're building a course on artificial intelligence and psychology, and give details about it.

Thanks! Appreciate the suggestion.

This course sounds cool! Unfortunately, there doesn't seem to be much directly relevant material out there.

This is a stretch, but I think there's probably some cool computational modeling to be done with human value datasets (e.g., 70,000 responses to variations on the trolley problem). What kinds of universal human values can we uncover? https://www.pnas.org/doi/10.1073/pnas.1911517117 
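
To make that concrete, here's a minimal sketch of one way to look for shared value dimensions, assuming a hypothetical trolley_responses.csv where each row is a participant and each numeric column is their rating of one dilemma variant. This is just an illustration of the general approach, not the linked paper's actual pipeline:

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one row per participant, one numeric column per
# dilemma variant (e.g., a 1-7 acceptability rating for each scenario).
responses = pd.read_csv("trolley_responses.csv")

# Standardize so every dilemma contributes on the same scale.
X = StandardScaler().fit_transform(responses.values)

# PCA recovers the main axes of variation in moral judgments; if a few
# components explain most of the variance, that hints at a small set of
# shared value dimensions underlying the responses.
pca = PCA(n_components=5)
scores = pca.fit_transform(X)
print(pca.explained_variance_ratio_)
```

If a handful of components explain most of the variance, that's weak evidence for a small set of shared moral dimensions; splitting the data by country or culture could then test how universal those dimensions really are.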

For digestible content on technical AI safety, Robert Miles makes good videos. https://www.youtube.com/c/robertmilesai

Abby - good suggestions, thank you. I think I will assign some Robert Miles videos! And I'll think about the human value datasets.

A few quick ideas:
1. On the methods side, I find the potential use of LLMs/AI as research participants in psychology studies intriguing (not necessarily related to safety). It may sound ridiculous at first, but the studies are genuinely interesting (a minimal sketch of the setup follows this list).
From my post on studying AI-nuclear integration with methods from psychology: 

[Using] LLMs as participants in a survey experiment, something that is seeing growing interest in the social sciences (see Manning, Zhu, & Horton, 2024; Argyle et al., 2023; Dillion et al., 2023; Grossmann et al., 2023).

2. You may be interested in, or get good ideas from, the Large Language Model Psychology research agenda (safety-focused). I haven't gone into it, so this is not an endorsement.

3. Then there are comparative analyses of human and LLM behavior, e.g. the Human vs. Machine paper (Lamparth, 2024), which compares humans' and LLMs' decision-making in a wargame. I do something similar with a nuclear decision-making simulation, but it's not yet in paper/preprint form.
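
To make the LLM-as-participant idea from point 1 concrete, here is a minimal sketch using the openai Python client. The model name, persona, and survey item are illustrative assumptions on my part, not the setup from the cited studies:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical persona and survey item, purely for illustration.
PERSONA = "You are a 45-year-old teacher from Ohio answering a survey honestly."
ITEM = ("On a scale of 1 (strongly disagree) to 7 (strongly agree): "
        "'AI development should be paused until safety is better understood.' "
        "Reply with a single number.")

def ask_participant(persona: str, item: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in whatever you have access to
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": item},
        ],
        temperature=1.0,  # sampling variation stands in for participant noise
    )
    return response.choices[0].message.content

# Varying the persona across calls approximates a demographically
# diverse sample of respondents.
print(ask_participant(PERSONA, ITEM))
```

The papers cited above discuss when (and whether) responses gathered this way track real human survey data, which could itself make for a good seminar discussion.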

Helpful suggestions, thank you! Will check them out.

1 Comment

This sounds very interesting and closely aligns with my personal long-term career goals. Will the seminar content be made available online for those looking to complete the course remotely, or is it purely in-person?

Yesterday I donated my 100,000th lifetime dollar.[1] A few weeks ago was the fifth anniversary of my GWWC pledge. And this is Donation Celebration Week. So high time to step back and reflect on giving. First a poem[2] (audio here). Then some facts, figures and reflections. Poem What should I do with all this money? Serious stuff, not so funny. Fancy luxuries, shiny and new, To that I say: pooh-pooh. A badminton racquet, a winter jacket. A climbing shoe, a trip or two. But that doesn’t count, I protest! I can’t work all day, we all need a rest. My younger self, to GiveWell donated. But now that’s passé, antiquated! Swamped by phylum Arthropoda, Digital minds, or even odder. What about patient philanthropy? If AI automates all industry, Returns could be massive, With investments passive. But money in a century, Won’t matter if all things are free. Even less if we’re all dead, “The lightcone’s ours,” the AIs said. So to the Longterm Future Fund (let’s make extinction moribund) I entrust this stack of cash. I thought it through, it wasn’t rash. The ghost of a child I could have saved, Says to me: “You’re depraved. We’re dying now, by the droves, For want of vaccines, clean cookstoves.” I didn’t kill you, don’t blame me. Your plight is plain for all to see. I don’t know how they could let You all die, sans bed net. But “they” is me, and I am they. You’re dead. I mustn’t look away. Is my fancy philosophy, More than just sophistry? And to all you shrimps and fishes, Sorry to deny your wishes. When our cruelty finishes, You’ll be in our hearts, not our dishes. Prose I have not previously been very public about donations because it feels strange to talk about and not all that informative (here I am, yet another of many EAs donating to the same few causes). But inspired by e.g. Jeff and Julia, Peter Singer, and Richard Chappel I am more persuaded talking publicly about donations is a healthy norm to promote. So here it goes. Timeline * 2018: I learn about EA