Looks to me like Yudkowsky was wrong and there was a fire alarm. (To be fair, if you had asked me in 2017, there is no way I'd have predicted that AI risk would become as mainstream as it is now.)
Really great post, agree with almost everything, thanks for writing!
(More speculatively, it seems plausible to me that many EAs have worse judgement of character than average, because e.g. they project their good intentions onto others.)
Agreed. Another plausible reason is that system 1 / gut instincts play an important role in character judgment, but many EAs dismiss their system 1 intuitions more, or experience them less strongly, than the average human. This is partly due to selection effects (EA appeals more to analytical people), but perhaps also because several EA principles emphasize putting more weight on reflective, analytical reasoning than on instincts and emotions (think of the heuristics-and-biases literature, or the fact that several top cause areas, like AI, aren't intuitive at all).[1]
That's at least what I experienced first-hand when interacting with a dangerous EA several years ago. I met a few people who had negative impressions of this person's character but couldn't really back them up with any concrete evidence or reasoning, and this EA continued to successfully deceive me for more than a year.[2] Personally, I didn't have a negative impression in the first place (partly because the concept of an untrustworthy EA was completely outside my hypothesis space back then), so other people were clearly able to pick up on something that I couldn't.
To be clear, I'm not saying that reflective reasoning is bad (it's awesome) or that we should now all trust our gut instincts when it comes to character judgment. Gut instincts are clearly fallible, and the average human certainly isn't amazing at character judgment; for example, roughly half of US voters have voted for clearly dangerous people like Trump.
FWIW, my experiences with this person were a major inspiration for this post.
Epistemic status: I wrote this quickly (by my standards) and I have ~zero expertise in this domain.
It seems plausible that language models such as GPT-3 inherit (however haphazardly) some of the traits, beliefs, and value judgments of the human raters doing RLHF. For example, Perez et al. (2022) find that models trained via RLHF are more prone to making statements corresponding to Big Five agreeableness than models not trained via RLHF. This is presumably (in part) because human raters gave positive ratings to any behavior exhibiting such traits.
Given this, it seems plausible that selecting RLHF raters for more desirable traits—e.g., low malevolence, epistemic virtues / truth-seeking, or altruism—would result in LLMs instantiating more of these characteristics. (In a later section, I will discuss which traits seem most promising to me and how to measure them.)
It’s already best practice to give human RLHF raters reasonably long training instructions and have them undergo some form of selection process. For example, for InstructGPT, the instruction manual was 17 pages long and raters were selected based on their performance in a trial which involved things like ability to identify sensitive speech (Ouyang et al., 2022, Appendix B). So adding an additional (brief) screening for these traits wouldn’t be that costly or unusual.
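To make this concrete, here is a minimal sketch of what such an additional screening step could look like if bolted onto an existing trial-based selection process. The trait measures, score ranges, and cutoffs below are purely hypothetical illustrations, not recommendations.

```python
# Hypothetical trait screen appended to an existing rater-selection pipeline.
# Trait names, score ranges (1-5 Likert averages), and cutoffs are illustrative only.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    trial_accuracy: float        # performance on the existing trial task (0-1)
    dark_tetrad: float           # mean self-report score, lower is better (1-5)
    open_minded_thinking: float  # mean AOT-style score, higher is better (1-5)

def passes_screen(c: Candidate,
                  min_trial_accuracy: float = 0.8,
                  max_dark_tetrad: float = 2.5,
                  min_aot: float = 3.5) -> bool:
    """True if the candidate passes both the existing trial and the
    additional (hypothetical) trait screen."""
    return (c.trial_accuracy >= min_trial_accuracy
            and c.dark_tetrad <= max_dark_tetrad
            and c.open_minded_thinking >= min_aot)

candidates = [
    Candidate("A", trial_accuracy=0.90, dark_tetrad=1.8, open_minded_thinking=4.2),
    Candidate("B", trial_accuracy=0.85, dark_tetrad=3.4, open_minded_thinking=3.9),
]
selected = [c.name for c in candidates if passes_screen(c)]
print(selected)  # -> ['A']
```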
Talking about stable traits or dispositions of LLMs is inaccurate. Given different prompts, LLMs simulate wildly different characters with different traits. So the concept of inheriting dispositions from human RLHF raters is misleading.
We might reformulate the path to impact as follows: If we train LLMs with RLHF raters with traits X, then a (slightly) larger fraction of characters or simulacra that LLMs tend to simulate will exhibit the traits X. This increases the probability that the eventual character(s) that transformative AIs will “collapse on” (if this ever happens) will have traits X.
I don’t know how the RLHF process works in detail. For example, i) to what extent is the behavior of individual RLHF raters double-checked or scrutinized, either by AI company employees or other RLHF raters, after the initial trial period is over, and ii) do RLHF raters know when the trial period has ended? In the worst case, trolls could behave well during the initial trial period but then, e.g., deliberately reward offensive or harmful LLM behavior for the lulz.
Fortunately, I expect that at most a few percent of people would behave like this. Is this enough to meaningfully affect the behavior of LLMs?
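As a very rough, purely illustrative answer to that question, here is a back-of-the-envelope sketch; the troll prevalence, the troll mislabeling rate, and the baseline error rate are all made-up assumptions.

```python
# Back-of-the-envelope: what fraction of preference labels end up corrupted
# if a small share of raters are trolls? All numbers are made-up assumptions.

troll_fraction = 0.02     # assumed share of raters who behave adversarially
troll_flip_rate = 0.5     # assumed share of a troll's labels that are deliberately bad
honest_error_rate = 0.1   # assumed share of honest labels that are wrong anyway

corrupted = (troll_fraction * troll_flip_rate
             + (1 - troll_fraction) * honest_error_rate)
print(f"~{corrupted:.1%} of labels corrupted or mistaken")  # ~10.8%

# Under these (made-up) assumptions, deliberate trolling adds only about one
# percentage point on top of ordinary labeling noise; whether that matters
# depends on how robust reward-model training is to label noise.
```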
Generally, it could be interesting to do more research on whether and to what extent the traits and beliefs of RLHF raters influence the type of feedback they give. For example, it would be good to know whether RLHF raters who score highly on some dark triad measure do, in fact, systematically reward more malevolent LLM behavior.
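The core analysis of such a study could be fairly simple. Below is a sketch, assuming we had (hypothetical) per-rater dark-triad self-report scores and, for each rater, the share of their preference judgments that rewarded a completion flagged as harmful; all data shown is fabricated for illustration.

```python
# Sketch: do raters with higher dark-triad scores reward harmful completions
# more often? The data below is fabricated for illustration.

import numpy as np
from scipy.stats import pearsonr

# One entry per rater.
dark_triad_scores = np.array([1.4, 2.1, 2.8, 3.5, 1.9, 4.2])   # mean self-report score (1-5)
harmful_reward_rate = np.array([0.01, 0.02, 0.03, 0.06, 0.01, 0.09])  # share of judgments rewarding harmful outputs

r, p_value = pearsonr(dark_triad_scores, harmful_reward_rate)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```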
Which traits precisely should we screen RLHF raters for? I make some suggestions in this section below.
Below, I list a few suggestions for traits we might want to select for. All of them arguably have the following characteristics:
Ideally, any trait we want to include in an RLHF rater selection process should have these characteristics. The reasons for these criteria are obvious, but I briefly elaborate on them in this footnote.[2]
This isn’t a definitive or exhaustive list by any means. In fact, which traits to select for, and how to measure them (perhaps even developing novel measurements) could arguably be a research area for psychologists or other social scientists.
One common operationalization of malevolence is the dark tetrad traits, comprising Machiavellianism, narcissism, psychopathy, and sadism. I have previously written about the nature of dark tetrad traits and the substantial risks they pose. It seems obvious that we don’t want any AIs to exhibit these traits.
Fortunately, these traits have been studied extensively by psychologists. Consequently, brief and reliable measures of these traits exist, e.g., the Short Dark Tetrad (Paulhus et al., 2020) or the Short Dark Triad (Jones & Paulhus, 2014). However, since these are merely self-report scales, it’s unclear how well they work in situations where people know they are being assessed for a job.
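For illustration, scoring such a brief self-report scale is mechanically simple: average the Likert responses per subscale, reverse-coding where needed. The sketch below uses an invented item-to-subscale assignment, not the actual SD4/SD3 scoring key.

```python
# Generic scoring of a short Likert-type self-report scale with reverse-keyed
# items. The subscale layout and reverse-keyed items are invented for
# illustration and do NOT correspond to the actual SD4/SD3 scoring key.

LIKERT_MAX = 5  # items answered on a 1-5 scale

# Hypothetical mapping: subscale -> (item indices, indices of reverse-keyed items)
SUBSCALES = {
    "machiavellianism": ([0, 1, 2], []),
    "narcissism":       ([3, 4, 5], [5]),
    "psychopathy":      ([6, 7, 8], [7]),
    "sadism":           ([9, 10, 11], []),
}

def score(responses: list[int]) -> dict[str, float]:
    """Return the mean score per subscale, reverse-coding where needed."""
    out = {}
    for name, (items, reversed_items) in SUBSCALES.items():
        vals = [
            (LIKERT_MAX + 1 - responses[i]) if i in reversed_items else responses[i]
            for i in items
        ]
        out[name] = sum(vals) / len(vals)
    return out

print(score([2, 1, 3, 4, 2, 5, 1, 2, 1, 1, 2, 1]))
```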
(I outlined some of the benefits of truthfulness above, in the third bullet point of this section.)
It’s not easy to measure how truthful humans are, especially in assessment situations.[3] Fortunately, reliable measures exist for some epistemic virtues that correlate with truthfulness, for example, the argument evaluation test (Stanovich & West, 1997) or the actively open-minded thinking scale (e.g., Baron, 2019). See also Stanovich and West (1998) for a classic overview of various measures of epistemic rationality.
Still, none of these measures are all that great. For example, some of these measures, especially the AOT scale, have strong ceiling effects. Developing more powerful measures would be useful.
Pragmatic operationalization: forecasting ability
One possibility would be to select for human raters above some acceptable threshold of forecasting ability, as forecasting skill correlates with epistemic virtues. The problem is that very few people have a public forecasting track record, and measuring people’s forecasting ability is a lengthy and costly process.
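If one did go this route, a standard way to quantify a forecasting track record is a proper scoring rule such as the Brier score: the mean squared difference between probabilistic forecasts and binary outcomes (lower is better). A minimal sketch with made-up forecasts:

```python
# Brier score: mean squared error between probabilistic forecasts and binary
# outcomes; lower is better (0 = perfect, 0.25 = always saying 50%).
# The forecasts and outcomes below are made up for illustration.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.2, 0.7, 0.4]   # stated probabilities that each event occurs
outcomes  = [1,   0,   1,   1]     # what actually happened (1 = occurred)

print(brier_score(forecasts, outcomes))  # 0.125
```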
In some sense, altruism or benevolence is just the opposite of malevolence[4], so perhaps we could just use one or the other. HEXACO honesty-humility (e.g., Ashton et al., 2014) is one very well-studied measure of benevolence. Alternatives include the self-report altruism scale (Rushton et al., 1981) or behavior in economic games such as the dictator game.
Cooperativeness, however, is a somewhat distinct construct. Others have written about the benefits of making AIs more cooperative in this sense. One measure of cooperativeness is the Cooperative and Competitive Personality Scale (Lu et al., 2013).
Harm aversion could also be desirable because it might translate into (some form of) low-impact AIs. On the other hand, (excessive) instrumental harm aversion can come into conflict with consequentialist principles.
As mentioned above, this is by no means an exhaustive list. There are many other traits which could be desirable, such as empathy, tolerance, helpfulness, fairness, intelligence, effectiveness-focus, compassion, or wisdom. Other possibly undesirable traits include spite, tribalism, partisanship, vengefulness, or (excessive) retributivism.
Ashton, M. C., Lee, K., & De Vries, R. E. (2014). The HEXACO Honesty-Humility, Agreeableness, and Emotionality factors: A review of research and theory. Personality and Social Psychology Review, 18(2), 139-152.
Baron, J. (2019). Actively open-minded thinking in politics. Cognition, 188, 8-18.
Evans, O., Cotton-Barratt, O., Finnveden, L., Bales, A., Balwit, A., Wills, P., ... & Saunders, W. (2021). Truthful AI: Developing and governing AI that does not lie. arXiv preprint arXiv:2110.06674.
Forsyth, L., Anglim, J., March, E., & Bilobrk, B. (2021). Dark Tetrad personality traits and the propensity to lie across multiple contexts. Personality and Individual Differences, 177, 110792.
Lee, K., & Ashton, M. C. (2014). The dark triad, the big five, and the HEXACO model. Personality and Individual Differences, 67, 2-5.
Lu, S., Au, W. T., Jiang, F., Xie, X., & Yam, P. (2013). Cooperativeness and competitiveness as two distinct constructs: Validating the Cooperative and Competitive Personality Scale in a social dilemma context. International Journal of Psychology, 48(6), 1135-1147.
Perez, E., Ringer, S., Lukošiūtė, K., Nguyen, K., Chen, E., Heiner, S., ... & Kaplan, J. (2022). Discovering Language Model Behaviors with Model-Written Evaluations. arXiv preprint arXiv:2212.09251.
Rushton, J. P., Chrisjohn, R. D., & Fekken, G. C. (1981). The altruistic personality and the self-report altruism scale. Personality and Individual Differences, 2(4), 293-302.
Stanovich, K. E., & West, R. F. (1997). Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology, 89(2), 342.
Stanovich, K. E., & West, R. F. (1998). Individual differences in rational thought. Journal of Experimental Psychology: General, 127(2), 161.
Though, to be fair, this snapshot of the instruction guidelines seems actually fair and balanced.
i) is important because otherwise the trait wouldn’t be very consequential; ii) is obvious; iii) is more or less necessary because we otherwise couldn’t convince AI companies to select for these traits, either because they would disagree or because they would fear public backlash; iv) is required because if we can’t reliably measure a trait in humans, we obviously cannot select for it. The shorter the measures, the cheaper they are to administer, and the easier it is to convince AI companies to use them.
Though dark tetrad traits correlate with a propensity to lie (Forsyth et al., 2021).
For instance, HEXACO honesty-humility correlates highly negatively with dark triad traits (e.g., Lee & Ashton, 2014).
Thanks, these are good points.
I do think it's plausible that (some!) EA leaders made substantial mistakes. Spotting questionable behavior or character is hard but not impossible, especially if you have known someone for ten years, worked very closely with them, and were basically in a mentor-mentee relationship with them (as, my impression is, Will was). I don't fault other people, e.g. those who rarely or never interacted with SBF, for not having done more.
Either people ignored warning signs -> clear mistake. Or they didn't notice anything even though others (e.g., Habryka) had noticed signs -> suboptimal character judgment. I think the ability to spot such people and not let them into positions of power is extremely important.
Of course, the crucial question is what could have been done even if you had known with 100% certainty that SBF was not at all trustworthy. It's plausible to me that not much could have been done because SBF had already accumulated so much power. So it's plausible that no one made substantial mistakes. On the other hand, no one forced Will to write to Musk and vouch for SBF, which perhaps wasn't wise if you had concerns about SBF. Then again, it's perhaps also reasonable to gamble on SBF given the inevitable uncertainty about others' character and the large possible upsides. Perhaps I'm just suffering from hindsight bias.
Also, just to be clear, I agree that much of the criticism against EAs and EA leaders we see in the media is unfairly exaggerated. I'm wary of contributing to what I perceive as others unjustly piling on a movement of moral activists, probably fueled by do-gooder derogation, and so on (as Geoffrey mentions in his comment).
(ETA: Sorry for not engaging with everything you wrote. I'm short on time and I'll try to elaborate on my views in a week or so.)
Just to clarify my position: I think it's clear that we put SBF on a pedestal and promoted him as someone worth emulating; I don't really know what to say to someone who disagrees with this. (Perhaps you interpret the phrase "put someone on a pedestal" differently; yes, we didn't build statues of SBF, I agree.)
But I also think that basically almost all of this has been completely understandable. I mean, the guy makes 10 billion dollars and wants to donate it all? One would have to be deranged not to try to emulate him, not to want to learn from him, and not to paint him as highly morally praiseworthy. I certainly tried emulating and learning from SBF (with little success, obviously). At the time, I didn't think that we went too far. I even thought the sticker thing was kinda funny (if weird and inadvisable), but I didn't really give it much thought at all at the time.
At EAG London 2022, someone [ETA: this was an individual acting without the consent of the organizers] distributed hundreds of stickers depicting Sam on a bean bag with the text "what would SBF do?". To my knowledge, flyers depicting individual EAs had never before been distributed at EAG. (Also, such behavior seems generally unusual to me: imagine going to a conference and seeing hundreds of flyers and stickers all depicting one guy. Doesn't that seem a tad culty?)
On the 80k website, they had several articles mentioning SBF as someone highly praiseworthy and worth emulating.
Will vouched for SBF "very much" when talking to Elon Musk.
Sam was invited to many discussions between EA leaders.
There are probably more examples.
Generally, almost everyone was talking about how great Sam was, how much good he had achieved, and how, as a good EA, one should try to be more like him.
I wanted to push back on this because most commenters seem to agree with you. I disagree that the writing style on the EA Forum, on the whole, is bad. Of course, some people here are not the best writers and their writing isn't always easy to parse. Some would definitely benefit from trying to make their writing easier to understand.
For context, I'm also a non-native English speaker and during high school, my performance in English (and other languages) was fairly mediocre.
But on the whole, I think there are few posts and comments that are overly complex. In fact, I personally really like the nuanced writing style of most content on the EA Forum. Also, criticizing the tendency to "overly intellectualize" seems a bit dangerous to me: I'm afraid that if you go down this route, you shut down discussions of complex issues and risk creating a more Twitter-like culture of shoehorning complex topics into simplistic tidbits. I'm sure this is not what you want, but I worry that it will be an unintended side effect. (FWIW, in the example thread you give, no comment seemed overly complex to me.)
Of course, in the end, this is just my impression and different people have different preferences. It's probably not possible to satisfy everyone.
Thanks for sharing, I thought this was interesting and relatable.
For what it's worth, you seem like a really committed person to me, so I wouldn't call you lazy (if you're "lazy", how could you work 50-hour weeks and perform well in the military?). In some cheeky sense, you might have benefited from being lazier and "giving up" sooner, rather than pushing yourself to make it work for years, always hoping that change was around the corner.
In my early twenties I also tried to study computer science and programming for similar reasons (AI safety research, earning-to-give potential). I basically gave up after 1-2 weeks because I did not like it. In some sense, you could say that my own laziness saved me from making the potentially huge mistake of pursuing something for a few years and then burning out, getting stuck in the sunk cost fallacy, etc.
Though that's usually not how I view it. Over the years I've often blamed myself for being a lazy quitter and told myself I should have tried harder back then to study CS. On the other hand, stories like yours are (weak) evidence that it probably wouldn't have ended well and that I should be glad I continued to study where my personal fit was higher, even though it was (way) less impactful.
Anyways, enough rambling about myself. In my book, you tried really hard to have impact and showed real courage in sharing your story. I think you're cool. :)