Introduction
When a system is made safer, its users may be willing to offset at least some of the safety improvement by using it more dangerously. A seminal example is that, according to Peltzman (1975), drivers largely compensated for improvements in car safety at the time by driving more dangerously. The phenomenon in general is therefore sometimes known as the “Peltzman Effect”, though it is more often known as “risk compensation”.[1] One domain in which risk compensation has been studied relatively carefully is NASCAR (Sobel and Nesbit, 2007; Pope and Tollison, 2010), where, apparently, the evidence for a large compensation effect is especially strong.[2]
In principle, more dangerous usage can partially, fully, or more than fully offset the extent to which the system has been made safer, holding usage fixed. Making a system safer thus has an ambiguous effect on the probability of an accident once its users have adjusted their behavior.
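To make the ambiguity concrete, here is a minimal sketch of a toy model, in my own notation (it is not drawn from any of the work cited below). Let $s$ denote the system's safety level, let $h(s)$ be the accident probability per unit of usage, with $h'(s)<0$, and let $u(s)$ be the usage level its users choose, with $u'(s)>0$ to reflect compensation. If the overall accident probability is roughly proportional to usage, then

$$P(s) \approx u(s)\,h(s), \qquad \frac{dP}{ds} = u'(s)\,h(s) + u(s)\,h'(s).$$

Writing $\varepsilon \equiv -\,\frac{d\ln u}{d\ln h}$ for the elasticity of usage with respect to per-use risk, the sign of $dP/ds$ matches the sign of $\varepsilon - 1$: offsetting is partial, full, or more than full as $\varepsilon$ is less than, equal to, or greater than one. In the first case a safety improvement still reduces the accident probability on net; in the last it raises it.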
There’s no reason why risk compensation shouldn’t apply in the existential risk domain, and we arguably already have examples in which it has. For example, reinforcement learning from human feedback (RLHF) makes AI more reliable, all else equal; so it may be making some AI labs comfortable releasing more capable, and therefore perhaps more dangerous, models than they would otherwise release.[3]
Yet risk compensation per se appears to have gotten relatively little formal, public attention in the existential risk community so far. There has been informal discussion of the issue: e.g. risk compensation in the AI risk domain is discussed by Guest et al. (2023), who call it “the dangerous valley problem”. There is also a cluster of papers and works in progress by Robert Trager, Allan Dafoe, Nick Emery-Xu, Mckay Jensen, and others, including these two and some not yet public but largely summarized here, exploring the issue formally in models with multiple competing firms. In a sense what they do goes well beyond this post, but as far as I’m aware none of t