
Warning: Some spoilers follow


Free Guy (2021) is a romantic comedy about a bank teller named Guy who falls in love with a woman. Then, when he discovers that his entire world is just a video game, he has to be the one to save it from destruction by a maniacal video game company executive.

Many people find it weird to think about the lives of digital sentient beings as morally valuable, whether they number a hundred (as in the film) or a trillion. But the film is exhilarating, hilarious, and so relatable that you don’t ever stop to wonder, wait, why do we care about a video game character at all? He’s not even real! Yet when the main character has an existential crisis about this fact, his best friend, Buddy, compassionately says:

I say, okay, so what if I’m not real? […] Look, brother, I am sitting here with my best friend, trying to help him get through a tough time. Right? And even if I’m not real, this moment is. Right here, right now, this moment is real. I mean, what’s more real than a person tryin’ to help someone they love? Now, if that’s not real, I don’t know what is.

[Image: Guy's best friend Buddy says, "Now, if that's not real, I don't know what is"]

Free Guy effortlessly gets viewers to sympathize with its digital agents because they’re just like humans, even though they’re AIs in a video game, written as a bunch of lines of code (not whole-brain emulations, for example). They have rich, complex thoughts and lives, even as most characters lack the “free will” to deviate from their routines.

Free Guy isn’t sci-fi. It’s set in the present day, with present-day technology. And it keeps things small-scale, with limited consequences for society. It doesn’t ask: how would the economy be transformed if we had human-level AI? What if the video game characters of Free City could not only write personal essays about feminism but also share these novel contributions with the rest of society? What would it look like to scale up the digital population of Free City a billion times?

Nevertheless, it provides a glimpse of how life could be different for digitally sentient beings—in particular, what Shulman and Bostrom call “hedonic skew”. Despite living in a world like Grand Theft Auto, where bank robberies and gun violence are everyday occurrences, the characters remain upbeat and optimistic. Fortunately, they’re eventually transferred to a world built for them rather than for the entertainment of human players, where they can live out their lives in friendship and harmony.

Free Guy is the most powerful (and funniest) film about artificial consciousness that I know of. If you’re looking for an EA-relevant movie to add to your watch list, I strongly recommend Free Guy, for both its entertainment and philosophical value.


Postscript: If I were to make an actual argument in this post, it would be this: many people think that digital sentience is too weird to advocate for at this stage. Although I have not yet tried this with other people, the film Free Guy might be a promising conduit for promoting concern for digital sentient beings – when paired with relevant discussion, since the few reviews of Free Guy I've read don't touch on the moral consideration of digital sentient beings.

Comments (2)



I think what's great about Free Guy is that the AI part is not the center of the plot most of the time. Rather it's a story about some characters who find themselves in some unusual circumstances. That might not seem much different, but compare typical AI films that spend a lot of time being about AI rather than the characters. By being character-focused, I think it delivers on ideas better than most idea movies that get so caught up in the ideas they forget to tell a good story.
