
Correction: The confidence interval calculations don't propagate the uncertainty correctly, leading to over-confidence in the analysis.

A follow-up post addresses some of the flaws of this analysis, as well as the outcome: How my view on using games for EA has changed.

Note: highly speculative case study, with a lot of uncertainty over the numbers. 

In this post, I try to estimate the cost-effectiveness of making Mineralis, an upcoming Japanese-style role-playing video game (JRPG) about travel, altruism, and personal growth.

Although the gameplay is typical, many aspects and ideas are inspired by EA concepts. I'm writing this post before the game's release, so I will make some assumptions/predictions; I'm planning a follow-up post that examines how well they hold up.

The game tries to push the boundaries of the JRPG genre in an experimental way, by having a strong focus on meaningfulness and cutting out all forms of filler content. It could be thought of as a "moonshot project".

Summary: 

  • Estimated conversion rate for EA participation or long-term altruistic lifestyle changes per player:
    0.1% (90% CI 0-11%). In other words, one person per ~1500 sales. 
  • Total development costs: ~1200 hours, ~2000 USD, and some opportunity costs.
  • The average similar game sells about 42 750 copies, but the variance is enormous; there is a 55% chance of selling fewer than 2500 copies.
  • Expected cost of ~1793 USD per EA participation or lifestyle-change conversion (with a pessimistic upper bound of ~65 000 USD per conversion).

 

Background (skip this to avoid spoilers):

Following a bout with Covid, I've spent the vast majority of my free time during the past 9 months making Mineralis.

In Mineralis, the player follows the life of a military researcher, as they seek to end a conflict between two warring nations; one is orderly and highly technological, while the other is rugged, poor, and led by rebels.

The protagonist holds a lot of power over the conflict, and makes decisions that impact its outcome. In the process, they can choose to become sympathetic to one or neither of the belligerents.

As they find out, the worldviews that are presented in the beginning are not the entire truth. The player is challenged to update them in a way that is morally more defensible, and may end up feeling conflicted about supporting the impoverished rebel nation instead.

The reason I'm making the game is twofold:

  1. I'm frustrated at the shallowness of NPCs and story in most role-playing games - they're a huge missed opportunity in my opinion.
  2. I want to find out if video games are viable as a medium for exposing people to EA ideas and concepts.

Analysis:

Costs:

Firstly, the time cost. I've invested around 700-800 hours into the development work. Additionally, planning/studying has taken around 100 hours, marketing/release preparation around 60, and feedback collection around 40. Including miscellaneous tasks, around 1000 hours seems like a fairly reasonable ballpark estimate. I'd estimate I'm about 85% done with the work (90% CI 70-94%), which brings the total time of making Mineralis to around 1200 hours.

In that time I've implemented 440 features, written 300 pages of dialogue, designed a deep combat system, created dozens of maps, and made hundreds of events.

Given a median developer salary of 80 000 USD / year (roughly 40 USD per hour), those ~1200 hours put the time cost of Mineralis at around 50 000 USD.

There are also monetary costs. The greatest so far are marketing (~800 USD) and dialogue editing/proofreading (~680 USD). With other miscellaneous work and advertising, the total monetary budget is close to 2000 USD. (The project is somewhat funding constrained, due to high uncertainty about its cost-effectiveness and marginal benefit.) 

Although I've been very efficient, there have been significant opportunity costs, and some social costs as well. I am grateful to my friends and partner for their patience throughout this experiment. 

Benefits:

There are numerous "hints" towards EA concepts, although the game is not labeled or branded as an "EA game". The characters introduce and debate ideas related to cost-effectiveness, wealth inequality, prioritization, marginal effectiveness, opportunity cost, career planning, longtermism, AI risk, nuclear risk, and large-scale conflict. 

There also are potentially relevant non-EA ideas related to e.g. philosophy, sources of wisdom, the nature of existence, mental wellbeing and depression, social anxiety, education, loss of loved ones, transhumanism, diversity, equity, equality, racism, societal structures, decision-making, teamwork, and conflict resolution. 

The specific subset of discussions/ideas depends on the route (which depends on player decisions). 

My hope is that players will be intrigued enough to further explore and learn about some of these ideas.

Cost-effectiveness:

Let's analyze possible upsides first. 

Based on initial feedback, I'd expect the EA ideas to be of interest to around 20% of players (90% CI 2-35%). I'd expect some of them to be interested enough to actually look up more material about a topic outside the game sooner than they otherwise would have. One test player indicated such action, and my best guess for the size of that subset is 25% (90% CI 15-45%). Further, some of those players may stumble upon the EA movement and read an article or two, where they otherwise wouldn't have. My speculative guess for that subset is 10% (90% CI 4-85%).

A portion of those people may join the movement (defined as joining one event or writing one post here), or take the GWWC pledge, or change to a more effective career, or start a lifelong habit of philanthropy (defined as donating at least one percentage point more of their net income than their peers). My wild guess is that the conversion rate from an article into actions taken is 20% (90% CI 5-65%).

Of course, this funnel is full of assumptions. But it's one data point to consider. 

Putting all those estimates together, Fermi-estimate style (and assuming, for now, that one can simply multiply CIs like this; a rough sampling-based alternative is sketched after the list), we get the following figures:

  • Total probability to study outside of the game: 5% (90% CI 0.3-20%) - one for every 30 copies sold. 
  • Total probability to look into EA: 0.5% (90% CI 0.01-17%) - one for every 300 copies sold. 
  • Total probability to participate or make lifestyle changes: 0.1% (90% CI 0-11%) - one for every 1500 copies sold. 
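
For illustration, here is a minimal Monte Carlo sketch of how the funnel uncertainty could be propagated instead of multiplying interval endpoints (the issue flagged in the correction at the top). It assumes the four stages are independent and lognormally distributed, with 5th/95th percentiles matching the stated 90% CIs; both assumptions are my own simplifications, and the resulting median will not exactly match the point estimates above.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

def stage(low, high):
    """Sample a funnel stage as a lognormal whose 5th/95th percentiles
    match the stated 90% CI (a modelling choice, not from the post)."""
    z = 1.6449  # standard-normal quantile for the 5th/95th percentiles
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * z)
    return np.clip(rng.lognormal(mu, sigma, N), 0.0, 1.0)

# Funnel stages and 90% CIs from the post, as fractions of players
interested = stage(0.02, 0.35)  # finds the EA ideas interesting
looks_up   = stage(0.15, 0.45)  # studies a topic outside the game
finds_ea   = stage(0.04, 0.85)  # stumbles upon EA material
acts       = stage(0.05, 0.65)  # joins, pledges, or changes lifestyle

conversion = interested * looks_up * finds_ea * acts  # per-player rate

print(f"median: {np.median(conversion):.3%}")
print(f"90% interval: {np.percentile(conversion, 5):.3%} - "
      f"{np.percentile(conversion, 95):.3%}")
```

Under these assumptions, the 90% interval of the product is not the product of the individual interval endpoints, which is one way the naive CI multiplication goes wrong.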

The number of sold copies is higher than the number of players. Based on Steam playtime data, I estimate that ~15% of purchased games are never played and ~20% are played for less than 15 minutes. That leaves roughly 65% of copies actually played (1 - 0.15 - 0.20), so I estimate that three sales will result in two players.

Some set of players might be inspired to make a one-time donation, but otherwise not change their views or altruistic disposition. I think the overall impact of those players is negligible.

All in all, a 0.1% conversion rate to lifestyle changes seems like a reasonable ballpark to work with. I've played a few thousand games, and there have been a handful that caused me to update my views and habits.

What about the distribution of sales?

Based on a data set of Steam sales, the (very rough) likelihood that a similar game sells more than a given number of copies is:

copies sold | >2500 | >10 000 | >50 000 | >100 000 | >500 000
likelihood  | 45%   | 30%     | 18%     | 11%      | 3%

Based on the data set, the expected value is (very roughly) 42 750 sales. Plugging that into the guesstimate above, we can (in theory) expect ~29 players to make lifestyle changes.

Given the combined development cost of ~52 000 USD and that average sales figure, we get an expected cost of ~1793 USD for a single lifestyle change or active EA conversion.
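
As a quick arithmetic check of that central figure (a sketch using only the point estimates, ignoring the uncertainty):

```python
expected_sales = 42_750          # rough expected value from the sales data set
players_per_sale = 2 / 3         # three sales ≈ two players (see above)
conversion_per_player = 0.001    # 0.1% central estimate
total_cost_usd = 52_000          # ~50 000 USD time cost + ~2 000 USD monetary cost

conversions = expected_sales * players_per_sale * conversion_per_player
print(conversions)                   # ≈ 28.5, rounded to ~29 in the text
print(total_cost_usd / conversions)  # ≈ 1825 USD; with the rounded ~29, ~1793 USD
```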

However, you can see that the variance is huge. There is a 55% chance to go practically unnoticed. Considering the low marketing budget of Mineralis, I fear that the visibility may skew towards the lower end of that range.

Based on pre-marketing analytics data, I currently expect to have organic monthly averages of around 48 000 impressions, 13 000 page visits, 240 wishlists, and 50 purchases. That would lead to ~1200 sales over the first two years.

It's very difficult to guess the effect of marketing and events, but we can reasonably use that as a pessimistic lower bound. It would put the expected cost of a conversion at a maximum of ~65 000 USD.
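
The same arithmetic with the organic-only sales projection gives that pessimistic bound:

```python
# Pessimistic scenario: ~1200 organic sales over the first two years
conversions = 1_200 * (2 / 3) * 0.001  # ≈ 0.8 conversions
print(52_000 / conversions)            # ≈ 65 000 USD per conversion
```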

Fingers crossed.

Open questions:

  1. Should the game break the fourth wall and direct interested players to an EA resource? Players hate losing immersion and even mentioning the name could be taken the wrong way by someone. Then again, I'd assume that anyone who e.g. clears the game with specific parameters would be interested.
  2. Would EA benefit from funding the advertisement and marketing costs of the game? It would be particularly relevant right now, due to the opportune timing of the early October Steam Next Fest, which Mineralis is taking part in.
  3. Although I personally enjoy working on games as a creative medium, it "feels" potentially very ineffective compared to, say, improving the UX and marketing funnel/tools of the AMF or GWWC websites or something. Any thoughts? What is the cost-effectiveness of typical EA meta-work?

If you're interested in Mineralis or want to give it a try, please wishlist it on Steam! Any feedback on the game or this post would be most welcome.
https://store.steampowered.com/app/1948210/Mineralis/ 

Comments (2)



This seems like a very cool project, thanks for sharing! I agree that this type of project can be considered a "moonshot", which implies that most of the potential impact lies in the tail end of possible outcomes. Consequently, the estimates become very tricky. If the EV is dominated by a few outlier scenarios, reality will most likely turn out to be underwhelming.

I'm not sure if one can really make a good case that working on such a game is worthwhile from an impact perspective. But looking at the state of things and the community as a whole, it does still seem preferable to me that somebody somewhere puts some significant effort into EA games (sorry for the pun).

Also, to add one possible path to impact this might enable: it could be yet another channel to deliberately nudge people toward in order to expose them to key EA ideas in an entertaining way (HPMOR being another such example). So your players might not all end up being "random people"; a portion of them might be preselected in a way.

Lastly, it seems like at least 5-10 people (and probably considerably more) in EA are interested or involved in game development. I'm not aware of any way in which this group is currently connected - it would probably be worth setting that up. Maybe something low on overhead such as a Signal group would work as a start?

Hey Markus, thank you for sharing your thoughts!

Considering that I need to cast a relatively wide net to find even thousands of players, the messaging about Mineralis is deliberately not very selective. I think that the overall demographic will end up being relatively random within the set of people who are gamers and enjoy JRPGs. Whether JRPG fans have a predisposition towards EA - that remains to be seen. :)

I would be super humbled if the game ends up being a success and can be considered useful for the purpose of actually teaching EA concepts. However, I would imagine that the vast majority of people learn better through reading articles or listening to podcasts.

Setting up a group for discussion sounds like a valuable idea. I added a personal task for that and expect to get to it late next week. I'm thinking Discord might make the most sense, given that many developers already use it anyway.
