
Google Doc | Author: Mati Roy | Created: 2020-11-28 | Published: 2020-12-08 | Updated: 2020-12-08 | Version: 2 | Epistemic status: 50% this is roughly right | Acknowledgement for feedback: Carl Shulman, Aahan Rashid, Kirsten Horton

Summary

Money lotteries, i.e. lotteries where you gamble money, are useful when you’ve identified donation/purchase opportunities that have increasing marginal returns beyond your budget.

Time lotteries, i.e. lotteries where multiple parts of your moral parliament gamble access to your mind, are useful when time has increasing marginal returns beyond one of your values’ budgets, at a sufficiently low premium cost for the other values. I think this is likely the case for people who are <1-20% altruistic.

Model

We can model the problem as having a fixed hour-budget to be used for fulfilling a certain value, with some of the hours spent on personal research and others spent earning money to donate. The function would be:
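Roughly, and using the same ∫R(t)⋅dt and ∫E(t)⋅dt notation as in the note under “Time lotteries” below:

$$\text{Value}(T) \;=\; \int_0^{t_R} R(t)\,dt \;\times\; \int_0^{T - t_R} E(t)\,dt$$

where T is the total hour-budget, t_R is the number of hours allocated to research, R(t) is the value of the t-th research hour, and E(t) is the amount earned in the t-th earning hour.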

Here I’m using “time” as a unit for “capacity to do work”, which also includes things like mental energy.

The research and the giving can both be very meta — for example, you can research whom you should give money to, so that they can research where to give money.

Unified vs fragmented selves

This is new terminology I’m using.

By “unified self”, I mean an agent with one overarching morality — for the purpose of this text, a utility function.

By “fragmented self”, I mean an agent with many subagents taking decisions through a mechanism such as a moral parliament.

If you’re a unified self (e.g. a pure positive hedonistic utilitarian), then you will spend all your time fulfilling your morality. You can use lotteries if you can’t otherwise gather enough money, including:

  • All the money you have
  • All the debts you could acquire
  • All the money you can earn in the future (unless the opportunity is time sensitive)

If you’re a fragmented self, then you can also use lotteries to bet money. But on top of that, you can use time lotteries, i.e. lotteries where multiple parts of your moral parliament gamble access to your mind.

For example, it seems to me like altruistic values benefit from a large amount of research and reflection. So if only 1% of your self is altruistic, then I would argue it would be better for your altruistic values to have a 20% chance of using 5% of your (life) time rather than a 100% chance of using 1% of your (life) time. I think this remains true even after adding a premium to compensate your non-altruistic values.
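To make the arithmetic concrete, with a purely illustrative assumption: if altruistic output scaled as t^1.5 in the share t of life time it gets, then

$$0.2 \times 5^{1.5} \approx 2.24 \;>\; 1 \times 1^{1.5} = 1,$$

so the gamble would more than double expected altruistic output, leaving room to pay a premium to the non-altruistic values and still come out ahead.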

Time lotteries can be difficult as they require the ability to make pre-commitments. If you use a quantum lottery, it’s easy for different altruistic agents to cooperate as they know that if they defect in their branch*, it likely means the other versions will also defect. However, an egoistic agent might not care if the other versions don’t respect their commitment as it wouldn’t impact the egoistic agent’s values.

*Note: Even if you reject the many-world interpretation of quantum mechanics, and instead think the wave function collapses randomly, if you think the world is spatially infinite (with random initial conditions), you can still know there will be other versions of you where the wave function collapsed elsewhere, and so it’s still in the interest of your altruistic values to cooperate.

In practice

Time lotteries

In terms of utilitarian values, I think time has increasing marginal returns for a long time before they start decreasing — possibly a couple of full-time years (related: Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness). So if you’re <1-20% altruistic, I think it makes sense for your different value systems to participate in a time lottery (although maybe you want to use some of your budget to verify this). If you’re 30-90% altruistic, I think you will have hit diminishing marginal returns for utilitarian values, but even if not, I think the premium charged by your non-altruistic values to enter a time lottery would likely become too high.

Note: In the model I use, even if both R(t) and E(t) have diminishing marginal returns, it can still be that ∫R(t)⋅dt * ∫E(t)⋅dt has increasing marginal returns as a function of the total time budget.

For example, with the following functions (see image below), it’s almost 50% better to have 4 units of time with 50% probability than 2 units of time with 100% probability.
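Here is a small numerical sketch of this kind of effect. The functions below are illustrative stand-ins (not necessarily the ones from the image): both marginal-return curves are diminishing, yet the value of a time budget grows faster than linearly, so the 50% gamble on 4 units beats a certain 2 units in expectation.

```python
# Numerical sketch with illustrative stand-in functions (not necessarily the
# ones from the image): both marginal-return curves are diminishing, yet the
# value of a time budget grows faster than linearly, so the gamble wins.
import numpy as np

def marginal_research(t):
    return t ** -0.25   # diminishing marginal returns (assumed form)

def marginal_earning(t):
    return t ** -0.25   # diminishing marginal returns (assumed form)

def integral(f, upper, steps=100_000):
    """Midpoint-rule integral of f from 0 to upper (sidesteps the singularity at 0)."""
    dt = upper / steps
    midpoints = (np.arange(steps) + 0.5) * dt
    return float(np.sum(f(midpoints)) * dt)

def value(budget):
    """Best ∫R(t)dt * ∫E(t)dt over a grid of research/earning splits of the budget."""
    splits = np.linspace(0.05, 0.95, 19)
    return max(integral(marginal_research, s * budget) *
               integral(marginal_earning, (1 - s) * budget) for s in splits)

certain = value(2)        # 2 units of time with 100% probability
gamble = 0.5 * value(4)   # 4 units of time with 50% probability (expected value)
print(f"certain: {certain:.2f}, gamble: {gamble:.2f}, "
      f"gamble is {gamble / certain - 1:.0%} better")
# With these stand-in functions the gamble comes out ~41% better, the same
# ballpark as the "almost 50%" figure above.
```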

CEA’s donor lotteries can be used for that, but really you just need to generate a random number yourself, along with having a commitment mechanism. Although maybe having the lottery be public serves as a commitment mechanism.
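A do-it-yourself draw can be as small as the sketch below. The stakes are made up to match the 1%-vs-5% illustration earlier; the hard part, committing to honour the result, is not something code can supply.

```python
# Toy sketch of running the draw yourself. The stakes are made up to match the
# earlier 1%-vs-5% illustration; committing to honour the result is the hard part.
import secrets

# Each part of the parliament stakes a share of your life time; the winner gets the pot.
stakes = {
    "altruistic values": 1,  # stakes 1% of life time
    "other values":      4,  # stakes 4% of life time
}

pot = sum(stakes.values())        # 5% of life time is on the table
ticket = secrets.randbelow(pot)   # unbiased integer in [0, pot)
threshold = 0
for faction, stake in stakes.items():
    threshold += stake
    if ticket < threshold:
        print(f"{faction} get(s) {pot}% of life time")  # altruistic values win 20% of the time
        break
```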

Money lotteries

As for money, I think it generally has diminishing marginal returns. One notable potential exception is biostasis (see: Buying micro-biostasis). To the extent there are other exceptions, money lotteries are useful for those too. (Do you have other notable examples?)

Reducing variance

At the group level, reducing the variance of resources dedicated to a value is good if the resources have diminishing marginal returns past one individual’s worth of resources, which seems to often be the case.

Two ways to reduce the variance:

  • Participate in a quantum lottery so that each of your values is fulfilled in at least some branches (see instructions in Buying micro-biostasis)
  • Participate in a money lottery with people interested in a similar value, so that even if you lose, the money should still be spent in a way that is relatively close to your values