
This is my first post, and it is really only a question, but I have an initial view that I intend to defend further in the context of longtermism. I'm making this post to get initial responses to the distinction, in case I have missed something obvious!

When we say that we want to maximise value, do we want to maximise currently instantiated value, or cumulative value up to this point? This seems like an important distinction, with implications for which interventions can be considered to maximise value. I will argue that we should adopt a framework which maximises cumulative value (CV).

The Distinction

Consider a blowout party. I will define this as a truly hedonistic, utility-maximising party at which everyone overdoses. Everyone experiences a huge amount of welfare for a short time, and then nothing. If we care only about Instantaneous Value (IV) at a given time, a value-time graph will look like this:

[Figure: IV over time. Value spikes during the party, then drops instantly to zero.]

According to the IV framework, then, the value of the blowout intervention over time must follow some sort of step function: it takes a (high) value for a while and then drops instantly to zero. The intricacies of the calculation are unnecessary here; it is only necessary to see that, in this case, the blowout scenario will ultimately have neutral expected value.

On the other hand, if we care about maximising Cumulative Value (CV), the graph will instead look like this:

[Figure: CV over time. Value rises during the party and then stays at its higher level.]

In this framework the expected value of the intervention is proportional to the value introduced: all of this additional value is added to the overall value of the universe, and it stays there.

This can be phrased more concisely as asking whether we care about maximising the current y-value of the value-time graph, or about maximising its integral. In many cases the two are correlated, but the case described above is one of many in which they diverge (mathematicians will be able to find trivially many functions whose integral does not depend wholly on the value of y at any particular point).
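To make the distinction precise, here is one way it could be written down (the notation is a sketch, not from the post itself): let $V(t)$ be the total value being instantiated at time $t$. The IV framework cares about the current height of the value-time graph, while the CV framework cares about the area under it so far:

\[
\mathrm{IV}(t_{\mathrm{now}}) \;=\; V(t_{\mathrm{now}}),
\qquad
\mathrm{CV}(t_{\mathrm{now}}) \;=\; \int_{0}^{t_{\mathrm{now}}} V(t)\,\mathrm{d}t .
\]

In the blowout scenario, $V(t)$ is large during the party and zero afterwards, so IV drops back to zero while CV keeps the area contributed by the spike.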

Cumulative Value is more intuitive

Intuitively, I think this scenario points towards caring more about cumulative value. And this makes sense: as EAs who are neutral about when and where people exist, it shouldn't matter to us whether a person's high welfare comes now, or hundreds of years in the future or past.

It also seems intuitive that a happy life led now is a good thing. If a person lives a happy life, their death should not remove the value their life added to the world.

How long is an instant?

Another important question is how long we take an instant to be. If we care only about the value being instantiated at any given time, it seems reasonable to ask how long that given period of time is. In practice, of course, this will make expected-value calculations incredibly complex. If we instead consider cumulative value, we only need to consider the value added by the whole project over its acting period, because the value introduced is summed over everything.
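A minimal sketch of this point in code, with made-up numbers (the daily values and window lengths are purely illustrative): the peak "instantaneous" value depends on how long we take an instant to be, while the cumulative total does not.

```python
# A minimal sketch with made-up numbers: value produced per day by some hypothetical project.
daily_value = [5.0] * 10 + [0.0] * 20  # ten good days, then nothing

def windowed_iv(values, window):
    """'Instantaneous' value, measured over windows of a chosen length (in days)."""
    return [sum(values[i:i + window]) for i in range(0, len(values), window)]

def cumulative_value(values):
    """Cumulative value: the running total, independent of any choice of window."""
    total, running = 0.0, []
    for v in values:
        total += v
        running.append(total)
    return running

# The peak "instantaneous" value depends on how long we take an instant to be...
print(max(windowed_iv(daily_value, window=1)))  # 5.0
print(max(windowed_iv(daily_value, window=5)))  # 25.0

# ...while the cumulative total does not.
print(cumulative_value(daily_value)[-1])        # 50.0
print(sum(windowed_iv(daily_value, window=5)))  # 50.0
```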

Concluding

I am not at all certain of anything said here and would love some conversation about it. In particular, I am not certain that the noise in the IV graph would not also be present in the CV graph.

Comments (3)



Hi!

Congrats on sharing your first post here!

Sorry for the unpolished bullet points, I'm in a bit of a hurry right now and would probably forget later, but I think it may still be worth it to point out a few things:

  • The way value is defined is not clearly stated, but it seems implicitly guessable to me here. If you look at counterfactual value, the instantaneous value in the blowout scenario could, and probably would, be negative. It seems to me that you consider value in a non-counterfactual way that attributes zero on the instant scale to nothing existing at all, and zero on the cumulative scale to nothing having ever existed. Is that indeed what you have in mind?
  • About the math: as long as you are integrating a continuous positive function of a real variable on a segment, you'll get a positive, increasing, continuously differentiable function (see the sketch after this list). This may help you reflect on the uncertainty you mention at the end regarding noise.
  • I agree with you that CV is more intuitive - I sometimes think theoretically about maximizing or minimizing (just a matter of convention) some integral over spacetime, spanning the whole universe for space and its whole period of past, present and future existence for time, but that's impractical. However, I think the way you present your example actually makes things look unintuitive. A party where everyone dies at the end is given a positive value, but no, I still wouldn't attend given the chance. My take is: if you think in terms of counterfactual cumulative value it would be negative, and if you set the zero to nothing having ever existed, your scenario may be positive, but probably worth less than the scenario that would have happened otherwise.
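The integration fact in the second bullet, written out a bit more formally (a sketch under that bullet's assumptions of a continuous, positive integrand $V$ on a segment $[a,b]$):

\[
F(x) \;=\; \int_{a}^{x} V(t)\,\mathrm{d}t, \qquad F'(x) \;=\; V(x) \;>\; 0 ,
\]

so, by the fundamental theorem of calculus, $F$ is continuously differentiable and strictly increasing. This is the sense in which a noisy but continuous IV still produces a smooth CV curve.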

This makes a lot of sense. Thanks for highlighting the need to define value more explicitly. I'll have a look into this stuff!

On the math point: I don't think the problem is that IV would be continuous, but in general this would mean the noise is present in both frameworks! The case of x^2 sin(1/x) shows that the integral of a function with a discontinuity is not necessarily discontinuous, but in general discontinuous functions would have a discontinuous integral, so the noise doesn't distinguish the frameworks.

Thank you!

Thanks for your answer!

I thought IV was assumed continuous based on your drawing. Still, I'd be surprised - and I would love to know about it - if you could find a function that has a discontinuous integral and does not seem, to me, unfit to correctly model IV - both out of interest in the mathematical part and out of curiosity about which functions we respectively think can correctly model IV.

I think that piecewise continuity and local boundedness are already enough to ensure continuity and almost-everywhere continuous differentiability of the integral. I personally don’t think that functions that don’t match these hypotheses are reasonable candidates for IV, but I would allow IV to take any sign. What are your thoughts on this ?
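One way to spell out this last claim (a sketch, assuming $V$ is piecewise continuous and bounded on $[a,b]$, with $F(x)=\int_a^x V(t)\,\mathrm{d}t$):

\[
|F(x)-F(y)| \;\le\; \Big(\sup_{[a,b]}|V|\Big)\,|x-y| \quad\text{for all } x,y\in[a,b],
\]

so $F$ is continuous everywhere (indeed Lipschitz), and $F'(x)=V(x)$ at every point where $V$ is continuous, which is everywhere except the finitely many jump points.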
