
Part 1 (15 mins.)

Helping in the present or in the future?

A commonly held view within the EA community is that it's incredibly important to start by thinking about what it really means to make a difference, before thinking about specific ways of doing so. It's hard to do the most good if we haven't tried to get a clearer picture of what doing good means, and as we saw in chapter 3, clarifying our views here can be quite a complex task.

One of the core commitments of effective altruism is to the ethical ideal of impartiality. Although in normal life we may reasonably have special obligations (e.g., to friends and family), in their altruistic efforts aspiring effective altruists strive to avoid privileging some people's interests over others' based on arbitrary factors such as their appearance, race, gender, or nationality.

Longtermism posits that we should also avoid privileging the interests of individuals based on when they might live.

In this chapter's exercise, we'll reflect on some prompts to help you start working out your own answer to this question: "Do the interests of people who are not alive yet matter as much as the interests of people living today?"

Please read this short description of temporal discounting and then spend a couple of minutes thinking through each prompt, noting down your thoughts — feel free to jot down uncertainties or open questions that seem relevant. We encourage you to note down your thought process, but feel free to simply report your intuitions and gut feelings.
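To make the idea of temporal discounting concrete: under exponential discounting, a benefit received t years from now is valued at only (1 − r)^t of its present value, where r is an annual discount rate. The sketch below is purely illustrative (the 1% rate and the numbers are made up, not a claim about what rate anyone should use) — it just shows how even a small rate makes far-future lives count for almost nothing, which is exactly what the first prompt asks you to weigh.

```python
def discounted_value(value: float, years: float, annual_rate: float) -> float:
    """Present value of a future benefit under exponential discounting."""
    return value * (1 - annual_rate) ** years

# With a (purely illustrative) 1% annual discount rate, 1000 lives
# saved 200 years from now are "worth" about 134 lives today...
print(discounted_value(1000, 200, 0.01))

# ...and 1000 lives saved 2000 years from now are worth
# a tiny fraction of a single life today.
print(discounted_value(1000, 2000, 0.01))
```

Whether it is ever appropriate to discount *lives* this way (as opposed to money, which can earn interest) is precisely the question the prompts below are probing.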

Of course, these thought experiments all assume an unrealistic level of certainty about your options and their outcomes. For the purpose of this exercise, however, we encourage you to accept the premise of the thought experiments instead of trying to find loopholes. The idea is to isolate one particular aspect of a situation (e.g., the timing of our impact) and try to get at our moral intuitions about just that aspect.

  1. Suppose that you could save 100 people today by burying toxic waste that will, in 200 years, leak out and kill thousands. Would you choose to save the 100 now and kill the thousands later? Does it make a difference whether the toxic waste leaks out 200 years from now or 2000?
  2. Imagine you donate enough money to the Against Malaria Foundation (AMF) to save a life. Unfortunately, there’s an administrative error with the currency transfer service you used, and AMF isn’t able to use your money until 5 years after you donated. Public health experts expect malaria rates to remain high over the next 5 years, so AMF expects your donation will be just as impactful in 5 years time. Many of the lives that AMF saves are of children under 5, and so the life your money saves is of someone who hadn’t been born yet when you donated.

    If you had known this at the time, would you have been less excited about the donation?

Part 2 (30 mins.)

One question (among many) that is relevant to this topic is “when will we develop human-level AI?”. 

It's obviously not possible to just look this up, or to gather direct data on this question. So we need to gather what data and arguments we have, and make a judgment call. This applies to AI and other existential risks, but also to most questions that we're interested in: "How many chickens will move to better conditions if we pursue this advocacy campaign?", "How much do we need to spend on bednets to save a life?"

These judgments are really important: they could make a big difference to the impact we have.

Unfortunately, we don’t yet have definitive answers to these questions, but we can aim to become “well-calibrated.” This means that when you say you’re 50% confident, you’re right about 50% of the time, not more, not less; when you say you're 90% confident, you're right about 90% of the time; and so on. 
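Calibration can be checked empirically: group your past predictions by the confidence you stated, and compare each group's actual hit rate to that confidence. The sketch below illustrates this (the prediction record is made up for the example, not real data):

```python
from collections import defaultdict

def calibration_table(predictions):
    """predictions: list of (stated_confidence, was_correct) pairs.
    Returns {confidence: observed hit rate}. Well-calibrated judgment
    means each hit rate is close to the stated confidence."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for confidence, correct in predictions:
        totals[confidence] += 1
        hits[confidence] += correct
    return {c: hits[c] / totals[c] for c in totals}

# Hypothetical record: ten predictions made at 90% confidence,
# of which only 7 turned out right — overconfident at the 90% level.
record = [(0.9, True)] * 7 + [(0.9, False)] * 3
print(calibration_table(record))  # {0.9: 0.7}
```

This is the same comparison the calibration training app performs for you automatically over many questions.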

This exercise aims to help you become well-calibrated. The app you'll use contains thousands of questions - enough for many hours of calibration training - that will measure how accurate your predictions are and chart your improvement over time. Nobody is perfectly calibrated; in fact, most of us are overconfident. But various studies show that this kind of training can quickly improve the accuracy of your predictions.

Of course, most of the time we can’t check the answers to the questions life presents us with, and the predictions we’re trying to make in real life are aimed at complex events. The Calibrate Your Judgment tool helps you practice on simpler situations where the answer is already known, providing you with immediate feedback to help you improve.

Have a go using the Calibrate Your Judgment app for around 30 minutes! 






4 Answers

I'm finding the app's feedback misleading, and none of the explanations on the About/FAQ page expand in my Chrome or Opera.

Thanks for flagging! I've sent a bug report to the developers of the app.

Edit: they fixed it

While I am not a longtermist, I would not choose an action that would directly put the lives of others at risk, even in 200 years. In the scenario, we are told that the toxic waste will leak, so it is certain that thousands of lives will be lost. Compared to the 100 lives that would otherwise be lost now, I would not risk that many lives even though they are far in the future. While we have talked about discount functions, it would be immoral to treat human lives in that way.

In the second scenario, where we are asked about 200 years versus 2000 years, temporal discounting applies at a higher rate. Thinking that far into the future is hard because I would need to consider other things that might happen in the meantime, such as existential catastrophes that could wipe out humanity before then. In that case, I would do more evaluation: if I were confident that humanity would be wiped out in that time, then I would save the 100 people now. However, this would only be in a case where I am very confident that humanity will be gone by then, meaning the toxic waste I bury would have no effect on people in that future.

Week 5 exercise.

A. I would save the 100 people now by burying the waste, as there is a high chance that technology will have advanced within a decade and we might be able to save the thousands of people in the future too. I will work to save those thousands of future people by contributing to research. B. I'd still be excited, as even if it's about someone who isn't born yet, I'd still be able to save them.

A. I will save the 100 now that need to be saved, and over the next 200 years I will work out measures or pathways they can follow to minimise their casualties then, since the leak is certain to occur. B. I would still be excited because, irrespective of the timing and the issues that delayed the transfer, the bottom line for me is that the money was used for the same purpose regardless of the time variation. As such, I will not be upset about the delay in when it was used.