Jack Malde

Bio

I work as an economist and previously worked in management consulting.

I am interested in longtermism, global priorities research and animal welfare. Check out my blog The Ethical Economist.

Please get in touch if you would like to have a chat sometime.

Feel free to connect with me on LinkedIn.

Comments

Prioritising global poverty doesn't necessarily mean you discount the future. You might prioritise it because you don't think the leading longtermist interventions are tractable, or for other reasons, e.g. population ethics.

I personally don't find just considering short-term impact to be helpful but appreciate others disagree.

I don't see how your two counterarguments are arguments against a longtermist framing. Rather, they seem to be arguments against the benefits of economic growth once one has accepted a longtermist framing (maybe this is what you meant?). If you're saying the benefits of economic growth don't stretch into the long term, then that counts against prioritising economic growth, although for the record I'm not convinced of the truth of claim (1).

"I ran a quick regression of the developed countries in Easterlin's dataset, and the coefficient on GDP growth actually increases. This implies that an income doubling actually matters more in rich countries today than it does for poorer countries."

How long can this continue? That's what I wonder. When we're all rich enough to have a pretty nice set of material possessions, is more growth going to continue to boost our happiness? Certainly in my own life I don't want more material goods - I want better relationships and more meaning. Also, there have been criticisms of using subjective wellbeing (SWB) data in low-income settings - see my comment here.
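For anyone who wants to poke at this kind of claim, here is a minimal sketch of the interaction regression quoted above. All column names and the synthetic data are hypothetical stand-ins; with real data you would load an Easterlin-style country panel instead.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country panel: average annual GDP growth, a developed-country
# indicator, and the change in average life satisfaction over the period.
rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "gdp_growth": rng.normal(2.0, 1.0, n),
    "is_developed": rng.integers(0, 2, n),
})
df["swb_change"] = 0.1 * df["gdp_growth"] + rng.normal(0.0, 0.3, n)

# The interaction term tests whether the growth coefficient differs for
# developed countries - the claim being quoted above.
model = smf.ols("swb_change ~ gdp_growth * is_developed", data=df).fit()
print(model.summary().tables[1])
```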

Lastly, I share the intuition that we should not worry as much about happiness as about existential security. This may be another argument against thinking too much about stimulating the kind of growth that would compound into the long term. That kind of growth would likely be frontier growth, potentially increasing existential risk as we develop new technologies.

For the record, some have argued that technological progress and economic growth can lower existential risk. This is because we can develop technologies that can allow us to reduce risks, as well as becoming richer making us more concerned with safety. See here and also Will MacAskill's new book What We Owe the Future which discusses the risks of stagnation.

Generally this discussion, even within EA circles, is much too short-termist.

Most people in EA don't discount the future much, if at all. This means they will care about timescales of thousands or even millions of years. The compounding nature of economic growth means that increased growth could leave people in the mid-to-far future much richer than they otherwise would have been. So even a small per-year effect of economic growth on happiness can add up to a lot of extra happiness over a long time frame. This impact, as you rightly imply, may be impossible for other interventions to match.
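To make the compounding point concrete, here is a quick back-of-the-envelope sketch (the two growth rates are purely illustrative, not forecasts):

```python
# Compounding illustration: a small, sustained bump in the annual growth
# rate produces an enormous gap in income levels over long timescales.
for years in (100, 500, 1000):
    baseline = 1.02 ** years   # 2.0% annual growth
    boosted = 1.025 ** years   # 2.5% annual growth
    print(f"{years:>4} years: incomes {boosted / baseline:,.1f}x higher "
          f"under the boosted rate")
```

After 100 years the gap is modest (roughly 1.6x), but after 1,000 years the boosted path leaves people over a hundred times richer - which is why small growth effects loom so large on longtermist timescales.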

The best counterarguments to this are:

  • We cannot sustain our current rates of growth for very long before reaching physical limits. So growth can only do so much, even over the long term - see the sketch after this list.
  • Diminishing marginal utility means we may reach a limit to how much being wealthier can make one happier. We might even be nearing this point now in more developed countries. At that point we would need an alternative approach to increasing wellbeing, or we could simply create more and more people to increase total happiness. An alternative approach I would be interested in is research into psychedelics - this might help us make everyone incredibly happy indeed. Then it would just be a matter of increasing the number of people!
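To put rough numbers on both bullets (the 2% growth rate and the log utility function are illustrative assumptions, not settled facts):

```python
import math

# (1) Physical limits: at a constant 2% growth rate, output doubles roughly
#     every 35 years and multiplies astronomically within a millennium.
g = 0.02
doubling_time = math.log(2) / math.log(1 + g)
print(f"Doubling time at {g:.0%} growth: {doubling_time:.0f} years")
for years in (1_000, 2_000):
    print(f"After {years:,} years: output grows by a factor of "
          f"{(1 + g) ** years:.1e}")

# (2) Diminishing marginal utility: under log utility, every income doubling
#     adds the same fixed increment of wellbeing (ln 2), no matter how rich
#     you already are - each extra dollar buys less and less happiness.
for income in (1_000, 10_000, 100_000):
    gain = math.log(2 * income) - math.log(income)  # always ln(2) = 0.693
    print(f"Utility gain from doubling an income of {income:,}: {gain:.3f}")
```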

For the record, I still think that from a longtermist point of view we shouldn't be thinking about happiness at all, and instead about navigating between lock-in scenarios à la MacAskill/Ord. When/if we reach existential security we can then think about happiness.

Are you sure you linked to the right place? I don't see an example of a role you think is very harmful.

I think you meant to reply to Yonatan Cale and not me?

You might not be aware of them if you haven't signed up for Facebook careers alerts or don't look at the Facebook careers website regularly.

Of course you might say "just sign up for the careers alerts then". But you'd then want to do this for all of the impressive organisations that you would potentially want to work for, of which there may be quite a few. Two possible downsides of this are:

  • You might miss a few good options accidentally. Maybe the places to work as a software engineer are pretty obvious, but this won't always be the case. For example, someone looking to build career capital in policy may not be aware of all of the good options available, including individual think tanks or other organisations that do impactful policy work. I work at the Confederation of British Industry (CBI), which surprisingly many people aren't aware of (I met Will MacAskill one time and he hadn't heard of it!) - but it has a pretty good reputation, and soon after I joined, someone left the CBI to become Executive Director of the Centre for Data Ethics & Innovation, a pretty high-impact and EA-relevant role. Furthermore, I don't think her previous role at the CBI was directly high impact (by EA lights), so it would have been excluded from 80K's job board under your preference. In short, I doubt everyone is automatically aware of all good career capital roles.
  • It might be annoying to get loads of career alert emails when you could see everything in one place. I quite like getting the email from 80,000 Hours reminding me to look at the job board and then seeing everything there. Makes life kind of easy! I don't 100% rely on the 80K job board, but if it covers all bases then one can rely on it guilt-free, which makes life easier.

This is a much simpler problem that I'm happy to help with.

I'd rather all roles be summarised in one place for simplicity. If people are concerned about not knowing which roles are for career capital vs direct impact then 80K can signpost that - which I think I am in favour of. I'm not sure why removing the career capital roles would be the better approach - I think it would be a loss of value.

"Some of these are potentially overly damaging"

Can you give some examples? I'm interested.

"I'm just grateful 80K has listed them for me"

When I said this I was referring to all the roles they list, not just the career capital ones, by the way.

I think I’d be in favour of you guys indicating for each role if it is:

  • Mainly there for direct impact
  • Mainly there for career capital
  • There for both impact and career capital (i.e. neither clearly dominates the other)

Obviously whoever is putting the role on the board is doing so for a reason, so should know the answer to this question.

Personally I’m super grateful 80,000 Hours posts roles that are good for career capital, even if they are at organisations that have questionable overall impact, and even if the roles themselves may actively do harm. This is because I’m still fairly young and am interested in building career capital!

It’s plausible a software engineering role at Facebook might do harm, but I still think many EAs would rightly jump at this opportunity, and I’d rather have an EA in the role than a non-EA.

Also, I feel like it's pretty easy for me to know which roles are directly impactful and which are for career capital (or which are both). For example, roles that aren't clearly related to one of 80K's top problems are usually going to be there for career capital reasons - these are also usually at very well-known orgs. 80K actually gives some examples in its guidance of organisations that some people might think are harmful but whose roles it still recommends - namely Amazon, Facebook and the US military. It seems pretty obvious to me, as someone who has read the rest of 80K's guidance, how to judge the roles for myself, and I'm just grateful 80K has listed them for me.

So I think it would be a big loss if 80,000 Hours stopped posting these roles. Should they make it clearer which roles are immediately directly impactful and which aren’t? Maybe. This would be catering for people who can’t figure it out for themselves, but that isn’t necessarily a bad thing…

In practice you would have to assume that people generally report on the same scale. There is some evidence from happiness research that this is the case (I think), but I'm not sure where this research has got to.

From your original question I thought you were essentially trying to understand, in theory, what weighting one unit of pain as greater than one unit of pleasure might mean. As per my example above, one could prioritise a one unit change on a self-reported scale if the change occurs at a lower position on the scale (assuming different respondents are using the same scale).

Another perspective is that one could consider two changes that are the same in "intensity", but where one involves alleviating suffering (giving some food to a starving person) and the other involves making someone happier (giving someone a gift) - and then prioritise giving the food. For these two actions to be the same in intensity, you can't be giving all that much food to the starving person, because it is generally easy to alleviate a large amount of suffering with a 'small' amount of food, but relatively difficult to increase the happiness of someone who isn't suffering much, even with an expensive gift.
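One way to formalise this, purely as a toy model (and assuming respondents use a 0-10 scale comparably), is to apply a concave value function to the scale, so that a one-point gain counts for more the lower down the scale it occurs:

```python
import math

# Toy prioritarian weighting over a 0-10 self-reported wellbeing scale.
# The log transform is an illustrative choice, not a standard measure.
def value(swb: float) -> float:
    return math.log(1 + swb)

def gain(before: float, after: float) -> float:
    return value(after) - value(before)

print(f"Food for a starving person (1 -> 2): {gain(1, 2):.3f}")
print(f"Gift for a content person (7 -> 8): {gain(7, 8):.3f}")
```

Under this weighting the move from 1 to 2 is worth over three times the move from 7 to 8, matching the intuition about the food and the gift.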

Not sure if I’m answering your questions at all but still interesting to think through!
