Jack Malde

I work as an economist and previously worked in management consulting.

I am interested in longtermism, global priorities research and animal welfare. Check out my blog The Ethical Economist.

Please get in touch if you would like to have a chat sometime.

Feel free to connect with me on LinkedIn.


Comments

Critiques of EA that I want to read

When it comes to comparisons of value between person-affecting views (PAVs) and total views, I don't really see much of a problem, as I'm not sure the comparison is actually inter-theoretic. Both PAVs and total views are additive, consequentialist views on which welfare is what has intrinsic value. It's just that some things count under a total view that don't count under (many) PAVs, namely the value of a new life. So accounting for both PAVs and a total view in a moral uncertainty framework doesn't seem too much of a problem to me.

What about genuine inter-theoretic comparisons, e.g. between deontology and consequentialism? Here I'm less sure, but generally I'm inclined to say there still isn't a big issue. Instead of choosing specific values, we can choose 'categories' of value. Consider a meteor hurtling towards Earth, destined to wipe us all out. Under a total view we might say it would be "astronomically bad" to let the meteor wipe us out. Under a deontological view we might say it is "neutral", as we aren't actually doing anything wrong by letting the meteor wipe us out (if you hold a view that invokes an act/omission distinction). So what I'm doing here is assigning categories such as "astronomically bad", "very bad", "bad", "neutral", "good" etc. to acts under different ethical views, which seems easy enough. We can then use these categories in our moral uncertainty reasoning. This doesn't seem that arbitrary to me, although I accept it may still run into issues.
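
To make the category idea above concrete, here is a minimal sketch in Python. The theories, credences and the numbers attached to each category are placeholders I've invented purely for illustration; any reasonable monotone mapping from categories to numbers would make the same point.

```python
# Toy sketch: comparing an act across theories using coarse value categories
# rather than precise cardinal values. All numbers below are illustrative
# placeholders, not claims about the correct way to cash out each category.

CATEGORY_SCORES = {
    "astronomically bad": -1_000_000,
    "very bad": -1_000,
    "bad": -10,
    "neutral": 0,
    "good": 10,
}

# Hypothetical categorical verdicts on "let the meteor wipe us out".
verdicts = {
    "total view": "astronomically bad",
    "deontology (act/omission)": "neutral",
}

# Hypothetical credences in each theory.
credences = {"total view": 0.5, "deontology (act/omission)": 0.5}

expected_score = sum(
    credences[theory] * CATEGORY_SCORES[category]
    for theory, category in verdicts.items()
)
print(expected_score)  # strongly negative: letting the meteor hit looks very bad overall
```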

Critiques of EA that I want to read

I'm looking forward to reading these critiques! A few thoughts from me on the person-affecting views critique:

  1. Most people, myself included, find existence non-comparativism a bit bonkers. This is because most people accept that if you could create someone who you knew with certainty would live a dreadful life, you shouldn't create them, or at least that it would be better if you didn't (all other things equal). So when you say that existence non-comparativism is highly plausible, I'm not so sure that is true...
  2. Arguing that existence non-comparativism and the person-affecting principle (PAP) are plausible isn't enough to argue for a person-affecting view (PAV), because many people reject PAVs on account of their unpalatable conclusions (which can signal that the underlying motivations for PAVs are flawed). My understanding is that the most common objection to PAVs is that they run into the non-identity problem, implying for example that there's nothing wrong with climate change turning our planet into a hellscape, because this won't make life worse for anyone in particular: climate change itself will change the identities of who comes into existence. Most people agree the non-identity problem is just that...a problem, because not caring about climate change seems a bit stupid. This counts against the plausibility of narrow person-affecting views.
    • Similarly, if we know people are going to exist in the future, it just seems obvious to most that it would be a good thing, as opposed to a neutral thing, to take measures to improve the future (conditional on the fact that people will exist).
  3. It has been argued that moral uncertainty over population axiology pushes one towards actions endorsed by a total view even if one's credence in such views is low. This assumes one uses an expected moral value approach to dealing with moral uncertainty (a toy sketch of this approach is included after this list). This would in turn imply that having non-trivial credence in a narrow PAV isn't really a problem for longtermists. So I think you have to do one of the following:
    • Argue that this Greaves/Ord paper has flawed reasoning
    • Argue that we can have zero or virtually zero credence in total views
    • Argue that an expected moral value approach isn't appropriate for dealing with moral uncertainty (this is probably your best shot...)
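
For what it's worth, here is a minimal sketch of the expected moral value reasoning in point 3, with entirely made-up credences and values. The point is only that a theory assigning astronomically large value to an outcome (here, a total view on extinction risk reduction) can dominate the expectation even at low credence.

```python
# Toy expected moral value (expected choiceworthiness) calculation.
# Credences and values are hypothetical numbers chosen only to illustrate
# that a total view can dominate the expectation even at low credence.

credences = {
    "total view": 0.1,                    # low credence in the total view
    "narrow person-affecting view": 0.9,  # high credence in a narrow PAV
}

# Moral value each theory assigns to each action (illustrative only).
values = {
    "total view": {"reduce extinction risk": 10**15, "do nothing": 0},
    "narrow person-affecting view": {"reduce extinction risk": 10**3, "do nothing": 0},
}

def expected_moral_value(action: str) -> float:
    """Credence-weighted sum of the value each theory assigns to the action."""
    return sum(credences[t] * values[t][action] for t in credences)

for action in ("reduce extinction risk", "do nothing"):
    print(f"{action}: {expected_moral_value(action):.3g}")
# Even with only 0.1 credence in the total view, "reduce extinction risk"
# dominates the expected moral value comparison.
```
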
On Deference and Yudkowsky's AI Risk Estimates

I'm confused by the fact that Eliezer's post was published on April Fools' Day. To what extent does that contribute to conscious exaggeration on his part?

Questions to ask Will MacAskill about 'What We Owe The Future' for 80,000 Hours Podcast (possible new audio intro to longtermism)

My comment on your previous post should have been saved for this one. I copy the questions below:

  • What do you think is the best approach to achieving existential security and how confident are you on this?
  • Which chapter/part of "What We Owe The Future" do you think most deviates from the EA mainstream?
  • In what way(s) would you change the focus of the EA longtermist community if you could?
  • Do you think more EAs should be choosing careers focused on boosting economic growth/tech progress?
  • Would you rather see marginal EA resources go towards reducing specific existential risks or boosting economic growth/tech progress?
  • The Future Fund website highlights immigration reform, slowing down demographic decline, and innovative educational experiments to empower young people with exceptional potential as effective ways to boost economic growth. How confident are you that these are the most effective ways to boost growth?
  • Where would you donate to most improve the long-term future?
    • Would you rather give to the Long-Term Future Fund or the Patient Philanthropy Fund?
  • Do you think you differ from most longtermist EAs on the "most influential century" debate and, if so, why?
  • How important do you think Moral Circle Expansion (MCE) is and what do you think are the most promising ways to achieve it?
  • What do you think is the best objection to longtermism/strong longtermism?
    • Fanaticism? Cluelessness? Arbitrariness?
  • How do you think most human lives today compare to the zero wellbeing level?
Longtermist slogans that need to be retired

Well I’d say that funding lead elimination isn’t longtermist, all other things equal. It sounds as if FTX’s motivation for funding it was community health / PR reasons, in which case it may have longtermist benefits through those channels.

Whether longtermists should be patient or not is a tricky, nuanced question which I am unsure about, but I would say I’m more open to patience than most.

Critiques of EA that I want to read

Broad longtermist interventions don't seem so robustly positive to me, in case the additional future capacity is used to do things that are in expectation bad or of deeply uncertain value according to person-affecting views, which is plausible if these views have relatively low representation in the future.

Fair enough. I shouldn't really have said these broad interventions are robust to person-affecting views, because that is admittedly very unclear. I do find these broad interventions to be robustly positive overall though, as I think we will get closer to the 'correct' population axiology over time.

I'm admittedly unsure if a 'correct' axiology even exists, but I do think that continued research can uncover potential objections to different axiologies, allowing us to make a more 'informed' decision.
 

Critiques of EA that I want to read

AI safety's focus would probably shift significantly, too, and some of it may already be of questionable value on person-affecting views today. I'm not an expert here, though.

I've heard the claim that optimal approaches to AI safety may depend on one's ethical views, but I've never really seen a clear explanation of how or why. I'd like to see a write-up of this.

Granted, I'm not as well read on AI safety as many, but I've always had the impression that the AI safety problem really is "how can we make sure AI is aligned with human interests?", which seems pretty robust to any ethical view. The only argument against this that I can think of is that human interests themselves could be flawed. If humans don't care about, say, animals or artificial sentience, then it wouldn't be good enough to have AI aligned with human interests - we would also need to expand humanity's moral circle or ensure that those who create AGI have an expanded moral circle.

Critiques of EA that I want to read

And, if there was a convincing version of a person-affecting view, it probably would change a fair amount of longtermist prioritization.

This is an interesting question in itself that I would love someone to explore in more detail. I don't think it's an obviously true statement. To give a few counterpoints:

  • People have justified work on x-risk by considering only the effects an existential catastrophe would have on people alive today (see here, here and here).
  • The EA longtermist movement has a significant focus on AI risk, which I think stands up to a person-affecting view given that it is a significant s-risk.
  • Broad longtermist approaches such as investing for the future, global priorities research and movement building seem pretty robust to plausible person-affecting views.

I’d really love to see a strong defense of person-affecting views, or a formulation of a person-affecting view that tries to address critiques made of them.

I'd point to this attempt, which was well explained in a forum post. There is also this, which I haven't really engaged with much but which seems relevant. My sense is that the philosophical community has been trying to formulate a convincing person-affecting view and has, in the eyes of most EAs, failed. Maybe there is more work to be done, though.

How to dissolve moral cluelessness about donating mosquito nets

Ok, although it’s probably worth noting that climate change is generally not considered to be an existential risk, so I’m not sure considerations of emissions/net zero are all that relevant here. I think population change is more relevant in terms of its impact on economic growth / tech stagnation, which in turn should have an impact on existential risk.

How to dissolve moral cluelessness about donating mosquito nets

To a donor who would like to save lives in the present without worsening the long-term future, however, we may just have reduced moral cluelessness enough for them to feel comfortable donating bednets.

I have to admit I find this slightly bizarre. Such a person would accept that we can improve or worsen the far future in expectation and that the future has moral value. At the same time, such a person wouldn't actually care about improving the far future; they would simply not want to worsen it. I struggle to understand the logic of such a view.
