jackmalde

I am working as an economist at the Confederation of British Industry (CBI) and previously worked in management consulting.

I am interested in longtermism, global priorities research and animal welfare.

I’m happy to have a chat sometime: https://calendly.com/jack-malde

Feel free to connect with me on LinkedIn: https://www.linkedin.com/in/jack-malde/

Comments

Concerns with ACE's Recent Behavior

Thanks for writing this comment as I think you make some good points and I would like people who disagree with Hypatia to speak up rather than stay silent.

Having said that, I do have a few critical thoughts on your comment. 

Your main issue seems to be the claim that these harms are linked, but you respond only by saying how you feel reading the quote, which isn't a particularly valuable approach.

I don’t think this was Hypatia’s main issue. Quoting Hypatia directly, they imply the following are the main issues:

  • The language used in the statement makes it hard to interpret and assess factually
  • It made bold claims with little evidence
  • It recommended readers spend time going through resources of questionable value

Someone called Encompass a hate group (which, as a side note, it definitely is not). The Anima Executive Director in question liked this comment.

You bring this up a few times in your comment. Personally I give the ED the benefit of the doubt here, because the comment in question also said “what does this have to do with helping animals”, which is a point the ED makes elsewhere in the thread, so it’s possible that they were agreeing with this part of the comment as opposed to the ‘hate group’ part. I can’t be sure of course, but I highly doubt the ED genuinely agrees that Encompass is a hate group, given that their other comments in the thread seem fairly respectful of Encompass, including “it's not really about animal advocacy, it's about racial injustice and how animal advocates can help with that. That's admirable of course, I just don't think it's relevant to this group”.

This was a red-flag to ACE (and probably should have been to many people), since the ED had both liked some pretty inflammatory / harmful statements, and was speaking on a topic they clearly had both very strong and controversial views on, regarding which they had previously picked fights on.

You seem to imply that others should have withdrawn from the conference too, or at least that they should have considered it? This all gets to the heart of the issue about free speech and cancel culture. Who decides what’s acceptable and what isn’t? When is expressing a different point of view just that, vs. "picking a fight"? Is it bad to hold "strong and controversial views"?

People were certainly affected by the ED’s comments, but people are affected by all sorts of comments that we don’t, and probably shouldn't, cancel people for. People will be affected by your comment, and people will be affected by my comment. When talking about contentious issues, people will be affected. It’s unavoidable unless we shut down debate altogether. You imply that the ED's actions were beyond the pale, but we need to realise that this is an inherently subjective viewpoint and it's clearly the case that not everyone agrees. So whilst ACE had the right to withdraw, I'm not sure we can imply that others should have too.

Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration.

I don't find your comment to offer much argument as to why it might be bad if papers like this one become more widespread. What are you actually worried would happen? This isn't clear to me at the moment.

I agree a paper that just says "we should ignore the repugnant conclusion" without saying anything else isn't very helpful, but this paper does at least gather reasons why the repugnant conclusion may be on shaky ground which seems somewhat useful to me.

Confusion about implications of "Neutrality against Creating Happy Lives"

My short answer is that 'neutrality against creating happy lives' is not a mainstream position in the EA community. Some do hold that view, but I think it's a minority. Most think that creating happy lives is good.

On the longtermist case for working on farmed animals [Uncertainties & research ideas]

Thanks for writing this Michael, I would love to see more research in this area. 

Thus, it seems plausible that expanding a person’s moral circle to include farm animals doesn’t bring the “boundary” of that person’s moral circles any “closer” to including whatever class of beings we’re ultimately concerned about (e.g., wild animals or artificial sentient beings). Furthermore, even if expanding a person’s moral circle to include farm animals does achieve that outcome, it seems plausible that the outcome would be better achieved by expanding moral circles along other dimensions (e.g., by doing concrete wild animal welfare work, advocating for caring about all sentient beings, or advocating for caring about future artificial sentient beings).[2] 

This is definitely an important point.

This is very speculative, but part of me wonders if the best thing to advocate for is (impartial) utilitarianism. This would, if done successfully, expand moral circles across all relevant boundaries, including farm animals, wild animals, artificial sentience, and future beings. Advocacy for utilitarianism would naturally include "examples", such as ending factory farming, so it wouldn't have to be entirely removed from talk of farmed animals. I'm quite uncertain whether such advocacy would be effective (or even be good in expectation), but it is perhaps an option to consider.

(Of course, this all assumes that utilitarianism is true, or at least the best moral theory we currently have.)

Possible misconceptions about (strong) longtermism

To be honest I'm not really sure how important the distinction between simple and complex cluelessness actually is. The most useful thing I took from Greaves was realising that there seems to be an issue of complex cluelessness in the first place: we can't really form precise credences in certain instances where people have traditionally felt they can, and these instances are often faced by EAs when they're trying to do the most good.

Maybe we're also complexly clueless about what day to conceive a child on, or which chair to sit on, but we don't really have our "EA hat on" when doing these things. In other words, I'm not having a child to do the most good, I'm doing it because I want to. So I guess in these circumstances I don't really care about my complex cluelessness. When giving to charity, I very much do care about any complex cluelessness, because I'm trying to do the most good and really thinking hard about how to do so.

I'm still not sure if I would class myself as complexly clueless when deciding which chair to sit on (I think from a subjective standpoint I at least feel simply clueless), but I'm also not sure this particular debate really matters.

Possible misconceptions about (strong) longtermism

So far, I feel I've been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it's the sort of thing that would never happen in real life, or the information given is less than one would have in real life).

I think simple cluelessness is a subjective state. In reality one chair might be slightly older, but one can be fairly confident that it isn't worth trying to find out (in expected value terms). So I think I can probably just modify my example to one where there doesn't seem to be any subjectively salient factor to pull you to one chair or the other in the limited time you feel is appropriate to make the decision, which doesn't seem too far-fetched to me (let's say the chairs look the same at first glance, they are both in the front row on either side of the aisle, etc.).

I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other - otherwise they fall prey to paralysis. Admittedly I haven't read James Lenman closely enough to know whether he does in fact invoke paralysis as a necessary consequence for consequentialists, but I think it would probably be the conclusion.

EDIT: To be honest I'm not really sure how important the distinction between simple and complex cluelessness actually is. The most useful thing I took from Greaves was realising that there seems to be an issue of complex cluelessness in the first place, where we can't really form precise credences. 

FWIW, I also think other work of Greaves has been very useful. And I think most people - though not everyone - who've thought about the topic think the cluelessness stuff is much more useful than I think it is.

For me, Greaves' work on cluelessness highlighted a problem I hadn't realised was there in the first place. I do feel the force of her claim that we may no longer be able to justify certain interventions (for example giving to AMF), and I think this should hold even for shorttermists (provided they don't discount indirect effects at a very high rate). The decision-relevant consequence for me is trying to find interventions that don't fall prey to the problem, which might be the longtermist ones that Greaves puts forward (although I'm uncertain about this).

Possible misconceptions about (strong) longtermism

Your critique of the conception example might be fair actually. I do think it's possible to think up circumstances of genuine 'simple cluelessness' though, where, from a subjective standpoint, we really don't have any reason to think one option may be better or worse than the alternative. 

For example, we can imagine there being two chairs in front of us and having to choose which chair to sit on. There doesn't seem to be any point stressing about this decision (assuming there isn't some obvious consideration to take into account), although it is certainly possible that choosing the left chair over the right chair could be a terrible decision ex post. So I do think this decision is qualitatively different to donating to AMF. 

However, I think the reason Greaves introduces the distinction between complex and simple cluelessness is to save consequentialism from Lenman's cluelessness critique (going by hazy memory here). If a much wider class of decisions suffers from complex cluelessness than Greaves originally thought, this could prove problematic for her defence. Having said that, I do still think that something like working on AI alignment probably avoids complex cluelessness for the reasons I give in the post, so I think Greaves' work has been useful.

Possible misconceptions about (strong) longtermism

Thanks for all your comments Michael, and thanks for recommending this post to others!

I have read through your comments and there is certainly a lot of interesting stuff to think about there. I hope to respond, but I might not be able to do that in the very near future.

I'd suggest editing the post to put the misconceptions in the headings in quote marks

Great suggestion thanks, I have done that.

The Epistemic Challenge to Longtermism (Tarsney, 2020)

Thanks, yeah, I saw this section of the paper after I posted my original comment. I might be wrong, but I don't think he really engages in this sort of discussion in the video, and I had only watched the video and skimmed through the paper. 

So overall I think you may be right in your critique. It might be interesting to ask Tarsney about this (although it might be a fairly specific question to ask).
