Alex HT

Comments

Is there evidence that recommender systems are changing users' preferences?

Is The YouTube Algorithm Radicalizing You? It’s Complicated.

Recently, there's been significant interest within the EA community in investigating the short-term social and political risks of AI systems. I'd like to recommend this video (and Jordan Harrod's channel as a whole) as a starting point for understanding the empirical evidence on these issues.

Confusion about implications of "Neutrality against Creating Happy Lives"

I agree with this answer. Also, lots of people do think that temporal position (or something similar, like already being born) should affect ethics.

But yes OP, accepting time neutrality and being completely indifferent about creating happy lives does seem to me to imply the counterintuitive conclusion you state. You might be interested in this excellent emotive piece or section 4.2.1 of this philosophy thesis. They both argue that creating happy lives is a good thing.

Response to Phil Torres’ ‘The Case Against Longtermism’

I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?


Yep, that is what I'm saying. I think I don't agree, but thanks for explaining :)

Response to Phil Torres’ ‘The Case Against Longtermism’

Can you say a bit more about why the quote is objectionable? I can see why the conclusion 'saving a life in a rich country is substantially more important than saving a life in a poor country' would be objectionable. But it seems Beckstead is saying something more like 'here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries' (because he says 'other things being equal').

Should I transition from economics to AI research?

There are also more applied AI/tech-focused economics questions that seem important for longtermists (eg. if GPI stuff seems too abstract for you).

Running an AMA on the EA Forum

Agree with Marisa that you'd be well suited to do an AMA

How can non-biologists contribute to wild animal welfare?

Also not CS, and you may already know it: this EAG talk is about wild animal welfare research using economics techniques. Both authors of the paper discussed are economists, not biologists.

Were the Great Tragedies of History “Mere Ripples”?

Thanks for your comment, it makes a good point. My comment was hastily written, and I think the argument of mine you're referring to is weak, but not as weak as you suggest.

At some points the author is specifically critiquing longtermism the philosophy (not what actual longtermists think and do), eg. when talking about genocide. It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch were made clear.

There are many longtermists who don't hold these views (eg. Will MacAskill is literally about to publish the book on longtermism and doesn't think we're at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists).

I'm also not sure that lots of longtermists (even of the Bostrom/hinge-of-history type) would agree that the quoted claim accurately represents their views:

our current world is replete with suffering and death but will soon “be transformed into a perfect world of justice, peace, abundance, and mutual love.”

But I do agree that some longtermists do think:

  • there are likely to be very transformative events soon eg. within 50 years
  • in the long run, if they go well, these events will massively improve the human condition 

And there are some criticisms you can make of that kind of ideology that are similar to the criticisms the author makes.

Ecosystems vs Projects in EA Movement Building

From the 'Things CEA is not doing' forum post (https://forum.effectivealtruism.org/posts/72Ba7FfGju5PdbP8d/things-cea-is-not-doing):

We are not actively focusing on:

...

  • Cause-specific work (such as community building specifically for effective animal advocacy, AI safety, biosecurity, etc.)

Were the Great Tragedies of History “Mere Ripples”?

I don’t have time to write a detailed and well-argued response, sorry. Here are some very rough, quick thoughts on why I downvoted. Happy to expand on any points and have a discussion.

In general, I think criticisms of longtermism from people who 'get' longtermism are incredibly valuable to longtermists.

One reason is that if the criticisms carry entirely, you'll save them from basically wasting their careers. Another reason is that you can point out weaknesses in longtermism, or in their application of longtermism, that they wouldn't have spotted themselves. And a third reason is that in the worlds where longtermism is true, this helps longtermists work out better ways to frame the ideas so as not to put off potential sympathisers.

Clarity

In general, I found it hard to work out the actual arguments of the book and how they interfaced with the case for longtermism. 

Sometimes I found that there were claims being implied that were not made explicit. So please point out any incorrect inferences I’ve made below!

I was unsure what was being critiqued: longtermism, Bostrom’s views, utilitarianism, consequentialism, or something else. 

The thesis of the book (for people reading this comment, and to check my understanding)

“Longtermism is a radical ideology that could have disastrous consequences if the wrong people—powerful politicians or even lone actors—were to take its central claims seriously.”

“As outlined in the scholarly literature, it has all the ideological ingredients needed to justify a genocidal catastrophe.”

Utilitarianism (Edit: I think Tyle has added a better reading of this section below)

  • This section seems to caution against naive utilitarianism, which seems to form a large fraction of the criticism of longtermism. I felt a bit like this section was throwing intuitions at me, and I just disagreed with the intuitions being thrown at me. Also, doing longtermism better obviously means better accounting for all the effects of our actions, which naturally pushes away from naive utilitarianism
  • In particular, there seems to be a sense of derision at any philosophy where ‘the end justifies the means’. I didn't really feel like this was argued for (please correct me if I'm wrong!)
  • I don’t know whether that meant the book was arguing against consequentialism in general, or arguing that longtermism overweights consequences in the longterm future compared to other consequences but that it is right to focus on consequences generally
  • I would have preferred if these parts of the book were clear about exactly what the argument was
  • I would have preferred if these parts of the book did less intuition-fighting (there’s a word for this but I can’t remember it)

Millennialism

  • “A movement is millennialist if it holds that our current world is replete with suffering and death but will soon “be transformed into a perfect world of justice, peace, abundance, and mutual love.” (p. 24 of the book)
  • Longtermism does not say our current world is replete with suffering and death
  • Longtermism does not say the world will be transformed soon
  • Longtermism does not say that if the world is transformed it will be into a world of justice, peace, abundance, and mutual love.
  • Therefore, longtermism does not meet the stated definition of a millennialist movement
  • Granted, there are probably longtermists who do hold these views, but these views are not longtermism. I don’t know whether Bostrom (whose views seem to be the focus of the book) holds these views. Even if he does, these views are not longtermism

Mere Ripples

  • Some things are bigger than other things
  • That doesn’t mean that the smaller things aren’t bad or good or important; they are just smaller than the bigger things
  • If you can make a good big thing happen or make a good small thing happen, you can make more good by making the big thing happen
  • That doesn't mean the small thing is not important, but it is smaller than the big thing
  • I feel confused

White Supremacy

  • The book quotes this section from Beckstead’s Thesis:

Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.

The book goes on to say:

In a phrase, they support white supremacist ideology. To be clear, I am using this term in a technical scholarly sense. It denotes actions or policies that reinforce “racial subordination and maintaining a normalized White privilege.” As the legal scholar Frances Lee Ansley wrote in 1997, the concept encompasses “a political, economic and cultural system in which whites overwhelmingly control power and material resources,” in which “conscious and unconscious ideas of white superiority and entitlement are widespread, and relations of white dominance and non-white subordination are daily reenacted across a broad array of institutions and social settings.”

On this definition, the claims of Mogensen and Beckstead are clearly white supremacist: African nations, for example, are poorer than Sweden, so according to the reasoning above we should transfer resources from the former to the latter. You can fill in the blanks. Furthermore, since these claims derive from the central tenets of Bostromian longtermism itself, the very same accusation applies to longtermism as well. Once again, our top four global priorities, according to Bostrom, must be to reduce existential risk, with the fifth being to minimize “astronomical waste” by colonizing space as soon as possible. Since poor people are the least well-positioned to achieve these aims, it makes perfect sense that longtermists should ignore them. Hence, the more longtermists there are, the worse we might expect the plight of the poor to become.

  • I'm pretty sure the book isn't using 'white supremacist' in the normal sense of the phrase. For that reason, I'm confused about this, and would appreciate answers to these questions:
    • The Beckstead quote ends ‘other things being equal’. Doesn't that imply that the claim is not 'overall, it's better to save lives in rich countries than poor countries' but 'here is an argument that pushes in favour of saving lives in rich countries over poor countries'?
    • Imagine longtermism did imply helping rich people instead of helping poor people, and that that made it white supremacist. Does that mean that anything that helps rich people is white supremacist (because the resources could have been used to help poor people)?
      • What if the poor people are white and the rich people are not white?
      • Why do rich-nation government health services not meet this definition of white supremacy?
  • I'd also have preferred if it were clear how this version of white supremacy interfaces with the normal usage of the phrase

Genocide (Edit: I think Tyle and Lowry have added good explanations of this below)

  • The book argues that a longtermist would support a huge nuclear attack to destroy everyone in Germany if there was a less than one-in-a-million chance of someone in Germany building a nuclear weapon. (Ch.5)
  • The book says that maybe a longtermist could avoid saying that they would do this if they thought that the nuclear attack would decrease existential risk
  • The book says that this does not avoid the issue, though, and implies that because the longtermist would even consider this action, longtermism is dangerous (please correct me if I’m misreading this)
  • It seems to me that this argument is basically saying: because a consequentialist weighs up the consequences of each potential action against other potential actions, they at least consider many actions, some of which would be terrible (at least from a common-sense perspective); therefore, consequentialism is dangerous. I think I must be misunderstanding this argument, as it seems obviously wrong as stated here. I would have preferred if the argument here were clearer