
Lowry

29 karmaJoined Feb 2021

Bio

Natural lurker, reluctant contributor and looking to overcome that. Real name is 'Sam', but I was too shy to give my first name when creating the account (and 'Lowry' isn't my second name!) Currently working in a very non-EA field but looking to 'break in'. I'm just happy to be here really :)

Comments
5

I had left this for a day and had just come back to write a response to this post but fortunately you've made a number of the points I was planning on making.

I think it's really good to see criticism of core EA principles on here, but I did feel that a number of the criticisms might have benefited from being fleshed out more fully.

OP made it clear that he doesn't agree with a number of Nick Bostrom's opinions, but I wasn't entirely clear (I only read it the once and quite quickly, so it may be the case that I missed this) where precisely the main disagreement lay. I wasn't sure whether OP was disagreeing with:

  1. That there was a theoretical case to be made for orienting our actions with a view to the long term future/placing a high value on future human potential
  2. High-profile longtermists' subsequent inferences based on longtermist values and/or the likelihoods they assign to achieving 'high human flourishing'/transhumanist outcomes (ie. we should place a much lower probability on realising these high-utility futures and therefore many longtermist arguments are weakened)
  3. The idea that longtermism can work as a practical guide in reality (ie. that longtermism may correctly identify the 'best' actions to take but due to misinterpretation and 'slippery slope' factors it acts as an information hazard and should therefore be avoided)

Re your response to the 'Genocide' section Alex: I think Phil's argument was that longtermist/transhumanist potential leads to a Pascal's mugging in this situation, where very low probabilities of existential catastrophe can be weighted as so undesirable that they justify extraordinary behaviour (in this case, killing large numbers of individuals in order to reduce existential risk by a very small amount). This doesn't seem to me an entirely ridiculous point, but I believe it paints a slightly absurd picture in which longtermists do not see the value in international laws/human rights and would be happy to support their violation in aid of very small reductions in existential risk.

In the same way that consequentialists see the value in having a legal system based on generalised common laws, I think very few longtermists would argue for a wholesale abandonment of human rights.

As a separate point: I do think the use of 'white supremacist' is misleading, and is probably more likely to alienate than clarify. I think it could risk becoming a focus and detracting from some of the more substantial points being raised in the book.

I thought the book was an interesting critique though and forced me to clarify my thinking on a number of points. Would be interested to hear further.
 

Mm, I can certainly see the temptation to lean towards 'nuclear weapons likely don't actually work as deterrents' if one didn't have a strong conviction in the other direction. 

I was under the impression that Beatrice seemed to be tentatively arguing that maintaining any sort of nuclear weapons capability would make an individual nation less safe from attack, but looking at the transcript again I think there is some potential ambiguity that means I could be misrepresenting her position.

Would be very interested to hear a more fleshed-out argument though.

I've had the book on my to-read list for ages, but it's got so much company (including 'The Strategy of Conflict', after reading this) that its odds aren't looking good!

I found your first criticism especially interesting:

At times, Kaplan seems to sort-of dismiss the whole idea that nuclear weapons could have any value (from a country’s perspective) via helping to deter an adversary from taking a disliked action

I listened to a Future of Life Institute podcast with Beatrice Fihn (director of the International Campaign to Abolish Nuclear Weapons) recently, where she seemed to be making a similar point: that individual countries might not be made any safer by maintaining a nuclear arsenal, because it unwittingly turns them into a target for other countries.

It's a theory that seems fairly unintuitive to me - would North Korea's leadership really have had such stability without being able to leverage a nuclear threat?

I'd love to know what motivated Kaplan's argument (assuming you were understanding him correctly). It would be lovely to find out that disarmament wasn't actually as much of a coordination problem as we thought it was!

Hi everybody! A slippery slope from 80,000 hours podcasts has led me to this lovely community. Probably like a lot of people here I've been EA-sympathetic for a long time before realising that the EA community was a thing. 

I'm not in a very 'EA-adjacent' job (if that's the term!) at the moment and am starting to think about areas in which I might enjoy working, where I would value the work being done and feel that I was really contributing value myself. 

Very excited to start my journey of engaging more directly with all of you and the discussions being had here :)