Matt Goodman

609 · Joined Jan 2020


Your comment made me realise I'm actually talking about two different things:

  • When you can choose to end the pain at any point, e.g. exercise, or the hand-in-cold-water experiment.
  • When you can't choose to end the pain, but you know with some degree of certainty that it will end soon, e.g. "medics will be here with morphine in 10 minutes", or "we can see the head, the baby's almost out".

I agree with you that having some kind of peer pressure or social credit for 'doing well' can help a person withstand pain. I'd imagine this has an effect on the hand-in-cold-water experiment, if you're doing it on your own vs as part of a trial with onlookers.

Sorry, I got your name wrong in my reply (changed now)! I'm going to look into my question further, and read some of what you linked to. That's as a result of this post :)

I went through these experiences voluntarily and with the knowledge that I have the freedom to stop whenever I want. People suffering from painful disease, children dying of hunger, chickens being electrocuted to death, fish being asphyxiated to death - for these individuals, such experiences are a horrific reality, not an experiment.

I think this is a very important distinction that should be given more emphasis. When I've experienced severe pain, the no.1 thought in my mind was "oh god, make it stop". This makes complete sense if you think of pain as your body's way of saying, "ok, whatever it is you're doing, you need to stop doing it now." And I think a lot of the psychological suffering I experienced was due to the stress of not being able to stop the thing that was causing pain, and not knowing how long the pain would go on for. I add the word 'psychological' for clarity here, but in reality I don't think there's a clear difference between 'psychological' and 'physical' sources of pain. All pain is in a sense psychological - all of it happens 'in your mind', and factors such as knowing the pain will end soon can have a big effect on the experience of pain.

This distinction could also have a big effect on how people rate their pain on the pain-track framework. The framework seems to define pain largely in terms of 'how long could a person endure this?', and that answer probably varies a lot depending on whether or not you know the pain will go away soon. 'Disabling' pain could literally be less disabling if you know it's going to end soon. You might think something like, "ok, I know this will end in 5 minutes, so for now I'm going to do this other job to distract myself". And looking back at the experience, and your behaviour at the time, you might read the scale and think "ok, it wasn't that disabling, I could still do stuff".

Hey Ren, this is a great post!

I share your intuition that reducing extreme suffering is the no.1 moral imperative for humankind.

What charities do you recommend, if that's what you value most? GiveWell recommends charities based on its own moral weights, which I don't think weight reducing extreme suffering as highly as I do.

Then there are many animal welfare charities. And there's OPIS, which is the only charity I know of that explicitly targets extreme human suffering. Are there any others I'm missing?

My guess is that it wouldn't change much

Maybe not for most people reading the EA Forum. I think if you take a serious look at the issues of animal suffering and farmed animal conditions, you'll probably arrive at a number similar to existing statistics on the number of factory-farmed animals.

But I think there are plenty of people with motivated reasoning to doubt those statistics, or to minimise the badness/factory-ness of a farm or farming practice. For example, my extended family run a dairy farm. I remember, when first reading about factory farms, thinking 'well, the family farm isn't like these factory farms... right?'

I also think it's possible animal agriculturists will seize on uncertainty around the term 'Factory Farm' to sow confusion and whitewash animal welfare issues. Suppose that in the future, the concept of 'Factory Farms' gains widespread public vilification, in the same way that 'Fossil Fuels' does now. Now imagine a pan-European animal agriculture lobby group seizes on the looseness of the term 'Factory Farm' to ensure European farms aren't associated with it:

"European farms aren't Factory Farms! We have better animal welfare standards here. There are cage-free policies here! Animal welfare laws! Standards and checks! It's only farms outside of Europe that are factory farms; those are the ones that should be counted in the statistics, not European farms!"

I don't see this as an "economic or moral incentive to sit on the borderline", but rather as 'if forced to adhere to higher welfare standards, there's an incentive to maximise the reputational gain from this'.

edit: added last paragraph

Why aren't we protesting AI acceleration in the street?

I'm not super up to date with the latest EA thinking on current AI capabilities. The takes I read on social media from Yudkowsky and the like are something along the lines of 'We're at a really dangerous time; various companies are engaged in an arms race to make more and more powerful AIs with little regard for safety, and this will directly lead to humanity being wiped out by AGI in the near future'. For people who really believe this to be true (especially if you live in San Francisco) - why aren't you protesting in the street?

Some reasons this might work:

  • There are lots of precedents of public pressure leading to laws being passed or procedures being changed that have increased safety standards across many industries
  • The companies working on AI alignment are based in San Francisco. There's a big EA and rationalist community in SF. Protests could happen outside the HQ of AI companies.
  • Stories about silicon valley tech companies get lots of press coverage in mainstream media
  • There's a prevailing anti-big-tech sentiment in parts of society that could be tapped into
  • Specifically, there are criticisms of the newest AIs for things like 'training AI models on artists' work, then putting artists out of a job' (DALL-E) or 'making it much easier to cheat at university' (ChatGPT). Whilst this isn't directly related to AGI safety, it's the kind of feeling that could be tapped into for the purpose of this protest
  • If an AI safety researcher could be interviewed on camera at the march, it would add credibility to the march, showing that experts are concerned
  • It adds credibility to the voices of experts warning about AI risk if they're so worried that they're willing to get out on the street to protest about it

I feel uncomfortable with this kind of public character judgement of an alleged victim, especially when it's presented without a source or evidence backing up the claim that she's 'hella scary'.

Maybe 'social-justice-caring left' is a better term.

I think using the term 'woke left' will be counter-productive to your aim of reaching out to politically left people. While 'woke' started as a term used by the left, I now see it being used almost exclusively by the right as a pejorative term for the left, and most politically left people I know would be annoyed at being called 'woke'.

What would that add? I think that would add speculation on top of what is already speculation, and I think only the passing of time would be able to give feedback on whether the predictions turn out to be true.

I guess it could give more information if you sought out different people for the meta-predictions than those who made the original predictions. But then I'm not sure why you wouldn't just have these new people answer the original prediction questions directly.
