

One consideration worth examining is the historically complex dynamic between MSF and organizations such as GAVI. A revealing example was when MSF launched an advocacy campaign that prompted a critical response from Bill Gates, who as a major GAVI funder voiced concerns about MSF’s approach:

> "I think there is an organisation that's wonderful in every other respect, but every time we raise money to save poor children's lives, they put out a press release that says the price of these things should be zero." He said that criticising pharmaceutical company pricing deterred companies from investing in medicines for the developing world, and that they should instead be praised for price discrimination: "We get a great price for these things, which is tiered pricing... And that's how we manage to cut childhood death in half." [1]

For anyone curious about this debate, I'd recommend reading the full article.

  1. ^

    Cited from Wikipedia: https://en.wikipedia.org/wiki/GAVI

    Which in turn cites: https://www.theguardian.com/global-development/2015/jan/27/bill-gates-dismisses-criticism-of-high-prices-for-vaccines

We have strong evidence that this has played out in previous decisions to develop lifesaving products (see Advanced market commitments). At the very least, the burden of proof is on this campaign to show that it is not exacerbating this stressor.

Bill Gates made this exact point in a previous debate with MSF in 2015:

> This general thing where organisations come out and say, 'hey, why don't vaccines cost zero?' – all that does is that you have some pharma companies that choose never to do medicines for poor countries, because they know that this always just becomes a source of criticism. So they don't do any R&D [research and development] on any product that would help poor countries. Then they're not criticised at all, because they don't have anything that these people are saying they should price at zero.[1]

Edit: Added second paragraph

  1. ^


> I guess the question is how much of the research was subsidised.

Isn't the question: "How will this campaign affect the likelihood of future innovations?" I don't see an immediate connection between that question and how much of the machine's R&D was subsidised.

> I'm somewhat skeptical of the value of option value, because it assumes that humans will do the right thing.

I'd argue there's a much lower bar for an option value preference. To have a strong preference for option value, you need only assume that you're not the most informed, most capable person to make that decision. 

As an intuition pump: this is (in my opinion) a good reason why doctors recommend that young people wait before getting a vasectomy. The person can use other forms of contraception while handing the decision to someone who might be better informed (i.e. themselves in ten years' time).

Because of physical ageing, we don't often encounter option value in our personal lives; the choices available to us are usually close to equal on option value. But that isn't the case when we are making decisions with implications for the long-term future.

> But I do think that the likelihood and scale of certain "lock-in" net negative futures could potentially make working on s-risk directly or indirectly more impactful.

To what extent do you think approaches like AI alignment will protect against s-risks? Or, phrased another way: how often will an unaligned superintelligence result in an s-risk scenario?


I want to try to explore some of the assumptions underlying your world model. Why do you think that the world, in our current moment, contains more suffering than pleasure? What forces do you think produced this equilibrium?

I gave this a read through, and then asked Claude to summarise. I'm curious how accurate you find the following summary:

  1. Wild animal suffering is a more pressing cause area than existential risk prevention according to total utilitarianism and longtermism. This is a central thesis being argued.
  2. Humans will likely be vastly outnumbered by wild animals, even if humans spread to space. This is based on the vast number of wildlife on Earth currently and the potential for them to also spread to space.
  3. Any human or animal suffering before an existential catastrophe would be negligible compared to the potential future suffering prevented by reducing existential risk. This is a core longtermist assumption.
  4. It may be possible to make "permanent" changes to wild animal welfare by developing interventions and spreading concern for wild animal suffering now. The author argues this could influence how animals are treated in the far future.
  5. Both x-risk prevention and wild animal welfare are highly neglected areas compared to their importance. This informs the tractability estimates.
  6. Personal fit should be a major factor in cause prioritization given the high uncertainty in these long-term forecasts. This is why the author recommends staying in your cause area if you already have a good fit.
  7. The future could plausibly be net negative in expectation according to the author's assumptions. This makes reducing existential risk potentially net negative.
  8. The spread of wildlife to space is likely and difficult to reverse, making wild animal welfare interventions potentially high impact. This informs the tractability of permanently improving wild animal welfare.
  9. Various assumptions about the probabilities and potential scales of different existential and suffering risks, which inform the expected value calculations.

To broadly summarise my thoughts here:

● I agree with others that it's really great to be thinking through your own models of cause prio
● I am sceptical of s-risk related arguments that point towards x-risk mitigation being negative. I believe the dominant consideration is option value, which could be defined as "keeping your options open in case something better comes up later". Option value is pretty robustly good if you're unsure about a lot of things (like whether the long-term future is filled with good or bad), and I suspect x-risk mitigation preserves option value particularly well. For that reason I prefer the phrase "preventing lock-in" to "preventing x-risk", since it keeps our focus on option value.
● Why should somebody suffering-focused want to preserve option value? Even if you assume the long-term future is filled with suffering, you should factor in uncertainty both inside and outside your model – which should lead you to want to extend the amount of time and the number of people working on the problem, and the options they have to choose from.

Also, I agree with @Isaac Dunn that it's really great you're thinking through your own cause prioritisation. 

Said very quickly: I will defer to the EA Forum team on this. If anybody here was asked to comment by Nonlinear, please let the forum team know so they can make decisions and set norms around this. You can send a message to Lizka (I suggest Lizka because she's the most widely recognised/trusted, but you can also contact other members of the moderation team).

I am not sure whether you're arguing (1) that this is not brigading, or (2) that even if it is brigading, brigading is not detrimental. I'll go through both.

(1) There's been limited discussion on the EA Forum about the concept of brigading, mostly focused on "vote brigading". But if we look at Reddit, a site with far more experience of brigading being used to distort discussion, they consider leaving comments to be brigading:

> A term that originated on Reddit, Brigading is when a group of users, generally outsiders to the targeted subreddit, "invade" a specific subreddit and flood it with downvotes in order to damage karma dynamics on the targeted sub; **spam the sub with posts and comments to further their own agenda**; or perform other coordinated abusive behaviour such as insulting or harassing the subreddit's users in order to troll, manipulate, or interfere with the targeted community.

(Bold is mine.) I suspect most people with Reddit mod or admin experience would consider what happened here to be brigading, because: A) you're over-representing a certain opinion – one might use the ratio of comments to quickly judge who is right or wrong, and comment brigading distorts this metric; B) you're increasing the flow of team-A people onto a team-B post, leading to distorted voting and distorted replies (and, as discussed, distorted comments).

You discuss whether it's acceptable for different actors to engage in coordinated comment posting. I'd argue, and would wager the EA Forum team agrees, that it's pretty much always unacceptable to engage in concealed, coordinated forum engagement.

> Would anyone accuse me of brigading if I theoretically knew other people who had negative experiences with Nonlinear and asked them to chime in?

Yes. Please disclose if you ever do anything like this. It's absolutely brigading. 

(2) I have a few key areas of disagreement with this angle.

A) The positive comments being left are largely irrelevant. If the claim is "Kat encouraged me to drive without a license", then no amount of "I have had great experiences with Kat at EAGs" is relevant.

B) Early on, these comments had the potential to set the trajectory of the discussion. If your model of the forum is one where everybody shares perspectives unbiasedly and confidently, this might surprise you. But most users try to "read the room" before commenting, and are less likely to comment if they're saying something controversial.

C) Most users try to determine what's true and what's false by reading the overall valence of the comments. I agree this is not a great way of determining what is and isn't true, but it's a reality. Manipulating the ratio of positively and negatively valenced comments is unhelpful for this reason.

Just to finish on a question: if Kat had asked N people to chime in, at what point would you consider it excessive? I assume we'd agree that asking 400 people would be excessive. But what is the minimum number at which you'd feel it becomes excessive?

On December 15, your screenshots seem to illustrate that you were not able to provide her hot (vegan) food, despite >2 hours of text messaging with both Kat and Drew.

Just to identify a crux: do you think it's acceptable that someone in your duty of care goes without hot food for a day while they are sick?

I've confirmed with a commenter here, who left a comment positive about Nonlinear, that they were asked to leave that comment by Nonlinear. I think this is low-integrity behaviour on Nonlinear's part, and an example of brigading. I would appreciate the forum team looking into this.

Edit: I have been asked to clarify that they were encouraged to comment by Nonlinear, rather than asked to comment positively (or anything in particular).

Considering these accusations (in some form or another) have been out for over a year, and Nonlinear has continued to be well respected by the community, I worry that further "deadline pushing" only serves to launder Nonlinear's reputation. I am suspicious of the idea that many of those who write of the need to "hear both sides" will actually update if Nonlinear's response is uncompelling.
