Answer by Alex319 · Apr 01, 2024
  1. You mentioned that one harm of insecticide-treated bed nets is that if people use them as fishing nets, that could harm fish stocks. You say that GiveWell didn't take that into account in its cost-effectiveness calculations. But according to e.g. https://blog.givewell.org/2015/02/05/putting-the-problem-of-bed-nets-used-for-fishing-in-perspective/, they did take it into account; they just concluded that the harm was very small compared to the benefits. Can you clarify what you meant when you said GiveWell didn't take that into account?
  2. If you're so concerned about harm to fish stocks, do you think it would make more sense to focus your efforts on supporting charities that address fish-related issues directly?
  3. GiveWell seems, by your admission, to spend a lot of time thinking about second-order effects and possible harms of their preferred charities' interventions, and your criticism seems to be that even the amount they do is not sufficient. Okay, that seems fair enough. Do you think there are any charities or philanthropic efforts that do pay sufficient attention to harms and second-order effects? Or do you think that all philanthropy is like this?
  4. In particular, you talk about your friend Aaron, whose intervention you seem to like. Do you think Aaron thought about the second-order effects and harms of what he was doing? Do you think he's come up with a way of helping others that has less risk of causing harm, and if so, is there a way to scale it up?
  5. If GiveWell were to take your advice and focus more on possible harms, is there a risk of overcorrecting and spending lots of time and resources studying harms that are too small or unlikely to be worth the effort? (Some people think this has already happened in other contexts - for example, some argue that excessive safety regulation has made nuclear power plants very expensive to build, even though nuclear power is actually safer than other forms of power.)

About this footnote:

============================

Carol Adams even informs us that:

Sebo and Singer flourish as academics in a white supremacist patriarchal society because others, including people of color and those who identify as women, are pushed down. (p. 135, emphasis added.)

Maybe treading on the oppressed is a crucial part of Singer’s daily writing routine, without which he would never have written a word? If there’s some other reason to believe this wild causal claim, we’re never told what it is.

=============================

Here's a potentially more charitable interpretation of this claim. Adams might not be claiming:

"Singer personally performs some act of oppression as part of his writing process."

Adams's causal model might be more like the following:

"Singer's ideas aren't unusually good; there are lots of other people, including people of color and those who identify as women, who have ideas that are as good or better. But those other people are being pushed down (by society in general, not by Singer personally) which leaves that position open for Singer. If people of color and those who identify as women weren't oppressed, then some of them would be able to outcompete Singer, leaving Singer to not flourish as much."

Of course, that depends on whether everyone else is also evacuating. For instance, do we expect that if a tactical nuke is used in Ukraine, a significant fraction of the US population will try to evacuate? As has been mentioned before, there was not a significant percentage of the US population trying to evacuate even during the Cuban Missile Crisis, and that was probably a much higher-risk and more salient situation than we face now.

One thing that would be really useful in terms of personal planning, and maybe would be a good idea to have a top level post on, is something like:

What is P(I survive | I am in location X when a nuclear war breaks out)

for different values of X such as:

(A) a big NATO city like NYC

(B) a small town in the USA away from any nuclear targets

(C) somewhere outside the US/NATO but still in the northern hemisphere, like Mexico. (I chose Mexico because that's probably the easiest non-NATO country for Americans to get to)

(D) somewhere like Argentina or Australia, the places listed as most likely to survive a nuclear winter in this article: https://www.nature.com/articles/s43016-022-00573-0

(E) New Zealand, which pretty much everyone says is the best place to go?

Probably E > D > C > B > A, but by how much?
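
To make that question concrete, here's a minimal sketch of how the comparison could be set up, in Python. Every number below is an invented placeholder (including P_WAR and all the conditional survival probabilities), not an estimate from any actual forecast:

```python
# Toy comparison of locations. All probabilities are invented
# placeholders, not estimates from any real forecast.

P_WAR = 0.01  # hypothetical probability of a large-scale nuclear war

# Hypothetical values of P(I survive | I'm in X when a nuclear war breaks out)
p_survive_given_war = {
    "A: big NATO city (e.g. NYC)": 0.05,
    "B: small US town": 0.30,
    "C: non-NATO, northern hemisphere (e.g. Mexico)": 0.50,
    "D: Argentina / Australia": 0.80,
    "E: New Zealand": 0.90,
}

for location, p in p_survive_given_war.items():
    # Unconditional survival: either a war happens and you survive it,
    # or no war happens at all.
    p_survive = P_WAR * p + (1 - P_WAR)
    print(f"{location}: P(survive) = {p_survive:.4f}")
```

The interesting part of the output is how little the unconditional numbers differ when P(war) is small - which is exactly why the conditional survival probabilities, and P(war) itself, are the quantities worth arguing about.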

As others have said, even (B) (with a suitcase full of food and water and a basement to hole up in) is probably enough to avoid getting blown up initially; the real question is what happens later. It could be that all the infrastructure gets destroyed, there's no more food, and everyone starves to death.

Of course, another thing to take into account is that if I go somewhere temporarily and a war does break out, I'll be stuck somewhere unfamiliar, where I may not speak the local language and where I am not a citizen. Whether that is likely to affect my future prospects is unclear.

If it turns out that we'll be fine as long as we can survive the bombs and the fallout, that's one thing. But if we'll just end up starving to death unless we're in the Southern Hemisphere, then that is another thing.

(Does the possibility of nuclear EMP (electromagnetic pulse) attacks need to be factored in? I've heard claims like 'one nuke detonated in the middle of the USA at the right altitude would destroy almost all electronics in the USA', and maybe nearby countries would also be within the radius. If true, it would likely happen in a nuclear war, and of course that would have drastic implications for survivability afterward. I don't know how reliable these claims are, though.)

Another important question is "how much warning will we have?" Even a day or two's worth of warning is enough to hop on the next flight south, but certainly there are some scenarios where we won't even have that much.

This was really helpful. I live in New York City and am also deciding when/whether to evacuate, so it was useful to see the thoughts of expert forecasters. I wouldn't consider myself an expert forecaster and don't think I have much knowledge of nuclear issues, so here are a couple of other thoughts and questions:

- I'm a little surprised that P(London being attacked | nuclear conflict) seemed so low, since I would have expected London to be one of the highest-priority targets. What informed that estimate, and would you expect somewhere like NYC to be higher or lower than London? (NYC does have a military base, Fort Hamilton (https://en.wikipedia.org/wiki/Fort_Hamilton), although I'm not sure how much that should update my probability.)

- It seems like a big contributor to the lower-than-expected risk is the fact that you could wait to evacuate if the situation looked like it was getting more serious - i.e. the "conditional on the above, informed/unbiased actors are not able to escape beforehand" factor (a toy version of this kind of decomposition is sketched after this list). I don't have a car, so I would have to get out on a bus or plane, which might take up to a day. I'm not sure how much that affects the calculation, since I don't know what time frame they were assuming - were they assuming you can just leave immediately whenever you want?

- It sounds like it does make sense to monitor the situation closely and be ready to evacuate on short notice if the risk of escalation increases (after all, that is what the calculation is based on). Does anyone have suggestions for what I should be following, or for the circumstances under which it would make sense to leave?

- Of course, another factor here is whether lots of other people would be trying to leave at the same time. That might make it harder to get out, especially if you depend on a bus, plane, Uber, etc.

- Another question is: where do you go? For instance, from NYC I could go to {a suburb of NY / upstate NY / somewhere even more remote in the US, like northern Maine / a non-NATO country}, each of which is more costly than the last but might offer more safety. Are there reliable sources on which places would be safest?
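
For concreteness, here's a toy version of the kind of conditional decomposition quoted above ("...informed/unbiased actors are not able to escape beforehand"), in Python. All four factor values are numbers I made up to show the shape of the calculation; they are not the forecasters' actual estimates:

```python
# Toy chain-of-conditionals risk estimate for staying in a big city.
# Every number is an invented placeholder, not a real forecast.
factors = {
    "P(nuclear conflict during the period)": 0.001,
    "P(my city is hit | nuclear conflict)": 0.3,
    "P(I can't escape beforehand | my city is hit)": 0.5,
    "P(I die | my city is hit and I didn't escape)": 0.5,
}

p_death = 1.0
for name, p in factors.items():
    p_death *= p

print(f"P(death from staying) ~ {p_death:.1e}")  # 7.5e-05 with these numbers
```

The bottom line is just the product of the factors, so halving any one factor (e.g. by being able to leave faster) halves the total risk - which is why the "how quickly can I actually get out" question matters so much.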

For what it’s worth, while Facebook’s Forecast was met with some amount of skepticism, I wouldn’t say it was “dismissed” out of hand.


To clarify, when I made the comment about it being "dismissed", I wasn't thinking so much about media coverage as about individual Facebook users seeing prediction-app suggestions in their feed. There are already a lot of unscientific, clickbait-y quizzes and games posted to Facebook, and I was concerned that users might lump this in with those if it were presented in a similar way.


Yeah, they certainly would be reluctant to do that. But given that they already do fact-checking, it doesn’t seem impossible. 

I agree, and I definitely admit that the existence of the Facebook Forecast app is evidence against my view. I was more focused on the idea that if the recommender algorithm were based on prediction scores, Facebook's choice of which questions to use would affect recommendations across the whole platform.

I'm not an expert on social media or journalism, so these are fairly low-confidence thoughts. This seems like a really interesting idea, but it seems very odd to think of it as a feature of Facebook (or another social media platform):

  • Facebook and social media in general don't really have an intellectual "brand". If you did this as a Facebook feature, it would likely get dismissed as "just another silly Facebook game." And if most of the people using it weren't putting much effort in, the predictions likely wouldn't be very accurate, which could undermine the effort to convince the public of its value.
  • The part about promoting people with high prediction scores seems awkward. Am I understanding correctly that each user gets one prediction score that applies to all their content? That would mean that if someone is bad (or good) at predicting COVID case counts, anything else they post gets down- (or up-) weighted, even if it has nothing to do with COVID - which is likely to be perceived as very unfair. Or do you have some system for deciding which forecasting questions count toward the recommender score for which pieces of content? Even then it seems weird: if someone made bad predictions about COVID in the past, that doesn't necessarily imply that content they post now is bad.
  • Presumably the purpose of this is to teach people how to be better forecasters. If you have to hide other people's forecasts to prevent abuse, then how are you supposed to learn by watching other forecasters? Maybe the idea is that Facebook would produce content designed to teach forecasting - but that isn't the kind of content that Facebook normally produces, and I'm not sure why we would expect Facebook to be particularly good at that.
  • All the comparisons between forecasting and traditional fact-checking are weird because they address different issues; forecasting doesn't seem to be a replacement or alternative for fact-checking. For instance, how would forecasting have helped fight election misinformation? If you had a bunch of prediction questions about things like vote counts or the outcomes of court cases, everything would already be over by the time those questions resolved. (That's not a problem with forecasting, since it's not intended for those kinds of cases. But it does mean that it would not be possible to pitch this as an alternative to traditional fact-checking.)
  • In general, this seems to require a lot of editorial judgment on Facebook's part about which forecasting questions to use and with what resolution criteria. (This would be an especially big issue if you used a user's general forecasting score in the recommender algorithm - for instance, if Facebook included lots of forecasting questions about economic data, that would advantage content posted by people who are interested in economics, while if the questions were about scientific discoveries instead, it would advantage content posted by people who are interested in science.) My guess is that this sort of editorial role is not something social media platforms would be particularly enthusiastic about - they were sort of forced into it by the misinformation problem, and even there they mostly defer to reputable sources to adjudicate claims. While they could defer to reputable sources to resolve questions, I'm not sure who they would defer to to decide which questions to set up. (I'm assuming here that the platform is the one setting up the questions - is that the case?)
  • Another way to game the system that you didn't mention here: set up a bunch of accounts, make different predictions on each of them, abandon all the ones that got low scores, and post what you want from the account that got a high score. (The toy simulation below shows how effective this could be.)
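
A quick toy simulation suggests how effective that exploit could be. The account and question counts here are arbitrary; the point is just that the best of many randomly-guessing accounts looks skilled:

```python
# Toy simulation of the multi-account exploit: many accounts guess at
# random on yes/no questions; keep whichever account scores best.
import random

random.seed(0)
N_ACCOUNTS = 50   # arbitrary number of sock-puppet accounts
N_QUESTIONS = 20  # arbitrary number of binary forecasting questions

best = max(
    sum(random.random() < 0.5 for _ in range(N_QUESTIONS))  # lucky correct calls
    for _ in range(N_ACCOUNTS)
)
print(f"Best of {N_ACCOUNTS} random accounts: {best}/{N_QUESTIONS} correct")
# Typically 14-15 out of 20 - which could pass for genuine forecasting skill.
```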


I wonder if it might make more sense to think of this as a feature on a website like FiveThirtyEight that already has an audience that's interested in probabilistic predictions and models. You could have a regular feature similar to The Riddler but for forecasting questions - each column could have several questions, you could have readers write in to make forecasts and explain their reasoning, and then publish the reasoning of the people who ended up most accurate, along with commentary.

You mention that:

Neither we nor they had any way of forecasting or quantifying the possible impact of [Extinction Rebellion]

and go on to present this as an example of the type of intervention that EA is likely to miss due to lack of quantifiability.

One thing that would help us understand your point is an answer to the following question:

If it's really not possible to make any kind of forecast about the impact of grassroots activism (or whatever intervention you prefer), then on what basis do you claim that supporting grassroots activism would be impactful? And how would you have any idea which groups or which forms of activism to fund, if there's no way of forecasting which ones will work?

I think the inferential gap here is that (we think) you are advocating for an alternative way of justifying [the claim that a given intervention is impactful] other than the traditional "scientific" and "objective" tools (e.g. cost-benefit analysis, RCTs), but we're not really sure what you think that alternative justification would look like or why it would push you towards grassroots activism.

I suspect that you might be using words like "scientific", "objective", and "rational" in a narrower sense than EAs think of them. For instance, EAs don't believe that "rationality" means "don't accept any idea that is not backed by clear scientific evidence," because we're aware that often the evidence is incomplete, but we have to make a decision anyway. What a "rational" person would say in that situation is something more like "think about what we would expect to see in a world where the idea is true compared to what we would expect to see if it were false, see which is closer to what we do see, and possibly also look at how similar things have turned out in the past."
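
As a toy illustration of that style of reasoning, here's a minimal Bayesian update in Python. All three input numbers are invented for illustration; they aren't anyone's actual credences:

```python
# Toy Bayesian update: comparing how expected the evidence is under
# "idea true" vs. "idea false". All numbers are invented.
prior = 0.3           # initial credence that the idea is true
p_obs_if_true = 0.8   # chance of seeing this evidence if the idea is true
p_obs_if_false = 0.2  # chance of seeing it if the idea is false

# Bayes' rule: P(true | evidence) = P(evidence | true) * P(true) / P(evidence)
p_obs = p_obs_if_true * prior + p_obs_if_false * (1 - prior)
posterior = p_obs_if_true * prior / p_obs
print(f"posterior = {posterior:.2f}")  # 0.63: the evidence shifts the question without settling it
```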

A more charitable interpretation of the author's point might be something like the following:

(1) Since EAs look at quantitative factors like the expected number of lives saved by an intervention, they need to be able to quantify their uncertainty.

(2) It is harder to quantify the results of interventions that target large, interconnected systems than of interventions that target individuals. For instance, consider health-improving interventions. The intervention "give medication X to people who have condition Y" is easy to test with an RCT. However, the intervention "change the culture to make outdoor exercise seem more attractive" is much harder to test: it's harder to confine cultural change to a particular area (and thus harder to do a well-controlled study), and the causal pathways are much more complex (it's not just that people get more exercise; it might also encourage changes in land-use patterns, which would affect traffic and pollution, etc.), so it would be harder to identify what was due to the change.

(3) Thus, EA approaches that focus on quantifying uncertainty are likely to miss interventions targeted at systems. Since most of our biggest problems are caused by large systems, EA will miss the highest-impact interventions.

As for the question of "what do the authors consider to be root causes," here's my reading of the article. Consider the case of factory farming. Probably all of us agree that the following are all necessary causes:

(1) There's lots of demand for meat.

(2) Factory farming is currently the technology that can produce meat most efficiently and cost-effectively.

(3) Producers of meat just care about production efficiency and cost-effectiveness, not animal suffering.

I suspect you and other EAs focus on item (2) when you talk about "root causes." In that case, you are correct that creating cheap plant-based meat alternatives would address (2). However, I suspect the authors of this article think of (3) as the root cause. They likely think that if meat producers cared more about animal suffering, they would stop factory farming or invest in alternatives on their own, and philanthropists wouldn't need to fund those alternatives. They write:

if all investment was directed in a responsible way towards plant-based alternatives, and towards safe AI, would we need philanthropy at all

Furthermore, they think that because the cause of (3) is a focus on cost-effectiveness (in the sense of minimizing cost per pound of meat produced), a focus on cost-effectiveness in philanthropy (in the sense of minimizing cost per life saved, or whatever) promotes more of that kind of thinking, which makes (3) worse. And they think lots of problems have something like (3) as a root cause. This is what they mean when they talk about "values of the old system" in this quote:

By asking these questions, EA seems to unquestioningly replicate the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites.

As for the other quote you pulled out:

[W]ealthy EA donors [do] not [go] through a (potentially painful) personal development process to confront and come to terms with the origins of their wealth and privilege: the racial, class, and gender biases that are at the root of a productive system that has provided them with financial wealth, and their (often inadvertent) role in maintaining such systems of exploitation and oppression.

and the following discussion:

To be more concrete, I suspect what they're talking about is something like the following. Consider a potential philanthropist like Jeff Bezos; the authors likely believe that Amazon has harmed the world through its business practices. Say Jeff Bezos wanted to spend $10 billion of his wealth on philanthropy. There might be two ways of doing that:

(1) Donate $10 billion to worthy causes.

(2) Change Amazon's business practices such that he makes $10 billion less money, but Amazon has a more positive (or less negative) impact on the world.

My reading is that the authors believe (2) would be of higher value, but Bezos (and others like him) would be biased toward (1) for self-serving reasons: Bezos would get more direct credit for doing (1) than (2), and Bezos would be biased toward underestimating how bad Amazon's business practices are for the world.

---

Overall, though, I agree with you that if my interpretation accurately describes the authors' viewpoint, the article does not do a good job arguing for it. But I'm not really sure about the relevance of your statement:

My impression is there's a worldview difference between people who think it's possible in principle to make decisions under uncertainty, and people who think it's not. I don't have much to say in defense of the former position except to vaguely gesture in the direction of Phil Tetlock and the proven track record of some people's ability to forecast uncertain outcomes.

Do you think that the article reflects a viewpoint that it's not possible to make decisions under uncertainty? I didn't get that from the article; one of their main points is that it's important to try things even if success is uncertain.
