Sindy_Li

146 karmaJoined Dec 2016

I appreciate you writing up these comments! There are some great suggestions here as well as things I disagree with. As the author of the "extremely positive" post let me share some thoughts. (I'm by no means an expert on this so feel free to tell me I'm wrong.)

1. Quantitative cost-effectiveness analysis

Summary of my view: I'm pretty torn on this one, but I think we may not want to require a quantitative CEA for charities working on policy change (although I'd definitely encourage the GG team to try the exercise).

On one hand, I think it's great to at least attempt one, to develop a better understanding of one's causal model and the sources of uncertainty, and to get a ballpark estimate if possible (though sometimes the range is too wide to be useful). On the other hand, requiring quantitative cost-effectiveness estimates can restrict the type of charities one can evaluate. I took a brief look at Founders Pledge's model of the Clean Air Task Force, which seems to be a combination of 1) their track record, 2) their plan, and 3) subjective judgements. While the model seems reasonable (I haven't looked deeply enough to tell how much I agree), I do think requiring such a model would preclude evaluating orgs like the Sunrise Movement -- or, if we take your concerns about them seriously (which I'll address below), orgs like that, of which there are many in the climate space: those with a more complex theory of change than, say, CATF, where any model would involve inputs that are mostly extremely subjective (compared to the CATF one), which makes it less meaningful. Perhaps you would say these orgs are precisely the ones not worth recommending -- on this I agree with Giving Green that we should hedge our bets among different theories of change and hence look at different types of orgs (even though, as I'll elaborate later, I agree with recommending CATF more strongly, I think potentially recommending orgs more similar to TSM is valuable).

So I think it is definitely good to attempt a quantitative CEA, and I highly encourage the GG team to do so, even for an org like TSM. (I would have liked to engage with Founders Pledge's models more but didn't end up doing it -- that would be a nice exercise.) But I'm unsure about requiring one for a recommendation, especially in a space with so much uncertainty. (I was trying to look up other EA charity recommenders and saw that Animal Charity Evaluators also doesn't seem to have a quantitative CEA for all of their top charities -- I haven't checked every one, but here's an example without one. Not saying this is a sufficient argument though.)
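On the "range too wide to be useful" point, here's a minimal Monte Carlo sketch of why that happens for policy-change orgs. All inputs are made-up illustrative numbers (not from any actual Giving Green or Founders Pledge model): the donation shifts a policy's chance of passing by some highly subjective amount, and the policy, if passed, averts some highly uncertain number of tons of CO2.

```python
import random

random.seed(0)

def sample_cost_per_ton():
    # Hypothetical policy-change CEA with subjective inputs modeled as
    # log-uniform over wide ranges (illustrative only).
    budget = 1_000_000  # dollars donated
    delta_prob = 10 ** random.uniform(-5, -2)    # change in P(policy passes)
    tons_if_passed = 10 ** random.uniform(6, 9)  # tons CO2 averted if passed
    expected_tons = delta_prob * tons_if_passed
    return budget / expected_tons                # dollars per ton CO2 averted

samples = sorted(sample_cost_per_ton() for _ in range(10_000))
p5, p50, p95 = samples[500], samples[5000], samples[9500]
print(f"cost per ton: 5th pct ${p5:.2f}, median ${p50:.2f}, 95th pct ${p95:.2f}")
```

With inputs this subjective, the 5th-95th percentile range spans several orders of magnitude, which is the sense in which a point estimate from such a model carries little information on its own.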

I have to say I'm pretty uncertain about how much to use quantitative CEA and I am happy to be convinced that I'm wrong.

I do agree Giving Green should communicate their recommendations with less confidence than, say, GiveWell, which explicitly recommends charities that are amenable to being evaluated with higher-quality evidence (e.g. RCTs) and hence have lower uncertainty.

2. Offsets

1) Offsets vs policy change

My read is that GG recommends offsets because they see a huge market especially among companies that want to purchase offsets, and it's hard to convince them to instead donate the money to the maximally impactful thing. However, I agree that they should communicate this more clearly: that for more "flexible" donors they strongly recommend policy change over offsets.

2) Cost-effectiveness of offsets

I agree it would be good to come up with cost-effectiveness estimates for offsets, even though they will also be pretty uncertain (probably somewhere between the uncertainty of GiveWell's current top charities and that of climate orgs working on policy change). In addition to telling people to buy offsets with real additionality, it's probably also good to put a proper price tag on things, especially if they differ a lot.
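One simple way to put a "proper price tag" on an offset is to divide the sticker price by the probability that the reduction is actually additional. The sketch below uses entirely made-up project names and numbers, just to show how much the ranking can change once additionality is priced in:

```python
# Hypothetical offset projects (all numbers invented for illustration):
# name -> (sticker price in $/ton, estimated probability of additionality)
projects = {
    "forestry_offset": (5.0, 0.20),
    "direct_air_capture": (100.0, 0.95),
}

# Expected dollars per ton of CO2 actually averted:
effective_cost = {
    name: price / p_additional
    for name, (price, p_additional) in projects.items()
}

for name, cost in effective_cost.items():
    print(f"{name}: effective ${cost:.2f} per ton averted")
```

Under these invented numbers the cheap-looking offset is still cheaper per ton averted, but the gap shrinks from 20x to about 4x -- the point being that sticker prices alone can be quite misleading.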

3. The Sunrise Movement (TSM)

Summary of my view: I'm more positive than the author on the impact they achieved (and perhaps their impact potential), and less negative on the potential for negative impact, although I'm really unsure about it as I'm far from an expert. I do agree that GG should recommend CATF more highly than TSM.

Impact they achieved: The fact that Biden and some other Democrats adopted climate change plans similar to what's proposed by TSM (see the "Policy consensus and promotion" section of GG's page on TSM) is some evidence of their influence, although of course we can't be sure. (This article argues it was valuable for groups on the left to have a more unified framework for addressing climate change, and it seems like TSM is one of the multiple groups that had an influence in the process.)

Potential for negative impact:

  • In terms of actual policies: I mostly trust Biden and overall the Democratic members of Congress (rather than the most "progressive" ones) to go for policies that will be less polarizing than the most radical proposals, and I'm not too worried about TSM pressuring them into doing things they don't think are good ideas.
  • In terms of public opinion: will TSM make climate change a more polarizing issue than it already was? On one hand, we do see a majority of Americans being concerned about climate change; on the other hand, the extreme level of polarization (even in the absence of TSM) already shapes people's views on many things. So I'm not sure.
  • (I think my arguments are pretty weak here though because I don't understand the US political system very well.)

Why GG should recommend CATF more highly: 

  • Outside view perspective: even if the expected values of the two orgs look the same we should account for the fact that CATF has much more of a track record.
  • Inside view perspective: under the Biden administration it seems like CATF has a very clear vision of what they can do (see here); for TSM it's less clear -- even if they achieved some impact before the election in getting candidates to take climate change more seriously and adopt a more unified platform, it's less clear how they will influence policy now. If I were choosing between the two at this moment it's definitely CATF.
  • (Right now they sort of do this: labeling CATF a "good bet" and TSM "shows promise", although we probably want something clearer than those labels, and apparently the team did not mean to recommend CATF more highly.)

Thanks for checking -- I think we can leave it here in case someone else missed it :)

Thanks Linch! It's in the last bullet point of the beginning "notes" section and also mentioned in the body of the doc.

Interesting. If most voters are in favor of cutting aid, AND this is clear to the MPs, then why would MPs have an incentive to vote against cutting aid?

  • One reason I can think of is if there is a well-organized interest group that, even though small in size, tries very hard to influence the MPs, leading them to help this group rather than the general population. (This seems to be the case in some areas in US policy.) In this case, you may want to create the impression of having a well-organized interest group -- which seems hard, but I wonder what strategies could help.
  • Another is that some MPs are personally against cutting aid and are willing to vote against it -- even though their voters favor cutting aid, voters don't care about this issue passionately and won't punish the MP much if they vote against. In this case, I wonder what strategies can persuade them.

Sorry if I come off as skeptical. I'm just thinking that working through the theory of change, the incentives, and the psychology of the MPs might help you refine your strategy -- but no need to spend much time replying if you don't find this useful.

Relatedly, I wonder if the emails are still a bit boilerplate -- after seeing a few, maybe the MP can tell how they were generated? I imagine there are people who know

  • generally what works best in influencing lawmakers / lobbying
  • specifically what works well in the UK

so would be curious what strategies they would propose.

(I wonder if something like doing an opinion poll of voters and presenting that info would help, but not sure how practical that is. Perhaps you could partner with someone already doing a poll / a major website or newspaper.)

Hey, I'm working on some research on the most impactful areas within ML-aided drug and vaccine discovery. I can share that with you once I'm done.

Thanks for sharing your experience! I'll share mine. I attended the workshop in July 2019 in California.

Like you, I also came in with the hope of becoming a hyper efficient rationality machine, overcoming problems like procrastination that I struggled with all my life. I was hoping to be taught how to use my System 2 to fight my lazy, uncooperative System 1 that always stood in the way of achieving my goals.

My biggest surprise was that the workshop was much more about understanding, working with, and leveraging your System 1. I was unconvinced and confused for quite a while, but more recently I finally realized that my existing approach of constantly forcing myself to do things I wasn't intrinsically excited about was not going to end well -- it had already been causing a significant amount of unhappiness (which did not set me up for a sustainable, successful and impactful career path) before I even noticed.

There were a number of major positive changes in my life over the past year or more, and it's hard to say exactly what role the CFAR workshop played, but I think it definitely played some role. For one thing, it made me aware for the first time of the possibility of working with, rather than against, my System 1. Even though I wasn't convinced for quite a while, it triggered discussions and reflections that eventually led to very productive rethinking and ultimately a different outlook. (E.g. I realized I had been constantly battling myself to get things done because I was on a career path that wasn't right for me. I'm not totally sure how my new career path will pan out, but I think I'm in a much better position to notice what I like and don't like, and to switch gears accordingly.)

So if you are like me back then -- wanting to be super efficient and impactful, but struggling with procrastination and other barriers, and hoping to become a rationality machine with no System 1 to get in the way -- you should consider attending a CFAR workshop :) . It will not give you what you wanted, but there's a good chance it will change your life in a positive way.

(I'd say overall it was a moderate positive effect, which was in line with what my friend told me the workshop did to them before I went. They also said one of the best things that came out of the workshop was that it caused them to get a therapist which turned out to be pretty useful. I also got a therapist after the workshop (prompted by factors other than the workshop) and I'd highly recommend considering therapy (and/or coaching) if you are struggling with issues in life you don't know how to solve, even if you consider yourself generally "mentally healthy".)

Thank you for your post! I am an IDinsight researcher who was heavily involved in this project and I will share some of my perspectives (if I'm misrepresenting GiveWell,  feel free to let me know!):

  • My understanding is GiveWell wanted multiple perspectives to inform their moral weights, including a utilitarian perspective of respecting beneficiaries'/recipients' preferences, as well as others (examples here). Even though beneficiary preferences may not be the only factor, it is an important one, and one where empirical evidence was lacking before the study, which is why GiveWell and IDinsight decided to do it.
    • Also, the overall approach is that, because it's unrealistic to understand every beneficiary's preferences and target aid at the personal level, GiveWell and we had to come up with aggregate numbers to be used across all GiveWell top charities. (In the future, there may be the possibility of breaking it down further, e.g. by geography, as new evidence emerges. Also, note that we focus on preferences over outcomes -- saving lives vs. increasing income -- rather than interventions, and I explain here why we and GiveWell think that's a better approach given our purposes.) 
  • My understanding is that ideally GiveWell would like to know children's preferences (e.g. value of statistical life) if that was valid (e.g. rational) and could be measured, but in practice it could not be done, so we tried to use other things as proxies for it, e.g.
    • Measuring "child VSL" as their parents/caretakers' willingness-to-pay (WTP)  to reduce the children's mortality (rather than own, which is the definition of standard VSL)
    • Taking adults' VSL and adjusting it by the relative values adults place on individuals of different ages (there were other approaches as well).
    • (Something else that one could do here is to estimate own VSL (WTP) to reduce own mortality as a function of age. We did not have enough sample to do this. If I remember correctly, studies that have looked at it had conflicting evidence on the relationship between VSL and age.)
    • Obviously none of these is perfect -- we have little idea how close our proxies are to the true object of interest, children's WTP to reduce their own mortality -- if that is a valid object at all, and it's unclear what to do if not (which gets into tricky philosophical issues). But both approaches we tried gave a higher value for children's lives than for adults' lives, so we concluded it would be reasonable to place a higher value on children's lives if donors'/GiveWell's moral weights are largely/solely influenced by beneficiaries. But you are right that the philosophical foundation isn't solid. (Within the scope of the project we had to optimize for informing practical decisions, and we are not professional philosophers, but I agree that more discussion of this by philosophers would be helpful.)
  • Finally, another tricky issue that came up was -- as you mentioned as well -- what to do with "extreme" preferences (e.g. always choosing to save lives). Two related questions that are more fundamental are
    • If we want to put some weight on beneficiaries' views, should we use "preferences" (in the sense of what they prefer to happen to themselves, e.g. VSL for self) OR "moral views" (what they think should happen to their community)? For instance, people seem to value lives a lot more highly in the latter case (although one nontrivial driver of the difference is that questions on moral views were framed without uncertainty -- a practicality we couldn't get around, as including uncertainty in an already complex hypothetical scenario trading off lives and cash transfers seemed extremely confusing to respondents).
    • In the case where you want to put some weight on their moral views (I don't think that would be consistent with utilitarianism -- I'm not sure what philosophical view it is, but I think it's certainly not unreasonable), what do you do if you disagree with their view? E.g. I probably wouldn't put weight on views that were sexist or racist; what about views that hold you should value saving lives over increasing income no matter the tradeoff?
    • I don't have a good answer, and I'm really curious to see philosophical arguments here. My guess is respecting recipient communities' moral views would be appealing to some in the development sector, and I'm wondering what should be done when that comes into conflict with other goals, e.g. maximizing their utility / satisfying their preferences.
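The second proxy above (scaling adults' VSL by relative age valuations) is just a ratio adjustment; here is the arithmetic as a sketch, with all numbers made up for illustration -- they are not the study's or GiveWell's actual estimates:

```python
# Illustrative arithmetic for the age-adjustment proxy (hypothetical numbers):
# take adults' own-VSL and scale it by the relative value respondents place
# on saving lives at different ages.
adult_vsl = 40_000.0  # hypothetical adult VSL, in dollars

relative_value = {    # hypothetical relative value placed on saving a life
    "under_5": 1.6,
    "adult": 1.0,
}

child_vsl_proxy = adult_vsl * relative_value["under_5"] / relative_value["adult"]
print(f"proxied child VSL: ${child_vsl_proxy:,.0f}")
```

Whenever the relative value placed on young children exceeds 1, this proxy yields a child VSL above the adult VSL, which is the qualitative pattern described above.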

Hi,

Staying in your current job for a bit to help your family (as well as build a bit of runway) makes a lot of sense.

Re future career paths:

  • If you are interested in getting into policy in your home country: I'm not sure which South Asian country you're from; if it's India, I've seen some IAS officers get degrees from top US policy schools. Having such talent join the civil service sounds like it could have a really positive impact, but I'm not sure whether working there would be frustrating. It's probably good to talk to people who have worked there.
    • Another idea is to join a non-profit that works in your home country. E.g. I work at IDinsight, and in our India office there are a few Indian nationals with degrees from top US policy schools. They work on engagements with governments, foundations and non-profits in India. Having local connections and context seems to really help with this type of work. Some other options include CHAI and Evidence Action. There are also a number of EA non-profits working in India, like Fortify Health and Suvita. (Probably more in the animal welfare space if you're interested.)
  • Doing tech work for socially impactful orgs could be a good path too.

Overall, as long as you are not sick of your current job, as it has a good work life balance it seems like a good place to be while you learn about different options (and gives you some financial security). So you're in a good place to explore!

Hey Johannes, I don't have ideas for a strictly speaking EA org, but here are some examples where chat bots have helped in public/social sector or humanitarian contexts -- perhaps they can give you some ideas on NGO partners who may benefit:

• DoNotPay, a "robot lawyer" app that uses NLP models to provide legal advice to users, has assisted people with asylum applications in the US and Canada
• HelloVote, which helps voters find voting information and sends reminders to vote
• UNICEF's U-Report collects opinions from marginalized communities around the world, which can inform decisions by governments and non-profits
• Raheem.ai allows people across the US to report on police conduct and partners with communities to use the data collected to hold police accountable
• GYANT, a chat bot that provides diagnosis and advice based on reported symptoms, including assessing the likelihood that someone has COVID-19 or Zika
• Praekelt, a South African non-profit, runs a few mobile health programs, including HealthConnect, which has provided information on COVID-19 (some NLP elements), and MomConnect, an SMS-based help desk providing health advice to new mothers (no NLP elements so far, but it is being explored; the languages are not English)
