Miranda_Zhang's Shortform


Just finished Semple (2021), 'Good Enough for Government Work? Life-Evaluation and Public Policy,' which I found fascinating for its synthesis of philosophy + economics + public policy, and its potential relevance to EA (in particular, improving institutional decision-making).

The premise of the paper is essentially, "Normative policy analysis—ascertaining what government should do—is not just a philosophical exercise. It is (or should be) an essential task for people working in government, as well as people outside government who care about what government does. Life-evaluationist welfare-consequentialism is a practical and workable approach." 

Some things that are potentially EA-relevant:

  • It gives a brief policy analysis using a prioritarian welfare-consequentialist lens
  • It mentions unborn people, foreign residents, and animals as worthy of government + moral concern under welfare-consequentialism
  • It avoids having to define welfare (and, implicitly, addresses the limitation of QALYs re: the difficulty of comparing one's current and alternate lives)
  • The inclusion of preferences reminds me of negative ideal preference utilitarianism

I'm (still!!!) thinking about my BA thesis research question and I think my main uncertainty/decision point is what specific policy debate to investigate. I've narrowed it down to two so far - hopefully I don't expand - and really welcome thoughts.

Context: I am examining the relationship between narratives deployed by experts on Twitter and the Biden Administration's policymaking process re: COVID-19 vaccine diplomacy. Specifically, I want to examine a debate on an issue wherein EA-aligned experts have generally coalesced around one stance.

Motivating questions/insights:

  1. COVID-19 policymakers solicited help from experts
    1. However, the U.S. public's trust in experts has varied. It may have peaked last year and now be declining
  2. Vaccine diplomacy (along with all health policy) is not solely an issue of 'following the science'
    1. This is not to say that data or rationality is unimportant. In fact, I would be extremely interested in investigating whether the combination of scientific evidence + thematic framing is more effective than either alone.
      1. However, that would require an experimental study, which is not something I am interested in conducting.
    2. This suggests I might want to investigate the presence of scientific vs. thematic elements in expert narratives. Not sure, though... it's not what I'm immediately drawn to.
  3. Evidence/science alone is insufficient. Experts need to be able to tell stories/persuade/make a moral or emotional appeal. (Extrapolated from the claim that narratives can be influential in policymaking)
    1. At the very least, experts should make clear that no decision is value-neutral and specify which values they are prioritizing in their recommendations
    2. Now that I think about it, the fact that I'm 'not sure' about this re: COVID-19 might mean this would make for a good RQ? Or maybe I'm just not thinking of the relevant literature right now.

The two debates I'm considering are below, with general thoughts:

  1. The COVID-19 TRIPS Waiver (waiving IP)
    1. What most excites me about this: The Biden Admin did a strong 'about-face' on this, and the discourse around it was very rich (it involved many actors with strong opinions and entwined with debates around vaccine sharing, etc.).
    2. Main hesitation: I don't know how to think about experts as an actor here. Should they be considered a coalition, per the Advocacy Coalition Framework? Or should I look at a specific set of aligned expert organizations/individuals? Or should I look at all experts on Twitter?
      1. But ACF emphasizes long-term policymaking and shared beliefs - and it seems like there was no singular expert consensus on whether the TRIPS waiver would be a net good. Now that I think about it, this might be due to a lack of transparency over what is being [morally] prioritized...
      2. But why focus on aligned orgs/individuals? How can I justify that? How generalizable is that even?
      3. But if I include all experts, including experts who might have other avenues to policy influence (e.g. big think tanks or former officials), then why not also examine non-expert narratives?
        1. Specifically, the rationale behind examining Twitter is that it provides a highly accessible advocacy platform to people who do not otherwise have much visibility/leverage
        2. Also, looking at a wide range of Tweets helps get a sense of the general narrative
  2. Delaying child vaccinations (per the WHO's recommendation)
    1. What most excites me about this: There is an explicit non-epistemic debate here (prioritizing children domestically vs the global poor), and that is what I care the most about. There still remains a scientific/epistemic component, too: "Are children safe without vaccines?"
      1. Additionally, there is a further controversial non-epistemic element: anti-maskers
    2. Main hesitation: But the Biden administration hasn't really 'made a policy' on this. So what policy process would I be examining?
      1. This also straddles the line between domestic and international, in that the debate is primarily about picking between the two (in contrast to the first debate), which could be tricky

*edited for clarity - was in a rush when I posted!

These both seem like great options! Of the two, I think the first has more to play with: the second has a pretty clear delineation between its epistemic and moral elements, whereas debates about the first have those all jumbled up, so it's more interesting/valuable to untangle them. I don't totally understand your hesitation, so I'm afraid I can't offer much insight there, but with respect to long-term policymaking/shared beliefs, it does seem like the fault lines mapped onto fairly clear pro-free-market vs. pro-redistributive ideologies that drew the types of advocates one would have predicted given that divide.

*edit 3: After reading more on Epistemic Communities, I think I'm back where I started.
*edit 4: I am questioning, now, whether I need a framework of how experts influence policymaking at all ... Maybe I should conceptualize my actors more broadly but narrow the topic to, say, the use of evidence in narratives?

I really appreciate your response, Ian! I think it makes sense that the more convoluted status of the first debate would make it a more valuable question to investigate.

My hesitation was not worded accessibly or clearly - it was too grounded in the specific frameworks I'm struggling to apply - so let me reword: it doesn't seem accurate to claim that there was one expert consensus (i.e. primarily pro- or anti-waiver). Given that, I am not sure a) how to break down the category of 'expert' - although you provide one suggestion, which is helpful - and b) how strongly I can justify focusing on experts, given that there isn't a clear divide between "what experts think" and "what non-experts think."

Non-TL;DR:

My main concern with investigating the debate around the TRIPS waiver is that there doesn't seem to be a clear expert consensus. I'm not even sure there's a clear EA-aligned consensus, although the few EAs I saw speak on this (e.g. Rob Wiblin) seemed to favor donating over waiving IP (which seems like a common argument from Europe). Given that, I question

  1. the validity of investigating 'expert narratives' because 'experts' didn't really agree there
    1. However, I don't know if it would be in/valid (per the theories I want to draw from, e.g. Advocacy Coalition Framework (ACF) or Epistemic Communities), so that would be one of my next steps.
      1. This particular description worries me: "Advocacy coalitions are all those defined by political actors who share certain ideas and who coordinate among themselves in a functional way to suggest specific issues to the government and influence in the decision-making process."
      2. This would be subverted by your suggestion, though, as I note in point 3!
  2. the validity of investigating expert narratives specifically instead of the general public—if experts didn't coalesce around a specific stance, what's my justification for investigating them specifically instead of getting a sense of the public generally? ACF explicitly notes that "common belief systems bind members of a coalition together." Given that the pro-/anti-waiver coalitions are defined by common beliefs held by both experts and non-experts (e.g. pro-free-market), how can I justify exclusively focusing on experts?
    1. This is probably not a valid concern, now that I think about it. After all, my thesis hinges upon the idea that experts help inform policymakers + policymaking, so it makes sense to focus on their narratives rather than looking at the public as a whole...
    2. However, it seems like focusing exclusively on two expert groups is valid at least within the Epistemic Community framework, so perhaps this would work if it turns out that certain kinds of experts advocated for the same stance.
  3. whom I should focus on—without being able to lump all experts together, how should I break them down?
    1. Perhaps I could subdivide experts into coalitions - e.g. experts for the waiver and experts against the waiver? (This is akin to the fault lines you mention)
      1. I still feel kind of iffy about investigating experts specifically here, instead of the general public, particularly because I could use the same coalitional divide (pro-/anti-waiver)
    2. Or should I focus on EA-aligned experts specifically?
      1. But I don't know how to justify this... It doesn't seem like the smartest research practice

Suggestion: use an expert lens, but make the division you're looking at [experts connected to/with influence in the Biden administration] vs. ["outside" experts].

Rationale: The Biden administration thinks of and presents itself to the public as technocratic and guided by science, but as with any administration, politics and access play a role as well. As you noted, the Biden administration did a clear about-face on this despite the lack of a clear consensus from experts in the public sphere. So why did that happen, and what role did expert influence play in driving it? Put another way, which experts was the administration listening to, and what does that suggest for how experts might be able to make change during the Biden administration's tenure?

Hmm! Yes, that's interesting - and aligns with the fact that many different policy influencers weighed in, ranging from former to current policymakers. Thank you very much for this!

I think something I'm worried about is how I can conceptualize [inside experts] vs. [outside experts] ... It seems like a potentially arbitrary divide and/or a very complex undertaking given the lack of transparency into the policy process (i.e. who actually wields influence and access to Biden and Katherine Tai, on this specific issue?).

It also complicates the investigation by adding in the element of access as a factor, rather than purely thinking about narrative strategies - and I very much want to focus on narratives. On one hand, I think that could be interesting - e.g. looking at narrative strategies across levels of access. On the other, I'm uncertain that looking at narrative strategies would add much compared to just analyzing the stances of actors within the sphere of influence.

What do you think of this alternate RQ: "How did pro/anti-waiver coalitions use evidence in their narratives?"

Moves away from the focus on experts but still gets to the scientific/epistemic component.

(I'm also wondering whether I am being overly concerned with theoretically justifying things!)

"(I'm also wondering whether I am being overly concerned with theoretically justifying things!)"

I think I would agree with this. It seems like you're trying to demonstrate your knowledge of a particular framework or set of frameworks through this exercise, and you're letting that constrain your choices a lot. Maybe that's a good choice if you're definitely going into academia as a political scientist after this. Otherwise, I would structure the approach around how research happens most naturally in the real world: you have a research question that would have concrete practical value if answered, and you set out to answer it using whatever combination of theories and methods makes sense for the question.

Thanks! I'll take a break from thinking about the theory - ironically, I am fairly confident I don't want to go into academia.

Again, appreciate your thoughts on this. Hope I'll hear from you again if I post another Shortform about my thesis!

I know that carbon offsets (and effective climate giving) are a fairly common topic of discussion, but I've yet to see any thoughts on the newly-launched Climate Vault. It seems like a novel take on offsetting: your funds go to purchasing cap-and-trade permits which will then be sold to fund carbon dioxide removal (CDR).

I like it because a) it uses (and potentially improves upon) a flawed government program in a beneficial way, and b) it lets me fund both the limitation of carbon emissions and their removal, unlike other offsets, which only do the latter.

However, I recognize that I have a blind spot because I respect Michael Greenstone. Some doubts:

  • The CDR funding will be allocated through an RFP rather than by directly funding existing solutions (e.g. Climeworks), which lowers my confidence in their ability to reliably find and fund CDR equivalent to the value of the permits they hold. However, this is a pretty minor concern, in part because they shouldn't sell the permits until they find a solution they are confident in; even then, I worry they might pick something without a great track record.
  • Is it really most efficient to buy and then sell these permits, rather than simply investing the funds and later funding the most efficient CDR?
  • This is super new so there's basically no data or public vetting, afaik.

If anyone has thoughts, would appreciate them!

"How do you convert a permit into CO2 removal using CDR technologies without selling them back into the compliance market – in effect negating the offset?

We will sell the permits back into the market, but only when we’re ready to use the proceeds to fund carbon removal projects equivalent to the number of permits we’re selling, or more. So, in effect, the permits going back onto the market are negated by the tons of carbon we are paying to remove."

By the time credible CDR is cheap enough for this to work, the value of additional CDR tech support will be pretty low, because the learning curve will already have been brought down. Credible CDR now costs over USD 100/t, with most approaches over USD 600 (cf. Stripe Climate), while current carbon permit prices are around USD 20.

Am I missing something?

It seems like a good way to buy allowances, which - when the cap is fixed (also addressed in the FAQ, though not 100% convincingly) - is better than buying most offsets, but it seems unlikely to work in the way intended.
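
To make the price gap concrete, here's a minimal back-of-the-envelope sketch in Python, using only the rough figures quoted above (permits around USD 20/t, CDR at USD 100-600/t). These numbers are illustrative assumptions, not Climate Vault's actual figures:

```python
# Illustrative sketch of the Climate Vault break-even arithmetic.
# Prices are the rough figures from the comment above, not official numbers.

PERMIT_PRICE_USD = 20.0  # approximate cap-and-trade permit price; one permit ~ 1 t CO2

# Assumed CDR costs per ton removed (USD), per the comment above:
cdr_costs = {"cheapest credible CDR": 100.0, "most approaches": 600.0}

for label, cost_per_ton in cdr_costs.items():
    # Selling one permit raises ~PERMIT_PRICE_USD; dividing by the CDR cost
    # gives the tons of removal those proceeds can fund.
    tons_funded = PERMIT_PRICE_USD / cost_per_ton
    print(f"{label}: one permit's proceeds fund {tons_funded:.2f} t of removal")

# Output: 0.20 t and 0.03 t respectively -- well short of the ~1 t the permit
# represents, so the scheme only balances once CDR costs fall to permit prices.
```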

Hmm okay! Thanks so much for this. So I suppose the main uncertainties for me are

  • whether I trust that the cap will remain fixed
  • whether the cap-and-trade system is more effective than the offsets I was considering

Really appreciate you helping clarify this for me!

Two thoughts inspired by UChicago EA's discussion after Ben Todd's talk at EAG:

  1. I am aware that there have been some efforts targeted towards high schoolers (I believe Stanford EA ran a workshop/program). Has there been any HS outreach targeting debaters specifically, e.g. a large-scale debate tournament? I'm thinking of, say, introducing EA-relevant debate topics to a big tournament or group.
  2. Has there been any middle-school outreach?

On #1: There has been a large-scale EA-themed debate tournament targeting debaters (mainly undergraduates, I believe) organized by Dan Lahav from EA Israel, talked about here!

Very useful, thank you! Apparently they did a trial with high schoolers, so I've reached out : )

At work so have no mental space to read this carefully right now, but wonder if anyone has thoughts - specifically about whether there's any EA-relevant content: MIT Predicted in 1972 That Society Will Collapse This Century. New Research Shows We’re on Schedule. (vice.com)

These models predicted growth followed by collapse. The first part has been borne out, but there is little evidence for the second. Treating past observations of growth as evidence of future collapse seems like an unusual example of Goodman's New Riddle of Induction in the wild.

Thank you, so helpful!

To clarify - does "little evidence" mean that you consider observations of current conditions aligning with model predictions (e.g. "Previous studies that attempted to do this found that the model's worst-case scenarios accurately reflected real-world developments") to be weak evidence?

Would it be useful to compile EA-relevant press?

Inspired by my seeing this Vice article on wet-bulb conditions (a seemingly unlikely route for climate change to become an existential risk): Scientists Studying Temperature at Which Humans Spontaneously Die With Increasing Urgency

If so, what/how? I don't think full-time monitoring makes sense (first rule of comms: do everything with a full comms strategy in mind!) but I wonder if a list or Airtable would still be useful for organizations to pull from or something...

I think David Nash does something similar with his EA Updates (here is the most recent one). While most of the links are focused on EA Forum and posts by EA/EA-adj orgs, he features occasional links from other venues.

Good flag, thanks!

My hope is that people who see EA-relevant press will post it here (even in Shortform!). 

I also track a lot of blogs for the EA Newsletter and scan Twitter for any mention of effective altruism, which means I catch a lot of the most directly relevant media. But EA's domain is the entire world, so no one person will catch everything important. That's what the Forum is for :-)

I'm not sure whether you're picturing a project specific to stories about EA or one that covers many other topics. In the case of the former, others at CEA and I know about nearly everything (though we don't have it in a database; no one ever asks). In the case of the latter, the "database" in question would probably just be... Google? I'm having trouble picturing the scenario where an org needs to pull from a list of articles they wouldn't find otherwise. (But I'm open to being convinced!)