smclare

Applied Researcher at Founders Pledge

Comments

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

Yeah, I don't blame Linch for passing on this question since I think the answer is basically "We don't know and it seems really hard to find out."

That said, it seems that forecasting research has legitimately helped us get better at sussing out nonsense and improving predictions about geopolitical events. Maybe it can improve our epistemics around existential risks too. Given that there don't seem to be many other promising candidates in this space, more work to gauge the feasibility of long-term forecasting and to test different techniques for improving it seems like it would be valuable.

Informational Lobbying: Theory and Effectiveness

Thanks for this! Something that came to my mind as I was reading this was that it might be time for an update of CEA's list of good policy ideas that won't happen (yet).

You wrote that "It seems like, given an already-existing basket of policies we'd be interested in advocating for, we can make lobbying more cost-effective just by allocating more resources to (e.g.) issues that are less salient to the public." This made me think it might be useful be to make a list of EA-relevant policy ideas and start organizing them into a Charity Entrepreneurship-style spreadsheet. Something I'll keep musing on!

I'm also curious about what motivated you to take on this project, and what you're planning to work on next?

Informational Lobbying: Theory and Effectiveness

Wow, this is really fantastic work! Thank you for the effort you put into this. Overall I think this paints a more optimistic picture of lobbying than I would have expected, which I find encouraging.

To follow up on a couple specific points:

(1) Just in terms of my own project planning, do you have an estimate of how long you spent on this? If you had another 40 hours, what uncertainties would you seek to reduce?

(2) Your discussion of Baumgartner et al. (2009) is super interesting. You write "Policy change happens over a long time frame." I wonder if you could expand on this briefly. Do you mean that it takes a lot of lobbying over years before a policy change happens, or do you mean that meaningful policy change happens through incremental policy changes over time?

(3) Your finding that lobbying which protects the status quo is much more likely to be effective seems particularly actionable. I mean, once put into words it seems obvious, but it's a point I hadn't thought about before. I notice, though, that your list of ideas seems to consist of positive changes rather than status quo protection. I wonder if it would be worth brainstorming a list of good status quo issues that might be under threat. Protecting these would be less exciting than big changes but, for exactly the reasons you outline here, more likely to work!

(4) I'm interested in thinking a bit more about uncertainty around policy implementation. This is something that we're currently grappling with in our models of policy change where I work (Founders Pledge). On the one hand, the Tullock Paradox suggests that we should expect lobbying to be extremely difficult (otherwise everyone would do a lot more of it). On the other hand, we've noticed that very good policy advocates seem to quite regularly effect meaningful policy changes (for example, it seems like the Clean Air Task Force regularly succeeds in their work).

In your model you write that "the change in probability of policy implementation lies with 95% confidence between 0 and 5%, and is distributed normally." I'm not sure about this, but I imagine the distribution of "chance of affecting policy success" over all the possible policies we could work on is much flatter than this. Or perhaps it's bimodal: there are some issues on which it is near impossible to make progress and some issues where we could definitely get policies implemented if we spent a certain amount of money in the right way.

Perhaps we want to start with a low prior chance of policy success, and then update way up or down based on which policy we're working on. Do you think we'd be able to identify highly-likely policies in practice?
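To make the bimodal-versus-normal contrast concrete, here's a minimal sketch in Python. The mixture weights and component shapes are made-up assumptions purely for illustration, not estimates of the real distribution; the normal prior is just the one implied by the quoted 95% interval.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Normal prior implied by "95% CI between 0 and 5%": mean 2.5%,
# sd ~1.28% (the interval spans roughly +/- 1.96 standard deviations).
normal_draws = rng.normal(loc=0.025, scale=0.025 / 1.96, size=n)

# Hypothetical bimodal alternative: most policies are near-impossible to
# move, a minority are fairly tractable. The 80/20 split and the component
# shapes are illustrative assumptions, not estimates.
hard = rng.beta(1, 200, size=n)      # clustered near a ~0.5% chance of success
tractable = rng.beta(5, 20, size=n)  # clustered around a ~20% chance of success
bimodal_draws = np.where(rng.random(n) < 0.8, hard, tractable)

for name, draws in [("normal", normal_draws), ("bimodal", bimodal_draws)]:
    print(f"{name}: mean={draws.mean():.3f}, "
          f"5th pct={np.percentile(draws, 5):.3f}, "
          f"95th pct={np.percentile(draws, 95):.3f}")
```

If something like the mixture were closer to reality, cheap screening for tractability would be very valuable, since the tractable cluster sits far outside anything the narrow normal prior would allow.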

(5) I found this post super helpful, but overall I think I'm still quite puzzled by the Tullock Paradox. If anything I'm more confused now, given that this post made me update in favour of policy advocacy. I think perhaps something that's missing here is a discussion of incentives within the civil service or bureaucracy. A policy proposal like taking ICBMs off hair-trigger alert just seems so obvious, so good, and so easy that I think there must be some illegible institutional factors within the decision-making structure stopping it from happening. I don't blame you for excluding this issue considering the size of this post and the amount of research you've already done, but it seems worth flagging!

Thanks again for a great post! I'm really excited about more work in this vein.

What questions would you like to see forecasts on from the Metaculus community?

Some fun, useful questions with shorter time horizons could be stuff like:

  • Will GiveWell add a new Top Charity to its list in 2020 (i.e. a Top Charity they haven't previously recommended)?
  • How much money will the EA Funds grant in 2020? (total or broken down by Fund)
  • How many new charities will Charity Entrepreneurship launch in 2020?
  • How many members will Giving What We Can have at the end of 2020?
  • How many articles in [The Economist/The New York Times/...?] will include the phrase "effective altruism" in 2020?

Stuff on global development and global poverty could also be useful. I don't know if we have data to resolve them, but questions like:

  • What will the global poverty rate be in 2021, as reported by the World Bank?
  • How many malaria deaths will there be in 2021?
  • How many countries will grow their GDP by more than 5% in 2021?

How do i know a charity is actually effective

I'm slightly confused by the part where you say you're struggling to understand effectiveness on an "emotional" level. Are your doubts about the state of our knowledge about charity effectiveness, or are you struggling to feel an emotional connection to the work of the charities we've identified as highly effective?

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

Lots of EAs seem pretty excited about forecasting, and especially how it might be applied to help assess the value of existential risk projects. Do you think forecasting is underrated or overrated in the EA community?

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

Most of the forecasting work covered in Expert Political Judgment and Superforecasting concerned questions with time horizons of 1-6 months. It doesn't seem like we know much about the feasibility or usefulness of forecasting on longer timescales. Do you think longer-range forecasting, e.g. on timescales relevant to existential risk, is feasible? Do you think it's useful now, or do you think we need to do more research on how to make these forecasts first?

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

Good forecasts seem kind of like a public good to me: valuable to the world, but costly to produce and the forecaster doesn't benefit much personally. What motivates you to spend time forecasting?

Antibiotic resistance and meat: why we should be careful in assigning blame

Great post, thanks for this. I'll stop chucking in "antibiotic resistance" as a reason to reduce factory farming and focus on stronger reasons instead. I think a longer post on this topic would be useful.

On horizontal gene transfer, you write "This last mechanism could potentially be the most important one, but we do not know how common such transfer is or what share of the resistance burden for humans it causes." Without more information this is not particularly reassuring for me. Do we truly know nothing about how common or potentially important this is? I'd love to see you give a sense of your intuitions here, even if they're based on theorizing, speculating, or very weak evidence.

Million dollar donation: penny for your thoughts?

One thing to note about the bounds of the FP cost-effectiveness estimate is that they aren't equivalent to a 95% confidence interval. Instead, they were calculated by multiplying through the most extreme plausible values for each variable in our cost-effectiveness calculation. This means they correspond to an absolute, unimaginably bad worst-case scenario and an absolute, unfathomably good best-case scenario. We understand that this is far from ideal: first, cost-effectiveness estimates that span 6+ orders of magnitude aren't that helpful for cause prioritization; second, they probably overstate our actual uncertainty.
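To illustrate why multiplying through the extremes blows up the range, here's a minimal sketch in Python with entirely made-up inputs (the variable names, ranges, and fixed cost are hypothetical, not FP's actual model). It compares extreme-value bounds with a 95% interval from a simple Monte Carlo over the same ranges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cost-effectiveness model: impact = product of three uncertain
# factors divided by cost. The ranges below are made up for illustration only.
ranges = {
    "prob_policy_change": (0.001, 0.10),
    "effect_if_implemented": (1e3, 1e6),   # e.g. DALYs averted
    "attribution_to_funder": (0.01, 0.5),
}
cost = 1e6  # dollars, assumed fixed

# "Worst case / best case" bounds: multiply through the extremes of every range.
low = np.prod([lo for lo, hi in ranges.values()]) / cost
high = np.prod([hi for lo, hi in ranges.values()]) / cost
print(f"extreme-value bounds: {low:.2e} to {high:.2e}  "
      f"({np.log10(high / low):.1f} orders of magnitude)")

# Monte Carlo: sample each factor independently (log-uniform over its range)
# and look at the central 95% of the resulting distribution.
n = 100_000
samples = np.ones(n)
for lo, hi in ranges.values():
    samples *= np.exp(rng.uniform(np.log(lo), np.log(hi), size=n))
samples /= cost
p2_5, p97_5 = np.percentile(samples, [2.5, 97.5])
print(f"Monte Carlo 95% interval: {p2_5:.2e} to {p97_5:.2e}  "
      f"({np.log10(p97_5 / p2_5):.1f} orders of magnitude)")
```

The extreme-value bounds require every variable to land at its worst (or best) value simultaneously, whereas the Monte Carlo interval excludes those joint tails, which is why the former overstate the uncertainty we actually have.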

On TaRL specifically, the effects seem really good. Most of the uncertainty seems to lie in whether we can get governments to implement TaRL effectively.
