
mal_graham🔸

Strategy Director @ Wild Animal Initiative
783 karma · Working (6–15 years) · Philadelphia, PA, USA

Comments (33)

Re: your footnote: I think this depends heavily on how severe we're talking. I don't have a strong opinion about how much more severe disease can be than something like keel bone fractures, because I don't think anyone has really looked at it. A priori, it doesn't seem unreasonable to assume that the artificial conditions of factory farming enable a chicken to live in pain much longer, and therefore suffer more overall, than we would ever see in the wild -- but I'm not that confident in that idea, so it would be good to look at more diseases. The point being that a severe enough disease could still be worth working on in DALYs/dollar terms even if it doesn't affect that many individuals, and that would also make it more ecologically inert in many cases (since changing the circumstances of very large numbers of animals seems riskier).
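
To make the DALYs/dollar point concrete, here's a minimal sketch of the kind of comparison I have in mind. All numbers are invented purely for illustration; they aren't estimates of any real disease:

```python
# Toy welfare-burden-per-dollar comparison: a severe, rare disease vs. a
# milder, widespread condition. All numbers are made up.
def dalys_per_dollar(severity, duration_years, individuals, cost_usd):
    """severity: disability-style weight on a 0-1 scale."""
    return severity * duration_years * individuals / cost_usd

# Severe disease affecting relatively few animals
rare_severe = dalys_per_dollar(severity=0.9, duration_years=1.0,
                               individuals=2e6, cost_usd=5e6)

# Milder condition affecting many more animals
common_mild = dalys_per_dollar(severity=0.2, duration_years=0.5,
                               individuals=50e6, cost_usd=20e6)

print(rare_severe, common_mild)  # 0.36 vs. 0.25 DALYs averted per dollar
```

The severe disease can win on DALYs/dollar despite affecting 25x fewer individuals, which is the sense in which severity can substitute for numerosity.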

WAI facilitated a grant from Coefficient (then OP) years ago to look at disease severity; a few papers came out of it recently, here and here. Perhaps unsurprisingly, but disappointingly, much of the research on disease in wildlife doesn't provide enough information to estimate the welfare burden well. But the high-scoring bacterial zoonoses in the first paper could be a good place to start a research project attempting to better assess their severity and numerosity compared to FAW conditions (as a cost-effectiveness bar).

I've been thinking about this approach since last year; I haven't had time to prioritize detailed work on a framework, but I have some initial thoughts. I think you're right that, if you're comfortable with that sort of cluelessness, this kind of thing is relatively safe to do (although, as @Michael St Jules 🔸 notes, I'd also want to do some proper population modeling in a highly studied ecosystem to get some grounding for the idea).

But I think you can actually do better than focusing only on the very worst diseases, depending on population parameters. For example, in populations that are top-down regulated (i.e., population size is held below the resource carrying capacity by an external factor like predation), you would not expect increases in starvation as a result of removing a disease (caveat: if that disease *is* the top-down regulator, then you have a problem -- which unfortunately is the case in many CWD contexts). So then the disease doesn't need to be worse than both starvation and predation, say, but just worse than predation. The population size would equilibrate somewhere a bit higher, but the top-down regulation creates a buffer between population size and resource carrying capacity, and at high enough predation pressures you might reasonably expect almost no population increase.
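
To illustrate that buffer intuition, here's a toy simulation -- my own sketch, not a calibrated model, with all parameters invented -- of a logistically growing population held below carrying capacity by density-independent predation and disease mortality:

```python
# Toy discrete-time model: logistic growth is density-dependent (so
# "starvation pressure" rises as N approaches K), while predation and
# disease act as density-independent mortality. All values are invented.
def equilibrium_pop(r=0.8, K=10_000, predation=0.5, disease=0.1,
                    N0=1_000, steps=500):
    N = N0
    for _ in range(steps):
        N = max(N + r * N * (1 - N / K) - (predation + disease) * N, 0.0)
    return N

with_disease = equilibrium_pop()                # ~2,500, i.e., N/K = 0.25
without_disease = equilibrium_pop(disease=0.0)  # ~3,750, i.e., N/K = 0.375
print(with_disease, without_disease)
```

In this simple linear-mortality toy, removing the disease raises the equilibrium by K·d/r, but both equilibria sit well below K, so density-dependent (starvation-like) mortality stays low. Capturing the "almost no population increase" case would require a predator whose per-capita kill rate rises with prey density (a numerical or functional response), which this sketch omits.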

So I think in an ideal case, you'd identify (1) a high-suffering disease that (2) affects a population primarily controlled by intense predation pressure from (3) a predator that mainly eats the target population (so that increases in predator numbers don't spill over onto other animals, who aren't having a disease treated and for whom this would just be an increase in suffering).

Of course, if a population is under high predation pressure, individuals probably die very quickly after getting the disease, so the suffering the disease causes might not last long. But if it's a really awful disease, that could still be a lot of suffering.

I don't think anyone's done a scan of the literature for diseases with these properties, and I doubt you'd easily find a perfect case -- most populations are a mixture of top-down and bottom-up regulation. But my few hours of playing around with these ideas on the side of my other work are unlikely to be the final word on the question :) so I'm optimistic that someone spending a lot more time on this could identify other "ecological profiles" of diseases that make them "safer," in indirect-effects terms, to work on than others. (There are things to say about bottom-up regulated populations as well, for example -- there you would probably want a disease that is much worse than starvation.)
 

@Eli Rose🔸  I think Anthony is referring to a call he and I had :)

@Anthony DiGiovanni I think I meant something more like: there was a justification of the basic intuition bracketing is trying to capture as similar to how someone might make decisions in their own life, where we may also be clueless about many of the effects of moving home or taking a new job, but still move forward. But I could be misremembering! Having just read your comment more carefully, I think you're right that this conversation is what I was thinking of.

This is not an unreasonable take, but just in the interest of an accurate public record: I'm actually the strategy director for WAI (although I was the executive director previously). Also, none of us at Arthropoda are technically animal welfare scientists; our training is all in different things (for example, my PhD is in engineering mechanics, and Bob is a philosopher who has published a lot of skeptical pieces on insects).

Basically, I think we came to Arthropoda because the work we did before changed our minds. More importantly, I don't think the majority of Arthropoda's work will be about checking for sentience. Rather, we're adopting a precautionary framework that treats insects as potentially sentient and asking how to improve their welfare if they are. In this context, our views on sentience seem less likely to create a COI -- although I also expect all our research to be publicly available for people to red-team as needed :)

Finally, I fully agree on the extreme personnel overlap. I would love not to be co-running a bug-granting charity as a volunteer on top of my two other jobs! But the resource constraints and unusualness of this space are unfortunately not conducive to finding many people willing to take on leadership roles.

All very interesting, and yes let's talk more later! 

One quick thing: sorry my comment was unclear -- when I said "precise probabilities," I meant the overall approach, which amounts to trying to quantify everything about an intervention when deciding its cost-effectiveness (perhaps the post was also unclear).

I think most people in EA/AW spaces use the general term "precise probabilities" the way you're describing, but perhaps there is, on average, a tendency toward the more scientific style of requiring more specific evidence for those numbers. That wasn't necessarily true of early actors in the WAW space, and I think that had some mildly unfortunate consequences.

But this makes me realize I shouldn't have named the approach that way in the original post; I should have called it something like the "quantify as much as possible" approach. I think that approach requires using precise probabilities -- since if you allow imprecise ones, you end up with a lot of things being indeterminate -- but there's more to it than just endorsing precise probabilities over imprecise ones (at least as I've seen it appear in WAW).
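
As a toy illustration of the indeterminacy point (all numbers invented): with an interval of credences rather than a point estimate, the sign of an intervention's expected value can flip across the interval, leaving the decision indeterminate.

```python
# With an imprecise credence interval [p_low, p_high] instead of a single
# precise probability, expected value is only bounded, and its sign can
# differ across the interval. Numbers are invented for illustration.
def expected_value(p_success, benefit, harm):
    return p_success * benefit - (1 - p_success) * harm

benefit, harm = 100.0, 40.0
p_low, p_high = 0.2, 0.6

ev_low = expected_value(p_low, benefit, harm)    # -12.0: looks net negative
ev_high = expected_value(p_high, benefit, harm)  # +44.0: looks net positive
print(ev_low, ev_high)  # sign flips across the interval -> indeterminate
```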

Thanks Eli!

I sort of wonder if some people in the AI community -- and maybe you, from what you've said here? -- are using precise probabilities to get to the conclusion that you want to work primarily on AI stuff, and then spotlighting that cause area when analyzing at the level of interventions.

I think someone using precise probabilities all the way down is building a lot more explicit models every time they consider a specific intervention. For example, if you're contemplating running a fellowship program for AI-interested people, and you have animals in your moral circle, you're going to have to build a BOTEC that includes the probability that X% of the people you bring into the fellowship won't care about animals and, if they get a policy role, are likely to pass policies that are really bad for them. And all sorts of things like that. So your output would be a bunch of hypotheses about exactly how these fellows are going to benefit AI policy, plus some precise probabilities about how those policy benefits are going to help people -- and possibly animals, and to what degree -- etc.

I sort of suspect that only a handful of people are trying to do this, and I get why! I made a reasonably straightforward BOTEC for calculating the benefits to birds of bird-safe glass, one that accounted for backfire risks to birds, and it took a lot of research effort. If you asked me how bird-safe glass policy is going to affect AI risk after all that, I might throw my computer at you. But I think the precise-probabilities approach would imply that I should.
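
For concreteness, here's a minimal sketch of what that style of BOTEC looks like, structure only -- every number below is invented, and this is not my actual bird-safe glass model:

```python
# Toy "precise probabilities" BOTEC for a bird-safe glass policy. Every
# input is a single invented point estimate, which is the style at issue.
p_policy_enacted = 0.3         # credence the policy passes
bird_deaths_averted = 200_000  # collision deaths averted per year if enacted
p_backfire = 0.05              # chance of a side effect that harms birds
backfire_deaths = 50_000       # bird deaths per year if it backfires
cost_usd = 3_000_000

expected_net_deaths_averted = p_policy_enacted * (
    bird_deaths_averted - p_backfire * backfire_deaths
)
print(expected_net_deaths_averted / cost_usd)  # ~0.02 birds saved per dollar
```

Taken all the way down, the approach would then demand analogous point-estimate terms for every other effect in your moral circle -- including, in principle, a line for how the policy affects AI risk.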

Re:

> It might be interesting to move out of high-level reason zone entirely and just look at the interventions, e.g. directly compare the robustness of installing bird-safe glass in a building vs. something like developing new technical techniques to help us avoid losing control of AIs.


I'm definitely interested in robustness comparisons but not always sure how they would work, especially given uncertainty about what robustness means. I suspect some of these things will hinge on how optimistic you are about the value of life. I think the animal community attracts a lot more folks who are skeptical about humans being good stewards of the world, and so are less convinced that a rogue AI would be worse in expectation (and even folks who are skeptical that extinction would be bad). So I worry AI folks would view "preserving the value of the future" as extremely obviously positive by default, and that (at least some) animal folks wouldn't, and that would end up being the crux about whether these interventions are in fact robust. But perhaps you could still have interesting discussions among folks who are aligned on certain premises. 

Re:

> What would the justification standards in wild animal welfare say about uncertainty-laden decisions that involve neither AI nor animals: e.g. as a government, deciding which policies to enact, or as a US citizen, deciding who to vote for President?

Yeah, I think this is a feeling that the folks working on bracketing are trying to capture: that in quotidian decision-making contexts, we generally use the factors we aren't clueless about (@Anthony DiGiovanni -- I think I recall a bracketing piece explicitly making a comparison to day-to-day decision making, but now can't find it... so correct me if I'm wrong!). So I'm interested to see how that progresses.

I suspect, though, that people generally just don't think about justification that much. In the case of WAW-tractability skeptics, I'd guess a large percentage are driven more by the (not unreasonable at first glance) intuition that messing around in nature is risky. The problem, of course, is that all of life is just messing around in nature, so there's no avoiding it.

Yeah, I could have made that clearer -- I am more focused on the sociology of justification. I suppose if you're talking pure epistemics, it depends on whether you're a constructivist about epistemological truth. If you are, then you'd probably hold a similar position -- that different communities can reasonably end up with different justification standards, and no one community has more claim to truth than another.

I suspect, though, that most EAs are not constructivists about epistemology, and so vaguely think that some communities have better justification standards than others. If that's right, then the point is more sociological: some communities are more rigorous about this stuff than others, or they might use the same justification standards but differ in some other way (like not caring about animals) that makes the process look a little different. So the critic I'm modeling in the post is saying something like: "Sure, some people do justification better than others, but these are different communities, so it makes sense that some communities care more about getting this right than others do."

I guess another angle could be from meta-epistemic uncertainty. Like if we think there is a truth about what kinds of justification practices are better than others, but we're deeply uncertain about what it is, it may then still seem quite reasonable that different groups are trying different things, especially if they aren't trying to participate in the same justificatory community. 

Not entirely sure I've gotten all the philosophical terms technically right here, but hopefully the point I'm trying to make is clear enough!

Hi Vasco! As we’ve discussed in other threads/emails/etc, we have different meta-ethical views and different views about consciousness. So I’m not surprised we’ve landed in somewhat different places on this issue :)

Bob and I make most of the strategic and granting decisions for Arthropoda, and we have slightly different views, so I don't know exactly where we will land (he'll reply in a second with his thoughts). But broadly, we both think that soil nematodes and some other soil invertebrates don't have a high enough likelihood of being sentient to be a high priority, nor that (for those that are sentient) we understand well enough what would help them to make action-oriented grants (which is Arthropoda's focus). That's in part because we don't endorse precise-probabilities approaches to handling uncertainty, and so want to make grants aimed at actions that appear robustly positive under a range of possible probability assignments and ways of handling uncertainty.

That said, our confidence in our own position is not high, so we'd be willing to fund work that challenges our views: if we had sufficient funding from folks interested in the question, Arthropoda would run a grant round specifically on soil invertebrate sentience and relevant natural history studies (especially ones that attempt to capture the likely enormous range of differences between species in this group). Currently, much of our grant-making funding is restricted (at least informally) to farmed insects and shrimp, so this isn't an option.

As a result, I expect Arthropoda is probably still one of the better bets for donors interested in soil invertebrates. As a correction to your comment: Arthropoda is not restricted in focus as a matter of principle; it has just happened, for contingent reasons, to focus on farmed animals in its first rounds. We collaborate with Wild Animal Initiative (I'm the strategy director at WAI) to reduce duplication of effort, and Arthropoda has a slightly better public profile for running soil invertebrate studies, so we expect it would generally be Arthropoda rather than WAI that runs this kind of program. I don't want to speak for CWAW, so I'll let them reply if they have interests in this area; but from my own conversations, I doubt they'd be in a good position to make soil invertebrates a priority in the next couple of years. Finally, you haven't mentioned them, but Rethink Priorities may also be open to some work in this area (I'm not sure, though).

Arthropoda treasurer here -- pretty much option 2. We are hoping to increase our expenditure next year to run an extra grants round, add a contractor to help manage some things (currently we're almost entirely volunteer-run), add a bit to our strategic reserve (to carry us through donation fluctuations without needing to pause grant-making), and cover a few other small bits and pieces. A good chunk of this expansion can be covered by our reserves plus some existing donor commitments, and $55k is about what's left.

We actually have much more room for funding in theory -- up to several million dollars to run a couple of targeted programs we have in mind. These would require hiring a program manager to run them, as well as a lot more in grants. But we're not really expecting EA Forum readers to fill that gap, unless they happen to run a large foundation :)

 

Haha, I can confirm I did not karma-knock you, and I was kind of surprised you had gotten so downvoted! I actually upvoted when I saw that, to counteract.

One random thought I'll add: since you are most experienced (afaict?) in GHD, I'd expect your arguments to be at their best in that context. So getting upvoted on GHD and downvoted on AW is at least consistent with having more expertise in one area than the other, and not necessarily evidence that AW folks are more sensitive -- although I'm not ruling that out!

The other thing I'm not sure I understand is how much weight a single individual's downvote can carry -- is there any chance that a few AW people have a ton of karma here, so that just a few people downvoting can take you negative in a way that wouldn't happen as much in GHD?
