All of C Tilli's Comments + Replies

Thank you Shaun!

I found myself wondering where we would fit AI Law / AI Policy into that model.

I would think policy work might be spread out over the landscape? As an example, if we think of policy work aiming to establish the use of certain evaluations of systems, such evaluations could target different kinds of risks/qualities that would map to different parts of the diagram?

Interesting perspective!

I personally believe that many, if not most, of the world's most pressing problems are political problems, at least in part.

I agree! But if this is true, doesn't it seem very problematic if a movement that means to do the most good does not have tools for assessing political problems? I think you may be right that we are not great at that at the moment, but it seems... unambitious to just accept that?

I also think that many people in EA do work with political questions, and my guess would be that some do it very well - but that most ... (read more)

Great discussion! I think perhaps there is some subtle conflict between EA's goal of a "radically better world" and marginal cost effectiveness. For marginal cost effectiveness, I think EA does a good job and the ITN framework is helpful. However, if we want, as CEA states, to contribute to solving "...a range of pressing global problems — like global poverty, factory farming, and existential risk", I think we need to get much more politically involved. I actually think this has happened in EA already and I have sensed a big shift with the focus on AI where ... (read more)

Thank you for this comment - this is indeed very relevant context, much of which I was not previously aware of.

Thanks for commenting!

I think there are two different things to figure out: 1) should we engage with the situation at all? and 2) if we engage, what should we do/advocate for?
I might be wrong about this, but my perception so far is that many EAs, based on some ITN reasoning, answer the first question with a no, and then the second question becomes irrelevant. My main point here is that I think it is likely that the answer to the first question could be yes?
For this specific case I personally believe that a ceasefire would be more constructive than the alternative, but even if you disagree with that, it would not automatically mean that the best thing is not to engage at all. Or do you think it does?

Strongly agree. Of course different things work for different people, but I think it's a little odd that both EAG and EAGx seem to always be over the weekend, and I would be curious to see how the composition of attendees would shift if an event were held on work days.

Thanks, I'm glad you found it useful!

- Having spent a couple of months working on this topic, do you still think AI science capabilities are especially important to explore, cf. AI in other contexts? I ask because I've been thinking and reading a lot about this recently, and I keep changing my mind about the answer.

Answering just for myself and not for the team: I don't have a confident answer to this. I have updated in the direction that capabilities for autonomous science work are more similar to general problem-solving capabilities than I thought previou... (read more)

Interesting!

What is your assessment of current risk awareness among the researchers you work with (outside of survey responses), and their interest in such perspectives? 

1
elteerkers
1y
Thanks for the question! I would say that it's not that people aren't aware of risks; my broad reflection is more about how one relates to them. In the EA/x-risk community it is clear that one should take these things extremely seriously and do everything one can to prevent them. I often feel that even though researchers in general are very aware of the potential risks of their technologies, they seem to get swept up in the daily business of just doing their work, without reflecting very actively on those risks. I don't know exactly why that is; it could be that they don't consider it their personal responsibility, or perhaps they feel powerless and think that aiming to push progress forward is either the best or the only option? But that is a question that would be interesting to dig deeper into!

Thank you so much for this post! It is SO nice to read about this in a framing that is inspiring/positive - I think it's unavoidable and not wrong that we often focus on criticism and problem description in relation to diversity/equality issues but that can also make it difficult and uninspiring to work with improvement. I love the framing you have here!

For me Magnify has been super important to balance my idea of what kind of people the EA movement consists of and to feel more at home in the community!

Thanks for this! I've been thinking quite a bit about this (see some previous posts), and there is a bit of an emerging EA/metascience community; I'd be happy to chat if you're interested!

Some specific comments:

In consequence, a possible solution is some kind of coordinated action by scientists (or universities) to decline being referees for high-fee journals.

Could you elaborate on the change in the system you envision as a result of something like this? My current thinking (but I'm very open to being convinced otherwise) is that lower fees to access publication... (read more)

1
PabloAMC
2y
Hey C Tilli, thanks for commenting!

My intuition is that forcing lower fees would make more money available for other parts of science. After all, most research is still done in universities and government agencies, and they usually have a limited budget to distribute each year. Honestly, I'm not sure how well it would work; it would have to be something that confederations of universities could agree upon. I don't know. To me it seems much harder to try to force people not to publish in those reputable journals. In a sense, I feel they are extracting rents from the environment in a damaging way. In particular, a back-of-the-envelope calculation suggests that ~10% of the cost of doing science goes to these people.

My intuition for this was that when you have a single kind of task, you are able to specialize more, but it also allows you to concentrate more on the work you are doing, rather than having to jump between tasks, which I believe kills (my) productivity: https://80000hours.org/podcast/episodes/cal-newport-industrial-revolution-for-office-work/. That being said, I might be wrong.

I think you are right that social status can play some role, but I don't think it is the leading reason. The leading reason, to me, seems to be that it gives people purpose in a way that other things don't so much. In a way, going into and sticking with academia seems more like a struggle to do work that advances human knowledge, or one's own knowledge, even if the environment is kind of shitty. That's why it feels weird to drop out of academic research. Of course, this is very subjective and I might be wrong.

Thanks, great to hear =)

I’m quite unsure about which ideas have the best ROI, and I think which idea would be most suitable would depend a lot on who was looking to execute a project. That said, I’m personally most excited about the potential of working with research policy at different levels - from my current understanding this just seems extremely neglected compared to how important it could be, and if I’d make a guess about which of these ideas I might myself be working on in a few years, it would be research policy.

Short term, I’d be most excited to s... (read more)

9
PeterSlattery
2y
Quick thoughts: Thanks for responding. I generally agree! I also struggle to pick out an obvious highest-priority choice. Two I liked were:

- Identify the most significant institutions for shaping global research policy and investigate how their decisions are made, the size of their budgets and their priority areas. This should include a survey of previous research on science policy development (e.g. by SPRU).
- Investigate the implementation of previous research policies to understand how the wording of policy documents translates into specific funding allocations, research project proposals, and research results and publications.

I also like the idea of more reviews to 'legitimate' new lines of research - this is something I have often tried to do in my own research.

I think that some sort of community building might be one of the highest-ROI activities that is being missed. The possible projects and their impacts are all going to be heavily mediated by the capacity of the community/available human capital. One reason why I like seeing these posts and the ongoing Slack conversations etc.!

Cool - my immediate thought is that it would be interesting to see a case study of (1) and/or (2) - do you know of this being done for any specific case? Perhaps we could schedule a call to talk further - I’ll send you a DM!

Interesting. I think a challenge would be to find the right level of complexity for a map like that - it needs to be simple enough to give a useful overview, but complex enough that it models everything that's necessary to make it a good tool for decision-making.

Who do you imagine would be the main user of such a mapping? And for which decisions would they mainly use it? I think the requirements would be quite different depending on whether it's to be used by non-experts such as policymakers or grantmakers, or by researchers themselves?

2
Harrison Durland
2y
I agree the complexity level is a tough question, although my impression has been that it could probably be implemented with varying levels of complexity (e.g., just focusing on simpler/more objective characteristics like “data source used” or “experimental methodology” vs. also including theoretical arguments and assumptions).

I think the primary users would tend to be researchers, who might then translate the findings into more familiar terms or representations for policymakers, especially if it does not become popular/widespread enough for some policymakers to be familiar with how to use or interpret it (somewhat similar to regression tables and similar analyses). That being said, I also see it as plausible that some policymakers would have enough basic understanding of the system to engage/explore on their own - like how some policymakers may be able to directly evaluate some regression findings.

Ultimately, two examples of the primary use cases I envision are:

1. Identifying the ripple effects of changes in assumptions/beliefs/datasets/etc. Suppose, for example, that an experimental finding or dataset which influenced dozens of studies is shown to be flawed: it would be helpful to have an initial outline of what claims and assumptions need to be reevaluated in light of the new finding.
2. Mapping the debate for a somewhat contentious subject (or just anything where the literature is not in agreement), including by identifying whether any claims have been left unsupported or unchallenged. It seems that such insights might be helpful for a researcher trying to decide what to focus on (and/or a grantmaker trying to decide what research to fund).

Thanks for your comment! I'm uncertain; I think it might also depend on the context in which the discussion is brought up and on the framing. But it's a tricky one for sure, and I agree specific targeted advocacy seems less risky.

As the author of this post, I found it interesting to re-read it more than a year later, because even though I remember the experience and feelings I describe in it, I do feel quite differently now. This is not because I came to some rational conclusion about how to think of self-worth vs instrumental value, but rather the issue has just kind of faded away for me.

It's difficult to say exactly why, but I think it might be related to the fact that I have developed more close friendships with people who are also highly engaged EAs, where I feel that they genuine... (read more)

Thanks a lot for this post! I really appreciate it and think (as you also noted) that it could be really useful for career decisions as well, and for structuring ideas around how to improve specific organizations.

we must be careful to avoid scenarios in which improving the technical quality of decision-making at an institution yields outcomes that are beneficial for the institution but harmful by the standards of the “better world”

I think this is a really important consideration that you highlight here. When working in an organization my hunch is that... (read more)

2
IanDavidMoss
3y
Thanks for the comment! With the caveat that I'm someone who's pretty pro-quantification in general and also unusually comfortable with high-uncertainty estimates, I didn't find the quantification process to be all that burdensome. In constructing the FDA case study, far more of my time was spent on qualitative research to understand the potential role the FDA might play in various x-risk scenarios than on coming up with and running the numbers. Hope that helps!

Addition: When reading your post Should marginal longtermist donations support fundamental or intervention research? I realize that we maybe draw the line a bit differently between applied and fundamental research - examples you give of fundamental research there (e.g. the drivers of and barriers to public support for animal welfare interventions) seem quite applied to me. When I think of fundamental research I imagine more things like research on elementary particles or black holes. This difference could explain why we might think differently about if it... (read more)

I share the view that the ultimate aim should basically be the production of value in a welfarist (and impartial) sense, and that "understanding the universe" can be an important instrumental goal for that ultimate aim. But I think that, as you seem to suggest elsewhere, how much "understanding the universe" helps and whether it instead harms depends on which parts of the universe are being understood, by whom, and in what context (e.g., what other technologies also exist). 

So I wouldn't frame it primarily as exploration vs exploitation, but

... (read more)

And totally agree about the Replacing Guilt series, it's really good.

Hi Miranda! I'm glad you liked it, and I hope you feel better now. Since it's been a while since I wrote this I realize my perspective changes a lot over time - it feels less like a conflict or a problem for me right now, and not necessarily because I have rationally figured something out, it's more like I have been focusing on other things and am generally in a better place. I don't know how useful that is to you or anyone else, but to some extent it might mean that things can sometimes get better even if we don't solve the issue that bothered us in the f... (read more)

5
Miranda_Zhang
3y
Thank you for responding to my comment and sharing your (more recent) experience! I agree that I don't need to 'solve' it intellectually - I've never felt like my philosophy holds me back from feeling fulfilled, and I think the issue of low self-confidence is at least partly separate. I'm very glad to hear that you are in a better place now. :)

The role model concept is definitely something I've heard before, and while it doesn't really make self-care easy, I agree that it is useful - e.g. when I feel guilty about not working overtime, I remind myself that I would prefer + want to create a society that doesn't incessantly overwork. Why would anyone want to join a community that doesn't encourage individual flourishing?

Thank you again for your kind words and your offer! I think I'm good for now but will keep it in mind. In the meantime, I hope to see you around the forum!

Notably, my definition is a broader tent  (in the context of metascience) than prioritization of science/metascience entirely from a purely impartial EA perspective.

I hadn't formulated it so clearly for myself, but at this stage I would say I'm using the same perspective as you - I think one would have to have a much clearer view of the field/problems/potential to be able to do across-cause prioritization, and prioritization in the context of differential technological progress, in a meaningful way.

What I mean about this is that I think it's plausible

... (read more)
3
MichaelA
3y
* Hmm, I'm not sure I agree.
  * Or at least, I think I'd somewhat confidently disagree that the ideal project aimed at doing "across-cause prioritisation" and "prioritisation in the context of differential (technological) progress" would look like more of the same sort of work done in this post.
  * I'm not saying you're necessarily claiming that, but your comment could be read as either making that claim or as side-stepping that question.
* To be clear, this is not to say I think this post was useless or doesn't help at all with those objectives!
  * I think the post is quite useful for within-cause prioritisation (which is another probably-useful goal), and somewhat useful for across-cause prioritisation
    * Though maybe it's not useful for prioritisation in the context of differential progress
  * I also really liked the post's structure and clarity, and would be likely to at least skim further work you produce on this topic.
* But I think for basically any cause area that hasn't yet received much "across-cause prioritisation" research, I'd be at least somewhat and maybe much more excited about more of that than more within-cause prioritisation research.
  * I explain my reasoning for a similar view in Should marginal longtermist donations support fundamental or intervention research?
  * And this cause area seems unusually prone to within-cause successes being majorly accidentally harmful (by causing harmful types of progress, technological or otherwise), so this is perhaps especially true here.
  * And I think the ideal project to do that for metascience would incorporate some components that are like what's done in this post, but also other components more explicitly focused on across-cause prioritisation, possible accidental harms, and differential progress.

(This may sound harsher than my actual views - I do think this post was a useful contribution.)

I think that we have a rather similar view actually - maybe it's just the topic of the post that makes it seem like I am more pessimistic than I am? Even though this post focuses on mapping up problems in the research system, my point is not in any way that scientific research would be useless - rather the opposite, I think it is very valuable, and that is why I'm so interested in exploring if there are ways that it can be improved. It's not at all my intention to say that research, or researchers, or any other people working in the system for that matter,... (read more)

1
DirectedEvolution
3y
All these projects seem beneficial. I hadn't heard of any of them, so thanks for pointing them out. It's useful to frame this as "research on research," in that it's subject to the same challenges with reproducibility, and with aligning empirical data with theoretical predictions to develop a paradigm, as in any other field of science. Hence, I support the work, while being skeptical of whether such interventions will be useful and potent enough to make a positive change.

The reason I brought this up is that the conversation on improving the productivity of science seems to focus almost exclusively on problems with publishing and reproducibility, while neglecting the skill-building and internal-knowledge aspects of scientific research. Scientists seem to get a feel through their interactions with their colleagues for who is trustworthy and capable, and who is not. Without taking into account the sociology of science, it's hard to know whether measures taken to address problems with publishing and reproducibility will be focusing on the mechanisms by which progress can best be accelerated.

Honest, hardworking academic STEM PIs seem to struggle with money and labor shortages. Why isn't there more money flowing into academic scientific research? Why aren't more people becoming scientists? The lack of money in STEM academia seems to me a consequence of politics. Why is there political reluctance to fund academic science at higher levels? Is academia to blame for part of this reluctance, or is the reason purely external to academia? I don't know the answers to these questions, but they seem important to address.

Why don't more people strive to become academic STEM scientists? Partly, industry draws them away with better pay. Part of the fault lies in our school system, although I really don't know what exactly we should change. And part of the fault is probably in our cultural attitudes toward STEM. Many of the pro-reproducibility measures seem to assume that the fa

Thanks for this!

You make a good point: the part on funding priorities does become kind of circular. Initially the heading there was "Grantmakers are not driven by impact" - but that got confusing since I wanted to avoid defining impact (because that seemed like a rabbit hole that would make it impossible to finish the post). So I just changed it to "Funding priorities of grantmakers" - but your comment is valid with either wording; it does make sense that the one who spends the resources should set the priorities for what they want to achieve.

I think there... (read more)

5
Vhanon
3y
That does sound like a better heading, indeed. Although grantmakers define the value of a research outcome, they might not be able to correctly promote their vision due to their limited resources. However, as the grantmaking process is what defines the value of research, your heading might be misinterpreted as the inability to define valuable outcomes (which is in contradiction with your working hypothesis). What about "inefficient grant-giving"? "Inefficient" because sometimes resources are lost pursuing secondary goals; "grant-giving" because it specifically involves the process of selecting motivated and effective research teams.

Thank you for this perspective, very interesting.

I definitely agree with you that a field is not worthless just because the published figures are not reproducible. My assumption would be that even if it has value now, it could be a lot more valuable if reporting were more rigorous and transparent (and that potential increase in value would justify some serious effort to improve rigor and transparency).

Do I understand your comment correctly that you think that in your field that the purpose of publishing is mainly to communicate to the public, an... (read more)

1
DirectedEvolution
3y
Rigor and transparency are good things. What would we have to do to get more of them, and what would the tradeoffs be?

No, the purpose of publishing is not mainly to communicate to the public. After all, very few members of the public read scientific literature. The truth-seeking or engineering achievement the lab is aiming for is one thing. The experiments they run to get closer are another. And the descriptions of those experiments are a third thing. That third thing is what you get from the paper. I find it useful at this early stage in my career because it helps me find labs doing work that's of interest to me. Grantmakers and universities find them useful to decide who to give money to or who to hire. Publications show your work in a way that a letter of reference or a line on a resume just can't. Fellow researchers find them useful to see who's trying what approach to the phenomena of interest. Sometimes, an experiment and its writeup are so persuasive that they actually persuade somebody that the universe works differently than they'd thought.

As you read more literature and speak with more scientists, you start to develop more of a sense of skepticism and of importance. What is the paper choosing to highlight, and what is it leaving out? Is the justification for this research really compelling, or is this just a hasty grab at a publication? Should I be impressed by this result? It would be nice for the reader if papers were a crystal-clear guide for a novice to the field. Instead, you need a decent amount of sophistication with the field to know what to make of it all. Conversations with researchers can help a lot. Read their work and then ask if you can have 20 minutes of their time; they'll often be happy to answer your questions.

And yes, fields do seem to go down dead ends from time to time. My guess is it's some sort of self-reinforcing selection for biased, corrupt, gullible scientists who've come to depend on a cycle of hype-building to get the n

I think it's a really interesting, but also very difficult, idea. Perhaps one could identify a limited field of research where this would be especially valuable (or especially feasible, or ideally both), and try it out within that field as an experiment?

I would be very interested to know more if you have specific ideas of how to go about it.

2
Harrison Durland
3y
Yeah, I have thought that it would probably be nice to find a field where it would be valuable (based on how much the field is struggling with these issues X the importance of the research), but I've also wondered if it might be best to first look for a field that has a fitting/accepting ethos--i.e., a field where a lot of researchers are open to trying the idea. (Of course, that would raise questions about whether it could see similar buy-in when applied to different fields, but the purpose of such an early test would be to identify "how useful is this when there is buy-in?")

At the same time, I have also recognized that it would probably be difficult... although I do wonder just how difficult it would be--or at least, why exactly it might be difficult. Especially if the problem is mainly about buy-in, I think it would probably be helpful to look at similar movements, like the shift towards peer review and the push for open data/data transparency: how did they convince journals/researchers to be more transparent and collaborative? If this system actually proved useful and feasible, I feel like it might have a decent chance of eventually getting traction (even if it may go slow).

The main concern I've had with the broader pipe dream I hinted at has been "who does the mapping/manages the systems?" Are the maps run by centralized authorities like journals or scientific associations (e.g., the APA), or is it mostly decentralized, in that objects in the literature (individual studies, datasets, regressions, findings) have centrally-defined IDs but all of the connections (e.g., "X finding depends on Y dataset", "X finding conflicts with Z finding") are defined by packages/layers that researchers can contribute to and download from, like a library/buffet? (The latter option could allow "curation" by journals, scientific associations, or anyone else.) However, I think the narrower system I initially described would not suffer from this problem to the

Thank you! Joined both and looking forward to reading your posts! 

So glad to hear that, and thanks for the added reference to letsfund!

On peer review I agree with Edo's comment, I think it's more about setting a standard than about improving specific papers.

On IP, I think this is very complex and I think "IP issues" can be a barrier both when something is protected and when it's not. I have personally worked in the periphery of projects where failing to protect/maintain IP has been the end of the road for potentially great discoveries, but I have also seen the other phenomenon, where researchers avoid a specific area because someone ... (read more)

Answer by C Tilli, Dec 03, 2020

I am so grateful for WAMBAM, the mentorship program for women, trans and non-binary people in EA. It is so well-run and well-thought-through, and it has really helped me develop professionally and personally, and also made me a lot more connected to the international EA community.

I am also really grateful that the EA Forum exists!

I can obviously only speak for myself, but for me just having this kind of conversation is in itself very comforting since it shows that there are more people who think about this (i.e. it's not just "me being stupid"). Disagreement doesn't seem threatening as long as the tone is respectful and kind. In a way, I think it rather becomes easier to treat my own thoughts more lightly when I see that there are many different ways that people think about it.

Actually my concerns are more practical, along the lines of Robert's comment: that this kind of thinking could be bad for mental health and, indeed, long-term productivity and impact. If the perception of self-worth didn't seem important for mental health, I would not care much about it.

But it would be a sad scenario if we look back in 50 years and see that the EA movement has led to a lot of capable, ambitious people burning out because we (inadvertently) encouraged (or failed to counteract) destructive thought patterns.

I don't think there is a... (read more)

2
Ramiro
6mo
It's kind of sad to revisit this discussion during SBF's trial

I think I mostly agree with this, and I'd also like to clarify that I don't think this problem originates from EA or from my contact with EA. It is not that I feel that "EA" demands too much of me, rather that when I focus a lot on impact potential it becomes (even more) difficult to separate self-worth from performance.

Different versions of contingent self-worth (contingent self-esteem, performance-contingent self-esteem - there are a lot of similar concepts and I am not completely sure about which terms to use, but basically the conce... (read more)

Interesting thought. I'm not sure if what I had was the mainstream understanding of Christianity, but I didn't experience that there was this kind of conflict in the same way. I'd think that the intrinsic value of being created and loved by God was not really something that could pale in comparison to anything. But I don't know, and maybe it's not very important.

I think there is a difference between justifying spending resources on our own wellbeing and being able to feel valuable independent of performance. Feeling valuable is of course related to feeling like we deserve to have resources spent on us, but I don't think it's exactly the same.

Thanks a lot for this comment. I feel like I need to read it over again and think more about it, so I don't have a detailed or clever response, but I really appreciate it. The comparison to other things that have mainly or only instrumental value, and how much we actually value those things, was also a new and useful perspective for me.

Thanks for a great post!

Do you have any thoughts on how these kinds of interventions compare to other alternative strategies to improve farmed animal welfare, in terms of effectiveness? For example, compared to interventions to lower meat consumption generally?

2
Bella
4y
Thanks for your question! I didn’t go as far as doing a cost-effectiveness analysis on this; I think that there are a lot of uncertainties that would make that quite difficult, but it'd definitely be a good next step for this topic. My guess is that if we purely consider impact on animals then it might come out quite a bit less cost-effective than other interventions, but that if we account for public health benefits as well it might turn out to be comparable in terms of cost-effectiveness. I think the two most important variables that cost-effectiveness would be sensitive to are whether/what kind of welfare adaptations farmers would make, and how effective antibiotic substitutes are. If we’re including impacts on humans then it would also be very sensitive to what proportion of the antibiotic resistance burden comes from antibiotic use on farmed animals!

Yes, I think including them in the local activities is the best start - it's just harder remotely, and especially now during the pandemic. Thanks for the GWWC suggestion, that could be a great remote alternative!

Good point. I'm unsure what the best practice in editing previous comments is - I don't want to change it so much that the subsequent comments don't make sense to another reader. Clarified now by leaving in the original number that fits with the reasoning around it while keeping the correction in brackets.

1
Jc_Mourrat
4y
I think it would also be worth keeping in mind how hard it is to make progress on each front. Given that there seems to be widespread non-therapeutic use of antibiotics for farmed animals, and that (I believe) most people have accepted that antibiotics should be used sparingly, I would be surprised if there were no "low-hanging fruit" there. This is not meant as a complete solution, but rather as an attempt to identify our "next best move". I would believe that cheap on-farm diagnostic tools should now be within reach as well, or already exist.

Separately from this, I admit to being confused about the nature of the question regarding the importance of dealing with overuse of antibiotics in farmed animals. My understanding is that we know that inter-species and horizontal gene transfers can occur, so is the question about knowing how much time it takes? I just don't have a clear model of how I'm supposed to think about it. Should I think that we are in a sort of "race against bacteria" to innovate faster than they evolve? Why would a delay in transfer to humans be a crucial consideration? Is it that antibiotic innovation is mostly focused on humans? Is there such a thing as a human-tailored antibiotic vs. a farmed-animal antibiotic? I suppose not? I cannot wrap my head around the idea that this delay in transmission to humans is important. So I guess I'm not thinking about it right?

[Added later:] Maybe the point is that if some very resistant strain emerges in a farmed-animal species, we have time to develop a counter-measure before it jumps to humans?

That was sloppy of me, thanks a lot for the correction! Edited in the comment.

1
Will Bradshaw
4y
Cool, thanks. One comment: while all your caveats about simplified reasoning and so on are well-made and still apply, I would generally be surprised if you could substitute a number like this in your analysis with another number three times the size, without affecting anything else, such that you could make the substitution and leave the wording unchanged. That is to say, if a contribution of 7.5% was "very significant and worth pursuing", I'd expect a contribution of 23% to be extremely significant, and worth making a high (or near-top) priority. Of course, that's the result for 10%, and 10% is just a made-up number. But I think the general point stands.

Edit: I originally made mistakes in the calculation below, have edited to correct this. See comment below by willbradshaw for details of the calculation.


Thanks! I completely agree there are other strong reasons to reduce (or eliminate) factory farming.


About your other comment – I also don’t think the situation is reassuring at all. I think it’s very plausible that the antibiotic use in agriculture could be an important driver of antibiotic resistance.

I think that we need more research on both the jumping of species barriers and on hor... (read more)

5
Will Bradshaw
4y
This feels a bit petty, since I don't really disagree with any of your conclusions, but there are some mistakes in the mathematics here.

Let's assume a fraction p of all antibiotics used are used in animals, and a fraction 1−p are used in humans. (In your example, p = 0.75.) Let's also assume that antibiotic use in animals is k times as effective at causing a resistance burden in humans, per unit of antibiotics used. (In your main example, k = 0.1.)

Then the total resistance burden in humans is given by B = (use in animals) × (burden from animal use) + (use in humans) × (burden from human use), which in algebraic terms is B = p × k + (1−p) × 1 = 1 + p(k−1). The fraction of the total burden caused by animal use is then A = pk/B.

If p = 0.75 and k = 0.1, this is A = 0.075/0.325 ≈ 23%. So, quite a bit more than 7.5%. If k = 0.01 (use in animals is 1% as efficient at causing a resistance burden in humans), then A = 0.0075/0.2575 ≈ 3%. If k = 0.5, A = 0.375/0.625 = 60%.

So the fraction of the human burden caused by animal use could be quite high even if the per-unit efficiency is quite low.
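The arithmetic above can be sketched in a few lines of Python (the function name and structure are mine, not from the comment; it just reproduces the A = pk/B calculation under the same assumptions):

```python
def animal_burden_fraction(p, k):
    """Fraction of the human resistance burden attributable to animal antibiotic use.

    p: fraction of all antibiotics used in animals (between 0 and 1)
    k: effectiveness of animal use at causing resistance burden in humans,
       per unit of antibiotics, relative to human use
    """
    total_burden = p * k + (1 - p)   # B = pk + (1 - p) * 1
    return p * k / total_burden      # A = pk / B

# Reproducing the figures from the comment (p = 0.75 throughout):
print(round(animal_burden_fraction(0.75, 0.1), 3))   # 0.231, i.e. ~23%
print(round(animal_burden_fraction(0.75, 0.01), 3))  # 0.029, i.e. ~3%
print(round(animal_burden_fraction(0.75, 0.5), 3))   # 0.6, i.e. 60%
```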

Thanks a lot for your comments! I don’t have a strong view on what is the best way to reduce the use of antibiotics in agriculture, but it seems important to adapt to the specific context. I live in Sweden where it’s forbidden to use antibiotics for prophylactic or growth-purposes in agriculture, and that works well here, but in some countries a ban might be hard to enforce, or lead to corruption and unmonitored use, or else have very negative consequences for financially vulnerable farmers. I remember reading somewhere about some kind of in... (read more)

Thanks a lot for this post!

Answer by C Tilli, Jun 09, 2020

Hi, for the past six months I have been running a foundation focused on preventing antibiotic resistance, and I am actively mapping out the area: parfoundation.org. Feel free to reach out if you'd like to have a chat about it! I could also write up a forum post on the subject soon-ish!

This was very interesting for me to read! I would also be very curious to learn whether some groups have found successful, practical ways to improve diversity and make everyone feel welcome and comfortable.