Interesting perspective!
I personally believe that many, if not most, of the world's most pressing problems are political problems, at least in part.
I agree! But if this is true, doesn't it seem very problematic if a movement that means to do the most good does not have tools for assessing political problems? I think you may be right that we are not great at that at the moment, but it seems... unambitious to just accept that?
I also think that many people in EA do work with political questions, and my guess would be that some do it very well - but that most ...
Great discussion! I think perhaps there is some subtle conflict between EA's goal of a "radically better world" and marginal cost-effectiveness. For marginal cost-effectiveness, I think EA does a good job and the ITN framework is helpful. However, if we want, as CEA states, to contribute to solving "...a range of pressing global problems — like global poverty, factory farming, and existential risk", I think we need to get much more politically involved. I actually think this has happened in EA already and I have sensed a big shift with the focus on AI where ...
Thank you for this comment - this is indeed very relevant context, much of which I was not previously aware of.
Thanks for commenting!
I think there are two different things to figure out: 1) should we engage with the situation at all? and 2) if we engage, what should we do/advocate for?
I might be wrong about this, but my perception so far is that many EAs, based on some ITN reasoning, answer the first question with a no, and then the second question becomes irrelevant. My main point here is that I think the answer to the first question could plausibly be yes.
For this specific case I personally believe that a ceasefire would be more constructive than the alternative, but even if you disagree with that, it would not automatically mean that the best thing is not to engage at all. Or do you think it does?
Strongly agree. Of course different things work for different people, but I think it's a little odd that both EAG and EAGx always seem to be held over the weekend, and I would be curious to see how the composition of attendees would shift if an event was held on workdays.
Thanks, I'm glad you found it useful!
- Having spent a couple of months working on this topic, do you still think AI science capabilities are especially important to explore, compared to AI in other contexts? I ask because I've been thinking and reading a lot about this recently, and I keep changing my mind about the answer.
Answering just for myself and not for the team: I don't have a confident answer to this. I have updated in the direction that capabilities for autonomous science work are more similar to general problem-solving capabilities than I thought previou...
Interesting!
What is your assessment of current risk awareness among the researchers you work with (outside of survey responses), and their interest in such perspectives?
Thank you so much for this post! It is SO nice to read about this in a framing that is inspiring/positive - I think it's unavoidable and not wrong that we often focus on criticism and problem description in relation to diversity/equality issues but that can also make it difficult and uninspiring to work with improvement. I love the framing you have here!
For me Magnify has been super important to balance my idea of what kind of people the EA movement consists of and to feel more at home in the community!
Thanks for this! I've been thinking quite a bit about this (see some previous posts) and there is a bit of an emerging EA/metascience community, would be happy to chat if you're interested!
Some specific comments:
In consequence, a possible solution is some kind of coordinated action by scientists (or universities) to decline being referees for high-fee journals.
Could you elaborate the change in the system you envision as a result of something like this? My current thinking (but very open to being convinced otherwise) is that lower fees to access publication...
Thanks, great to hear =)
I’m quite unsure about which ideas have the best ROI, and I think which idea would be most suitable would depend a lot on who was looking to execute a project. That said, I’m personally most excited about the potential of working with research policy at different levels - from my current understanding this just seems extremely neglected compared to how important it could be, and if I had to guess which of these ideas I might myself be working on in a few years, it would be research policy.
Short term, I’d be most excited to s...
Cool - my immediate thought is that it would be interesting to see a case study of (1) and/or (2) - do you know of this being done for any specific case? Perhaps we could schedule a call to talk further - I’ll send you a DM!
Interesting. I think a challenge would be to find the right level of complexity for a map like that - it needs to be simple enough to give a useful overview, but complex enough that it models everything that's necessary to make it a good tool for decision-making.
Who do you imagine would be the main user of such a mapping? And for which decisions would they mainly use it? I think the requirements would be quite different depending on whether it's to be used by non-experts such as policymakers or grantmakers, or by researchers themselves?
Thanks for your comment! I'm uncertain, I think it might depend also on what context the discussion is brought up in and with what framing. But it's a tricky one for sure, and I agree specific targeted advocacy seems less risky.
As the author of this post, I found it interesting to re-read it more than a year later, because even though I remember the experience and feelings I describe in it, I do feel quite differently now. This is not because I came to some rational conclusion about how to think of self-worth vs instrumental value, but rather the issue has just kind of faded away for me.
It's difficult to say exactly why, but I think it might be related to the fact that I have developed closer friendships with people who are also highly engaged EAs, where I feel that they genuine...
Thanks a lot for this post! I really appreciate it and think (as you also noted) that it could be really useful also for career decisions, as well as for structuring ideas around how to improve specific organizations.
we must be careful to avoid scenarios in which improving the technical quality of decision-making at an institution yields outcomes that are beneficial for the institution but harmful by the standards of the “better world”
I think this is a really important consideration that you highlight here. When working in an organization my hunch is that...
Addition: When reading your post Should marginal longtermist donations support fundamental or intervention research? I realize that we maybe draw the line a bit differently between applied and fundamental research - the examples you give of fundamental research there (e.g. the drivers of and barriers to public support for animal welfare interventions) seem quite applied to me. When I think of fundamental research I imagine more things like research on elementary particles or black holes. This difference could explain why we might think differently about if it...
I share the view that the ultimate aim should basically be the production of value in a welfarist (and impartial) sense, and that "understanding the universe" can be an important instrumental goal for that ultimate aim. But I think that, as you seem to suggest elsewhere, how much "understanding the universe" helps and whether it instead harms depends on which parts of the universe are being understood, by whom, and in what context (e.g., what other technologies also exist).
...So I wouldn't frame it primarily as exploration vs exploitation, but
Hi Miranda! I'm glad you liked it, and I hope you feel better now. Since it's been a while since I wrote this I realize my perspective changes a lot over time - it feels less like a conflict or a problem for me right now, and not necessarily because I have rationally figured something out, it's more like I have been focusing on other things and am generally in a better place. I don't know how useful that is to you or anyone else, but to some extent it might mean that things can sometimes get better even if we don't solve the issue that bothered us in the f...
Notably, my definition is a broader tent (in the context of metascience) than prioritization of science/metascience entirely from a purely impartial EA perspective.
I hadn't formulated it so clearly for myself, but at this stage I would say I'm using the same perspective as you - I think one would have to have a much clearer view of the field / problems / potential to be able to do across-cause prioritization, and prioritization in the context of differential technological progress, in a meaningful way.
...What I mean about this is that I think it's plausible
I think that we have a rather similar view actually - maybe it's just the topic of the post that makes it seem like I am more pessimistic than I am? Even though this post focuses on mapping out problems in the research system, my point is not in any way that scientific research is useless - rather the opposite, I think it is very valuable, and that is why I'm so interested in exploring whether there are ways it can be improved. It's not at all my intention to say that research, or researchers, or any other people working in the system for that matter,...
Thanks for this!
You make a good point, the part on funding priorities does become kind of circular. Initially the heading there was "Grantmakers are not driven by impact" - but that got confusing since I wanted to avoid defining impact (because that seemed like a rabbit hole that would make it impossible to finish the post). So I just changed it to "Funding priorities of grantmakers" - but your comment is valid with either wording, it does make sense that the one who spends the resources should set the priorities for what they want to achieve.
I think there...
Thank you for this perspective, very interesting.
I definitely agree with you that a field is not worthless just because the published figures are not reproducible. My assumption would be that even if it has value now, it could be a lot more valuable if reporting were more rigorous and transparent (and that potential increase in value would justify some serious effort to improve rigor and transparency).
Do I understand your comment correctly that you think that, in your field, the purpose of publishing is mainly to communicate to the public, an...
I think it's a really interesting, but also very difficult, idea. Perhaps one could identify a limited field of research where this would be especially valuable (or especially feasible, or ideally both), and try it out within that field as an experiment?
I would be very interested to know more if you have specific ideas of how to go about it.
So glad to hear that, and thanks for the added reference to letsfund!
On peer review I agree with Edo's comment, I think it's more about setting a standard than about improving specific papers.
On IP, I think this is very complex and I think "IP issues" can be a barrier both when something is protected and when it's not. I have personally worked in the periphery of projects where failing to protect/maintain IP has been the end of the road for potentially great discoveries, but have also seen the other phenomenon where researchers avoid a specific area because someone ...
I am so grateful for WAMBAM, the mentorship program for women, trans and non-binary people in EA. It is so well-run and well-thought through, and it has really helped me develop professionally and personally and also made me a lot more connected to the international EA community.
I am also really grateful that the EA Forum exists!
I can obviously only speak for myself, but for me just having this kind of conversation is in itself very comforting since it shows that there are more people who think about this (i.e. it's not just "me being stupid"). Disagreement doesn't seem threatening as long as the tone is respectful and kind. In a way, I think it rather becomes easier to treat my own thoughts more lightly when I see that there are many different ways that people think about it.
Actually my concerns are more practical, along the lines of Robert's comment, that this kind of thinking could be bad for mental health and, indeed, long-term productivity and impact. If the perception of self-worth didn't seem important for mental health, I would not care much about it.
But it would be a sad scenario if we look back in 50 years and see that the EA movement has led to a lot of capable, ambitious people burning out because we (inadvertently) encouraged (or failed to counteract) destructive thought patterns.
I don't think there is a...
I think I mostly agree with this, and I'd also like to clarify that I don't think this problem originates from EA or from my contact with EA. It is not that I feel that "EA" demands too much of me, rather that when I focus a lot on impact potential it becomes (even more) difficult to separate self-worth from performance.
Different versions of contingent self-worth (contingent self-esteem, performance-contingent self-esteem - there are a lot of similar concepts and I am not completely sure about which terms to use, but basically the conce...
Interesting thought. I'm not sure if what I had was the mainstream understanding of Christianity, but I didn't experience that there was this kind of conflict in the same way. I'd think that the intrinsic value of being created and loved by God was not really something that could pale in comparison to anything. But I don't know, and maybe it's not very important.
I think there is a difference between justifying spending resources on our own wellbeing and being able to feel valuable independent of performance. Feeling valuable is of course related to feeling like we deserve to have resources spent on us, but I don't think it's exactly the same.
Thanks a lot for this comment. I feel like I need to read it over again and think more about it, so I don't have a detailed or clever response, but I really appreciate it. The comparison to other things that have mainly or only instrumental value, and how much we actually value those things, was also a new and useful perspective for me.
Thanks for a great post!
Do you have any thoughts on how these kinds of interventions compare to other alternative strategies to improve farmed animal welfare, in terms of effectiveness? For example, compared to interventions to lower meat consumption generally?
Yes, I think including them in the local activities is the optimal start - it's just harder remotely, and especially now during the pandemic. Thanks for the GWWC suggestion, that could be a great remote alternative!
Good point. I'm unsure what the best practice in editing previous comments is - I don't want to change it so much that the subsequent comments don't make sense to another reader. I've clarified it now by leaving in the original number, which fits with the reasoning around it, while adding the correction in brackets.
Edit: I originally made mistakes in the calculation below, have edited to correct this. See comment below by willbradshaw for details of the calculation.
Thanks! I completely agree there are other strong reasons to reduce (or eliminate) factory farming.
About your other comment – I also don’t think the situation is reassuring at all. I think it’s very plausible that the antibiotic use in agriculture could be an important driver of antibiotic resistance.
I think that we need more research on both the jumping of species barriers and on hor...
Thanks a lot for your comments! I don’t have a strong view on what is the best way to reduce the use of antibiotics in agriculture, but it seems important to adapt to the specific context. I live in Sweden where it’s forbidden to use antibiotics for prophylactic or growth purposes in agriculture, and that works well here, but in some countries a ban might be hard to enforce, or lead to corruption and unmonitored use, or else have very negative consequences for financially vulnerable farmers. I remember reading somewhere about some kind of in...
Hi, for the past half year I have been running a foundation focused on the prevention of antibiotic resistance, and I am working very actively on mapping the area: parfoundation.org. Feel free to reach out if you’d like to have a chat about it! I could also write up a forum post on the subject soon-ish!
This was very interesting for me to read! I would also be very curious to learn if some groups have found successful practical ways to work with improving diversity/making everyone feel welcome and comfortable.
Thank you Shaun!
I would think policy work might be spread out over the landscape? As an example, if we think of policy work aiming to establish the use of certain evaluations of systems, such evaluations could target different kinds of risks/qualities that would map to different parts of the diagram?