C Tilli

I have a background in engineering and entrepreneurship, and previously ran a small non-profit focused on the prevention of antibiotic resistance. I received an EA Infrastructure grant for cause exploration in meta-science during 2021-22. I enjoy gardening and beer-brewing. Based in Sweden and currently working full-time at EA Sweden.

Comments

Science policy as a possible EA cause area: problems and solutions

Thanks for this! I've been thinking quite a bit about this (see some previous posts), and there is a bit of an emerging EA/metascience community. I would be happy to chat if you're interested!

Some specific comments:

In consequence, a possible solution is some kind of coordinated action by scientists (or universities) to decline being referees for high-fee journals.

Could you elaborate on the change in the system you envision as a result of something like this? My current thinking (though I'm very open to being convinced otherwise) is that lower fees to access publications wouldn't really change anything fundamental about what science is being done, which makes it seem like a lot of work for limited gains?

I agree with him that we need to split up work. Some people like, enjoy, and are better at teaching. Others, at doing research. I really don’t think one should be requested to do everything. In addition, dedicated science evaluators might help a lot with replication problems, referee quality, and speed…

I think there is something here: it could be valuable to have more diverse career paths that allow people to build on their strengths, rather than having tasks depend only on seniority. It also seems like something where it's not necessary to design one perfect system; different institutions could work with different models (just as different private companies use different models of recruitment and internal career paths). It would be very interesting if someone did (or has done?) an overview of how this looks today globally. Perhaps there are already some institutions that allocate tasks quite differently?

My crux here would be that even though I think this has the potential to make research much more enjoyable for a broader group, it's a bit unclear if it would actually lead to better science being done. I want to think that it would, but I can't really make a strong argument for it. I do think efficiency would increase, but I'm not sure we'd work on more important questions or do work of higher quality because of it (though we might!).

this is probably a consequence of too many people enjoying doing science with respect to the number of available research jobs

You could be right, but it's not obvious to me. I have the impression that a lot of people doing science find it quite hard and not very enjoyable, especially at junior levels. It would be very interesting to know more about what attracts people to science careers, and what their reasons for staying are. I think it's very possible that status, and being in a completely academic social context that makes other career paths seem abstract, play an important role. Anecdotally, I dropped out of a PhD position after one year, and even though I really didn't enjoy it, dropping out felt like a huge failure at the time in a way that voluntarily quitting a "normal" job would not.

Improving science: Influencing the direction of research and the choice of research questions

Thanks, great to hear =)

I’m quite unsure about which idea has the best ROI, and I think which idea would be most suitable would depend a lot on who was looking to execute a project. That said, I’m personally most excited about the potential of working with research policy at different levels: from my current understanding this just seems extremely neglected compared to how important it could be, and if I had to guess which of these ideas I might myself be working on in a few years, it would be research policy.

Short term, I’d be most excited to see the projects happen that would provide more information (e.g. identify the most important institutions, understand how policy documents actually translate into specific research being done, understand the dynamics of existing contexts where experts and non-academics discuss research agendas, evaluate existing R&D hubs, etc.). With this information available, I would hope it would be possible to prioritise between different larger projects.

I’m curious, what would you yourself think would be most important and/or have the best ROI?

Improving science: Influencing the direction of research and the choice of research questions

Cool - my immediate thought is that it would be interesting to see a case study of (1) and/or (2) - do you know of this being done for any specific case? Perhaps we could schedule a call to talk further - I’ll send you a DM!

Improving science: Influencing the direction of research and the choice of research questions

Interesting. I think a challenge would be to find the right level of complexity for a map like that: it needs to be simple enough to give a useful overview, but complex enough to model everything that's necessary to make it a good tool for decision-making.

Who do you imagine would be the main user of such a mapping? And for which decisions would they mainly use it? I think the requirements would be quite different depending on whether it's to be used by non-experts such as policymakers or grantmakers, or by researchers themselves.

Improving science: Influencing the direction of research and the choice of research questions

Thanks for your comment! I'm uncertain; I think it might also depend on the context in which the discussion is brought up, and on the framing. But it's a tricky one for sure, and I agree that specific targeted advocacy seems less risky.

Can my self-worth compare to my instrumental value?

As the author of this post, I found it interesting to re-read it more than a year later, because even though I remember the experience and feelings I describe in it, I do feel quite differently now. This is not because I came to some rational conclusion about how to think of self-worth vs instrumental value, but rather the issue has just kind of faded away for me.

It's difficult to say exactly why, but I think it might be related to the fact that I have developed more close friendships with people who are also highly engaged EAs, where I feel that they genuinely care about me and spend time not just supporting me on high-impact work, but on socially checking in and hanging out, joking or talking about private stuff; that they like me and care about me as a person.

This makes me question the assumptions I made in the post about how feelings of self-worth are created in the religious context. Perhaps even in church the key thing is not the abstract idea of being "perfect in God's eyes", but rather the practical experience of feeling loved and accepted by the community and knowing they have your back. If this is right, that's a very good thing, as it is something that can be re-created in a non-religious context.

So, if I were to update this post now, I might be able to develop some ideas for how we could work on this: perhaps a reason to be careful with over-optimizing our interpersonal meetings?

Improving Institutional Decision-Making: Which Institutions? (A Framework)

Thanks a lot for this post! I really appreciate it and think (as you also noted) that it could be really useful for career decisions as well, and for structuring ideas around how to improve specific organizations.

we must be careful to avoid scenarios in which improving the technical quality of decision-making at an institution yields outcomes that are beneficial for the institution but harmful by the standards of the “better world”

I think this is a really important consideration that you highlight here. When working in an organization, my hunch is that one tends to get relatively immediate feedback on whether decisions are good for the organization itself, while feedback on how good decisions are for the world, and in the long term, is much more difficult to get.

For a user seeking to make casual or fast choices about prioritizing between institutional engagement strategies, for example a small consulting firm choosing among competing client offers, it’s perfectly acceptable to eschew calculations and instead treat the questions as general prompts to add structure and guidance to an otherwise intuitive process. Since institutional engagement can often carry high stakes, however, where possible we recommend at least trying a heuristic quantitative approach to deciding how much additional quantification is useful, if not more fully quantifying the decision.

I'm doing some work on potential improvements to the scientific research system, and after reading this post I'm thinking I should try to apply this framework to specific funding agencies and other meta-organizations in the research system. Do you have any further thoughts since posting this regarding how difficult vs valuable it is to attempt quantification of the values? Approximately how time-consuming is such work in your experience?

What would better science look like?

Addition: When reading your post Should marginal longtermist donations support fundamental or intervention research? I realized that we may draw the line a bit differently between applied and fundamental research. The examples you give of fundamental research there (e.g. the drivers of and barriers to public support for animal welfare interventions) seem quite applied to me. When I think of fundamental research, I imagine things more like research on elementary particles or black holes. This difference could explain why we might think differently about whether it's feasible to predict the consequences of fundamental research.

What would better science look like?

I share the view that the ultimate aim should basically be the production of value in a welfarist (and impartial) sense, and that "understanding the universe" can be an important instrumental goal for that ultimate aim. But I think that, as you seem to suggest elsewhere, how much "understanding the universe" helps and whether it instead harms depends on which parts of the universe are being understood, by whom, and in what context (e.g., what other technologies also exist). 

So I wouldn't frame it primarily as exploration vs exploitation, but as trying to predict how useful/harmful a given area of fundamental research, or fundamental research by a given actor, will be. And, crucially, that prediction need not be solely based on detailed, explicit ideas about what insights and applications might occur and how - it can also incorporate things like reference class forecasting.

My thought is that the exploration vs exploitation issue remains, even if we also attempt to favour the areas where progress would be most beneficial. I am not really convinced that it's possible to make very good predictions about the consequences of new discoveries in fundamental research. I don't have a strong position/belief regarding this, but I'm somewhat skeptical.

Thanks for the reading suggestions, I will be sure to check them out – if you think of any other reading recommendations supporting the feasibility of forecasting consequences of research, I would be very grateful!

And "Steering science too much towards societal gains might be counterproductive as it is difficult to predict the usefulness of new knowledge before it has been obtained" reminds me of critiques that utilitarianism would actually be counterproductive on its own terms, because constantly thinking like a utilitarian would be crippling (or whatever). But if that's true, then utilitarianism just wouldn't recommend constantly thinking like a utilitarian. 

Likewise, if it is the case that "Steering science too much [based on explicitly predictable-in-advance paths to] societal gains might be counterproductive", then a sophisticated approach to achieving societal gains just wouldn't actually recommend doing that.

This is more or less my conclusion in the post, even if I don’t use the same wording. The reason I think it’s worth mentioning potential issues with a (naïve) welfarist focus is that if I were to work on science reform and only mention the utilitarian/welfarist framing, I think this could come across as naïve, or perhaps as opposition to fundamental research, and that would make discussions unnecessarily difficult. I think this is less of a problem on the EA Forum than elsewhere 😊
