Falk Lieder

Research Group Leader @ Max Planck Institute for Intelligent Systems, Tübingen
Working (6-15 years of experience)

Bio

I lead the Rationality Enhancement Group at the MPI for Intelligent Systems in Tübingen and will join the psychology department of UCLA as an Assistant Professor in July 2023.

I completed my PhD in the Computational Cognitive Science Lab at UC Berkeley in 2013, obtained a master's degree in Neural Systems and Computation from ETH Zurich, and earned two simultaneous bachelor's degrees in Cognitive Science and Mathematics/Computer Science at the University of Osnabrück.

Comments

Thank you for engaging with and critiquing the cost-effectiveness analysis, Michael! There seem to be a few misunderstandings I would like to correct.

The CEE in the linked Guesstimate looks optimistic to the point of being impossible. Given the quoted numbers of 32 acts of kindness per day with each act producing an average of 0.7 happy hours, that's 22 happy hours produced per person-day of acts of kindness. If you said people's acts of kindness increased overall happiness by 10%, I'd say that sounds too high. If you say it produces 22 happy hours, when the average person is only awake for 17 hours...well that's not even possible.

The value you calculated is the sum of the additional happiness of all the people the person was kind to. That includes everyone they interacted with that day in any way: the strangers they smiled at, the friends they messaged, the colleagues they helped at work, the customers they served, their children, their partner, and their parents and other family members. If you consider that the benefit of the kindness might be distributed over more than a dozen people, then 22 hours of happiness might amount to no more than 1-2 hours per person. Moreover, the estimates also take into account that a person who benefits from your kindness today might still be slightly happier tomorrow.
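To make the arithmetic concrete, here is a quick sketch. The acts-per-day and hours-per-act figures come from the discussion above; the range of recipients per day is my own illustrative assumption, not a number from the Guesstimate model:

```python
# Back-of-the-envelope check: distributing the estimated daily happiness
# across everyone a person interacts with in a day.
acts_per_day = 32            # acts of kindness per day (from the model)
happy_hours_per_act = 0.7    # average happy hours produced per act
total_happy_hours = acts_per_day * happy_hours_per_act  # 22.4 hours

# Illustrative assumption: those acts reach roughly 12-20 distinct people.
for recipients in (12, 16, 20):
    per_person = total_happy_hours / recipients
    print(f"{recipients} recipients -> {per_person:.1f} happy hours each")
```

Under these assumptions, the per-person benefit lands in the 1-2 hour range, which is well within what a recipient's waking day can accommodate.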

I am also very skeptical of the reported claim that a one-time intervention of "watching an elevating video, enacting prosocial behaviors, and reflecting on how those behaviors relate to one’s value" (Baumsteiger 2019) can produce an average of 1600 additional acts of kindness per person. That number sounds about 1000x too high to me.

The intervention by Baumsteiger (2019) was a multi-session program that lasted 12 days and involved planning, performing, and documenting one's prosocial behavior for 10 days in a row. The effect size distribution in the Guesstimate model is based on many different studies, some of which were even more intensive.


In general, psych studies are infamous for reporting impossibly massive effects and then failing to replicate. 

Most of the estimates are based on meta-analyses of many studies. The results of meta-analyses are substantially more robust and reliable than the results of a single study.


I think you are right that this first estimate was too optimistic. In particular, the probability distribution of the frequency of prosocial behavior is currently based on four estimates from different studies. One of those studies led to an estimate that appears to be far too high. This might be because they defined prosocial behavior more liberally because it involved interactions with children, or because participants knew that they were being observed. I will think about what the more general problem might be and how it can be addressed systematically. 

Thank you, Vasco!

I agree that the optimal percentage of research funding is higher for areas where less science and R&D have been done so far, and lower for areas where more has been done. We don't yet know which of the simulated scenarios different areas correspond to. I think establishing this correspondence will be a crucial next step in our project. Moreover, some topics and potential interventions within a broad cause area, such as global health and development, might have been researched much less than others. Therefore, it probably makes the most sense to apply our analysis at the level of specific research topics or interventions.

Thank you for pointing out that the amount of available resources can change. Contrary to your intuition, I suspect that taking this into account would tilt the analysis in favor of even more research, provided the amount of funding an area receives depends on the cost-effectiveness and scalability of its best intervention. Successful research results in new interventions that are more cost-effective or more scalable than the best previous ones. This can significantly increase how much the cause area appeals to the EA community and how much funding it can absorb. Suppose research in a cause area without any highly cost-effective interventions leads to the development of an intervention that is more cost-effective than the best interventions in any other area. That would probably increase the amount of money donated to the cause. Or suppose that the research makes a highly effective intervention much more scalable. That would likely increase the amount of money donated to the corresponding cause as well.

The recommendation of 50% already takes into account that better opportunities will be available in the future. It means that the total amount of money invested into research, summed across time and funding agencies, should be roughly equal to the total amount that has been or will be invested into existing interventions. This global, long-term 50-50 split can be achieved in many ways. One or more EA funding agencies temporarily investing much more than 50% into research could be a good way to implement it.
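A small sketch may help illustrate how a temporary overinvestment is compatible with the global 50-50 split. The budget schedule below is entirely hypothetical and not from the analysis:

```python
# Illustrative sketch (hypothetical numbers): a funder can spend far more
# than 50% on research early on and still hit a 50-50 split overall.
# Each entry is (research_share, total_budget_in_$M) for one period.
schedule = [
    (0.9, 100),  # early years: mostly research
    (0.8, 100),
    (0.5, 100),
    (0.2, 100),  # later years: mostly funding the interventions found
    (0.1, 100),
]
total = sum(budget for _, budget in schedule)
research = sum(share * budget for share, budget in schedule)
interventions = total - research
print(f"research ${research:.0f}M, interventions ${interventions:.0f}M")
```

With this front-loaded schedule, cumulative research and intervention spending each come out to $250M, i.e., an exact 50-50 split in total even though no single year is split 50-50.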

Thank you, Stuart! Your post strongly resonated with me. :) I agree that EA funding agencies should develop farsighted strategic agendas for the entire process of generating and utilizing knowledge and innovations that are crucial for humanity's long-term survival and flourishing. I think this process should start with use-inspired basic research on crucial questions. The next step should be to translate the discoveries of that research into new interventions. Those interventions should then be tested at an increasingly larger scale and rigorously compared against the best existing interventions in terms of their cost-effectiveness. Once we have completed those steps to generate better interventions and knowledge about their effectiveness, we can exploit having a substantially improved set of opportunities to do good. A single program of a single funding agency could support all of these steps and thereby coordinate and guide the process of discovery, intervention development, and evaluation research towards what we most need to maximally improve the future of humanity.

This farsighted strategic approach is not unheard of. The Development Innovation Ventures program and the Grand Challenges for Human Flourishing program are steps in that direction. I think EA funding agencies could learn something from the design of those programs and combine their strategic approach with the latest insights and principles of effective altruism and good scientific practice. I think that would be extremely valuable.

Thank you for your insightful comments, Marshall!

  1. The simulations do not distinguish between scientific research and R&D projects outside of academia. The relative usefulness of these two types of research is beyond the scope of the model. The main assumption of the simulations is that the research projects are selected strategically for their potential to enable or produce more cost-effective interventions.
  2. I agree that the assumption about the cost-effectiveness of new interventions can and should be validated empirically. Estimating it from historical data is an important direction for future work, and I am planning to pursue it. I think the expected cost-effectiveness will be vastly different depending on the extent to which the research builds on established knowledge and techniques. In the extreme case of refining the best existing intervention, the expected cost-effectiveness of the new intervention would definitely be more than 50% of that of the best existing intervention.

I think your estimate of how costly it would be to run a replication study is too pessimistic. In addition to the issues that MHR identified, it strikes me as unrealistic that the cost of rerunning the data collection would be more than 10,000 times as high as the cost of the original research project. I think this is highly unlikely because data collection usually accounts for at most 10% of the cost of research. Moreover, the cost of data collection does not scale linearly with the number of participants, but with the number of researchers who are paid to coordinate it. The most difficult parts of organizing data collection, such as developing the strategy and establishing contact with high-ranking relevant officials, only have to be done once.

There are also economies of scale: once you can collect data from one school, it takes very little effort to replicate the process with 100 or 1,000 schools, and that work can then be done by local volunteers with minimal training, for minimal pay or free of charge. It certainly won't require 10,000 times as many professors, postdocs, and graduate students as the original study, and it is almost exclusively the salaries of those people that make research expensive. On the contrary, collecting more data for an already designed study with an existing data analysis pipeline requires minimal work from the scientists themselves, which makes it much less expensive. Therefore, I think the cost of data collection was probably only about 10% of the cost of the research project and would scale only logarithmically with the sample size. Based on that line of reasoning, I believe the replication study could be conducted for one or a few million dollars.
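The cost argument above can be sketched numerically. All numbers here are illustrative assumptions of my own (the original study's cost, its sample size, and the replication's sample size are made up), not figures from the studies under discussion:

```python
import math

# Rough cost sketch under the assumptions in the comment:
# data collection is ~10% of the original study's cost and scales
# roughly logarithmically with the number of participants, because the
# design, analysis pipeline, and institutional contacts are reusable.
original_cost = 10e6          # hypothetical original study cost, $
data_collection_share = 0.10  # assumed share spent on data collection
original_n = 1_000            # hypothetical original sample size
replication_n = 100_000       # hypothetical much larger replication

base_data_cost = data_collection_share * original_cost
# Logarithmic scaling assumption: cost grows with log of sample size.
scaling = math.log(replication_n) / math.log(original_n)
replication_cost = base_data_cost * scaling
print(f"estimated replication cost: ${replication_cost / 1e6:.1f}M")
```

Even with a 100x larger sample, the logarithmic-scaling assumption puts the replication at roughly $1.7M here, i.e., in the "one or a few million dollars" range, rather than thousands of times the original cost.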

Here is my post on the proof-of-concept that this approach can be applied to predict the cost-effectiveness of funding more research on a specific topic: https://forum.effectivealtruism.org/posts/aRFCrJaozrHearPAh/doing-research-on-promoting-prosocial-behavior-might-be-100

Thank you, Peter! I am working on a proof-of-concept showing that this approach can be used to identify promising research topics and to choose between specific projects. I am planning to post about it next week. I will keep you posted. 

Are you willing to share a list of some or all of the regranting programs that were funded in the first round of the FTX Future Fund?

The submission form keeps telling me "Your response is too large. Try shortening some answers." even though the total number of characters is significantly lower than 25,000. What should I do? Would you like me to share a link to a PDF with our answers to all questions or upload a single PDF with all of that information?
