3395 karma · Joined Apr 2018 · Łódź, Poland


Karolina is Co-founder and Co-Executive Director at Charity Entrepreneurship. She also serves as a Fund Manager at the EA Animal Welfare Fund, and as a board member and consultant for various EA nonprofits and think tanks.

LinkedIn: https://www.linkedin.com/in/karolinasarek/




Thanks! Can you tell me more about why you think improving dissolved oxygen is not a good idea? I still consider poor dissolved oxygen to be a major welfare problem for fish in the setting where the charity is expected to operate, and improving it through various means (assuming we also keep stocking density constant or decrease it) would be good for their welfare. This has been validated in the field by FWI in this assessment and studied by others, so I'm a bit surprised. Unless you are referring to specific interventions to improve dissolved oxygen, whose cost-effectiveness I am highly uncertain about.

As for the report you link, I broadly agree and have written about it below.

[previous comment is deleted, because I accidentally sent an unfinished one]

Thanks for the example! That makes sense and makes me wonder if part of the disagreement came from thinking about different reference classes. I agree that, in general, the research we did in our first year of operations, so 2018/2019, is well below the quality standard we expect of ourselves now, or what we expected of ourselves even in 2020. I agree it is easy to find a lot of errors (that weren't decision-relevant) in our research from that year. That is part of the reason they are not on the website anymore.

That being said, I still broadly support our decision not to spend more time on research that year, because spending more time on it would have come with a significant tradeoff. At the time, there was no other organization whose research we could have relied on, and the alternative to the assessment you mention was either not to compare interventions across species (or to reduce the comparison to a simplistic metric like "the number of animals affected"), or to spend more time on research and run the Incubation Program a year later, in which case we would have lost a year of impact and might not have started the charities we did. That would have been a big loss: for example, that year we incubated Suvita, whose impact and promise were recently recognized by GiveWell, which provided Suvita with $3.3M to scale up, and we incubated Fish Welfare Initiative (FWI) and Animal Advocacy Careers, decisions I still consider to be good ones (FWI is an ACE Recommended Charity, and even though I agree with its co-founders that their impact could be higher, I'm glad they exist). We also couldn't simply hire more staff and do things more in-depth, because it was our first year of operation and there was not enough funding or other resources available for what was, at the time, an unproven project.

I wouldn't want to have spent more time on that, especially because one of the main principles of our research is "decision-relevance," and the "wild bug" one-pager you mention, and similar ones, were not decision-relevant. If they had been, we would not have settled for something of that quality, and we would have put more time into them.

For what it is worth, I think there are things we could have done better. Specifically, we could have put more effort into communicating how little weight others should put on some of that research. We did that by stating at the top (for example, in the wild bug one-pager you link) that "these reports were 1-5 hours time-limited, depending on the animal, and thus are not fully comprehensive," and at the time we thought that was sufficient. But we could have stressed the epistemic status even more strongly, and in more places, so that it was clear to others that we put very little weight on it. For full transparency, we also made another mistake: we didn't recommend working on banning/reducing bait fish as an idea at the time, because from our shallow research it looked less promising; later, upon researching it more in-depth, we decided to recommend it. It wouldn't have made a difference then, because there were not enough potential co-founders in year 1 to start more charities, but it was a mistake nevertheless.

Thanks for clarifying! We always have an expert view section in the report and often consult animal science specialists, but it is possible we missed something. Could you tell me where specifically we made a mistake regarding animal science that could have changed the recommendation? I want to look into it, fact-check it, and, if it holds up, avoid making this mistake in the future.

2. CE's charities working on animal welfare have mostly not been very good, and listening to external feedback prior to launching them would have told them this would happen.

[...] doesn't do CE's original proposed idea anymore

On the point of the charities not doing CE's originally proposed idea anymore, I want to clarify that we don't see charities tweaking an idea as a failure, but rather as the expected course of action we encourage. We are aware of the limitations of desktop research (however in-depth), and we encourage organizations to quickly update based on country visits, interactions with stakeholders, and pilot programs they run. There is just some information that a researcher wouldn't be able to get and that needs input from someone working on the ground. For example, when Rethink Priorities was writing their report on shrimp welfare, they consulted SWP extensively to gain that perspective. Because CE charities operate in extremely neglected cause areas, there is often no other "implementer" our research team can rely on. Therefore, organizations are usually expected to change the idea as they learn in their first months of operations. I see this as a success in ingraining the values of changing one's mind in the face of new evidence, seeking this evidence, and making good decisions on the side of co-founders with the support of their CE mentors, and we are happy when we see it happen.
There is a complex trade-off to be made when balancing the learning value from more in-depth desktop research vs. more time spent on learning as one implements, and I don't think CE always gets it right, but the latter perspective is often misunderstood and underappreciated in the EA space. 

Regarding charities specifically, in general, we expect about a 2/5 "hit rate" (rarely because the broad idea is bad, more often because the implementation is challenging for one reason or another), and many people, including external charity evaluators and funders, have a different assessment of some of the charities you list. That being said, if you have any specific feedback about the incubated organizations' strategies or ideas, please reach out to them. As you mentioned, they are open to hearing input and feedback. Similarly, if you have specific suggestions about how CE can improve its recommendations, please get in touch with our Director of Research at sam@charityentrepreneurship.com; we appreciate specific feedback and conversation about how we can improve. Thank you for your support of multiple CE charities so far!

Hi Cecilia! 

We offer up to 2,000 USD per month for the duration of the program. 

The amount may vary from person to person, ranging from participants who choose not to take a stipend (e.g., those who would take paid time off from work to attend the program) to those who take the maximum amount (because they have to quit their job to attend).

If you think that amount would not be sufficient to cover your cost of living, please contact us, and we can discuss this on a case-by-case basis. 

I agree (of course ;) ), and that's what we've noticed as well. In particular, there are some crucial research skills that are not being taught elsewhere but are commonly used in EA, or whenever one aims to have a significant impact: for example, prioritization research, cost-effectiveness calculations at different levels of depth, issues of moral weights, etc. We aim to address this gap as well as provide training in generalizable research skills, for example literature reviews. If you know people who are interested in such a training program, feel free to send them information about it. We would love to see applications from them.
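To give a flavor of the simplest level of depth for cost-effectiveness calculations, here is a minimal back-of-the-envelope sketch. All intervention names and figures below are hypothetical placeholders for illustration, not taken from any CE report:

```python
# Back-of-the-envelope cost-effectiveness comparison.
# All names and numbers are made-up placeholders, for illustration only.

def cost_per_outcome(total_cost_usd: float, outcomes: float) -> float:
    """Cost to produce one unit of outcome (e.g. one animal helped)."""
    return total_cost_usd / outcomes

# Hypothetical interventions: (name, annual cost in USD, animals helped per year)
interventions = [
    ("Intervention A", 100_000, 2_000_000),
    ("Intervention B", 250_000, 1_000_000),
]

# Rank from most to least cost-effective (lowest cost per outcome first).
ranked = sorted(interventions, key=lambda i: cost_per_outcome(i[1], i[2]))

for name, cost, outcomes in ranked:
    print(f"{name}: ${cost_per_outcome(cost, outcomes):.3f} per animal")
# Intervention A: $0.050 per animal
# Intervention B: $0.250 per animal
```

A deeper pass would replace the point estimates with uncertainty ranges and add discounts for counterfactual impact and probability of success, which is where moral weights and the harder judgment calls come in.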

We are still finalizing the list as some ideas come from CE (promising ideas we didn’t have the capacity to research in their respective years), but others will come from foundations interested in research that could affect their decisions. Some of the ideas will also be developed during the program as part of learning how to do idea prioritization, and some may come from other partner organizations.

If it helps, we expect that ideas may come from many of the cause areas CE is focusing on, such as global health and development, biosecurity/health security, governance and health governance, and farmed animal welfare. In the majority of those areas, we have a mixture of direct delivery, policy, and meta ideas. But the ideas may go beyond that as well, depending on input from partner organizations and foundations.

Thanks Shakeel! I think a big part of the program's value would come from applying the learned skills to practical projects and getting a lot of feedback and guidance from expert researchers while doing so. With those sorts of skills, it is best to lean into learning by doing. That makes it somewhat harder to produce "public goods" material that would bring a similar amount of value as the program itself.

That being said, we are planning to write a research handbook similar to the one we have for the charity incubation program. We probably won’t publish version 1 developed for the upcoming program in October, but we hope to publish version 2 made next year. :) 

Thanks Vaidehi! We have established two Theories of Change (ToCs): an initial ToC for the first, pilot program and another for the program's long-term implementation. With the ambitious goal of piloting this program this year, our priority is to ensure its high value before any potential scaling up, hence the difference between our short-term and long-term ToCs. We are focusing on getting the program up and running, but I will be happy to share the ToC diagram once we are done with the outreach and vetting sprint.

About the expected roles, our curriculum for the upcoming October program is designed to prepare participants for roles in a) grantmaking organizations, b) direct charities in need of research staff, and c) research or evaluation organizations. Training will encompass generalizable research skills, intervention prioritization research, in-depth exploration of specific problem areas and potential interventions within them, as well as conducting external evaluations of charities.

In the longer term, we may expand the variety of research career tracks available.

We also want the research conducted during the program to be immediately applicable and informative for organizational decision-making. So training is not the only output in our ToC, and the other one is producing and disseminating decision-relevant intervention reports and charity evaluations.

I also wanted to quickly check it out while in the office. I played the first 1.5 minutes, and it already moved me to tears. I'll have to wait until I get home to watch the whole movie. Thank you so much, Aaron, for sharing it!
