Whoops -- I definitely meant my comment as a response to "what content can be cut?" And the section about activities was meant to show how some of the activities in the current fellowship are insufficient (in my view) and to offer suggestions for other kinds of activities.
Regardless of whether we shift to a radically new model or try to revamp the existing structure, I think it'll be useful to dissect the current fellowship to see what content we most want to keep or remove.
Will try to respond to the rest at some point soon, but just wanted to clarify!
TLDR: I agree that content is important, but I don't think the current version of the fellowship emphasizes the right kind of content. I'd like to see more on epistemics/principles and less on specific cause areas. The activities could also be more relevant.
Longer version: I share some of your worries, Mauricio. I think the fellowship (at least the version that Penn EA runs) currently has three kinds of content:
I think we could cut several of the readings about specific cause areas and replace them with more readings/activities about epistemics and "ways of seeing the world." Based on my experience as a facilitator, the readings on principles/epistemics are usually much more valuable than the readings on cause areas. Also, if you get people fired up about the underlying ideas/principles, they're inclined to read a bit about specific cause areas on their own. And I worry a bit about the perception that EA is defined by a core set of cause areas, as opposed to a core set of principles (which lead people toward certain cause areas, though there's a lot of disagreement here, and we should be open to changing cause areas over time).
I also think the exercises in the fellowship could be revamped to be more relevant and applied (e.g., more focus on career planning, independent research skills, red-teaming EA research reports or project proposals, developing agency, and converting beliefs into actions).
Examples of new exercises: "take 1 hour to research a topic you're interested in and write a 5-minute summary" or "spend 15 minutes brainstorming people you could talk to in order to address key uncertainties, then spend 15 minutes reaching out to them." (Note: I brainstormed these in 5 minutes; they're meant to be illustrative rather than polished.)
Thank you, Brendon and jh! A few more thoughts/questions below. Feel free to ignore any that are not super relevant or that would take a very long time to address.
It seems to me like the case for MindEase (whether from an impact perspective or from a return-on-investment perspective) rests on MindEase's ability to attract users.
Can you say more about where the estimate of 1,000,000 active users came from, as well as the 25% chance of success? At first glance, this strikes me as overly optimistic, given that (a) many other well-designed apps have failed to acquire users and (b) it seems really hard to compete with the existing mindfulness apps. (See here, here, and here for papers that go into these points in greater detail.)
Great work, Ben! I appreciate the actionable suggestions & the structure of the post (i.e., summaries at the top and details in the main body). Excited to see the other posts in this series!
One suggestion: I think it would be helpful to distinguish between interventions that help people with poor sleep quality (e.g., people with insomnia) and those that help people with "average" sleep quality (i.e., people who don't have any major sleep problems but are trying to optimize).
In other words: let's assume person A has diagnosable insomnia, and person B has "average" sleep quality but is trying to optimize (e.g., by going from 50th-percentile to 80th-percentile sleep quality). Would you suggest the same intervention for both?
My understanding is that many of the top recommendations have typically been studied for insomnia, but there is much less research supporting their effectiveness for people with ordinary sleep habits who are trying to optimize (epistemic status: pretty uncertain -- I'm not a sleep researcher, but I have talked with a few about this topic).
A few questions:
I think this content is well-written, so I am praising it publicly! (See what I did there?)
Some strategies that I've heard about or found helpful:
Terrific overview! I'll offer some feedback with the hope that some of it may be helpful:
Big Picture Thoughts
Potentially useful points that I didn't see in the report:
Examples of questions/controversies that HLI could address:
I hope that some of this was helpful & I'm looking forward to seeing future reports!
I think the steelman of the neglectedness argument would be something like: "The less neglected a cause is, the less likely it is that we could help the existing movement do its work slightly better."
This is both because (a) it is harder to change the direction of the movement and (b) it is harder to find genuinely meaningful ways to improve the movement.
Regarding (b), I wonder if there are specific limitations of the current War-on-Drugs movement that would match the skills/interests of (some) EAs.
I'd be curious to learn more about the "types" of EAs that might be best-suited for this work, or how the "EA perspective" could enhance ongoing efforts.
As it stands, the case for scale (i.e., the magnitude of the problem) is very clear. However, I think scale is usually the strongest part of most cause area analyses: there are a lot of really big problems, and it's usually not too difficult to articulate how big they are, especially in words rather than models. The role that EAs would play is less clear (as other comments relating to neglectedness have reflected). So, I wonder:
Are there some clear gaps or limitations in the current anti-War-on-Drugs movement that could be filled by EA perspectives/skills? (As an example, one commenter emphasized that global efforts to legalize drugs may be neglected, and EAs with skills/interests related to global advocacy might be especially helpful.)
What a great opportunity! I wonder if people at SparkWave (e.g., Spencer Greenberg), Effective Thesis, or the Happier Lives Institute would have some ideas. All three organizations are aligned with EA and seem to be in the business of improving/applying/conducting social science research.
Also, I have no idea who your advisor is, but I think a lot of advisors would be open to having this kind of conversation (i.e., "Hey, there's this funding opportunity. We're not eligible for it, but I'm wondering if you have any advice..."). [Context: I'm a PhD student in psychology at UPenn.]
If that's not a good option, you could consider asking your advisor (and other academics you respect) if they know about any metascience/open science organizations that are highly effective [without mentioning anything about your relative and their interest in donating].
Also, it's not clear to me if the donor is only interested in metascience or if they would also be open to funding "basic science" projects. "Basic science" is broad enough that I imagine it could open up a lot of alternative paths (many of which might be more explicitly EA-aligned than metascience). Examples include basic scientific research on effective giving, animal advocacy, mental health, AI safety, etc. Do you have a sense of how open to "basic science" your relative is, or was "basic science" just meant as a synonym for metascience?
Finally, good luck on this! :)