Comments (7)



Wow. This is great.

I've been looking for a write-up like this for a long time. And thanks for formatting it so well (sections, subsections, effect sizes, hyperlinks, 250+ references).

It's a bit depressing that so many of the effect sizes for interventions with a strong base of evidence are relatively small. I guess there's part of me that wants a silver bullet, but I know well enough that no such thing exists — at least not broadly across the population. Nonetheless, I'm guessing people could get a lot out of experimenting with and implementing many of these.

I'm looking forward to digging more into this!

Awesome work. I'm hoping this approximates the best one could hope for, but I also suspect there are limitations that are hard to get around: (a) appropriateness of the intervention: how do I know whether, e.g., ACT makes sense for me? Are there indicators, or triggers, likely to make it useful? The more specific the intervention, the more pertinent this seems (in contrast with, e.g., 'exercise'); and (b) the typical psychology study may be a pretty different environment from some ideal case where someone agentically and systematically tries lots of things. To address both of these, I think one would have to become more educated about these 'interventions' in order to think them through and make better decisions. I notice that Ngo's self-paced AGI safety fundamentals curriculum does roughly what I have in mind, for AI safety: curated readings, ideally paired with discussion, sustained over several weeks to give one a baseline for sense-making. Perhaps Effective Self-Help could look into structuring a similar curriculum in the future.

Thank you!

I agree that your points are relevant concerns. :)

I think David Althaus addresses them well in his comment from another Forum post that I cite in the section Suggestions on how to implement our advice on interventions.

I intentionally tried to include details on interventions, when I found them, such that one could get an idea of whether the intervention is suited given one’s personal situation, characteristics or preferences.

But indeed, methodically trying out the recommendations seems like quite a good way to get that knowledge, as I write in the paragraph “So we suggest you adopt an open-minded and curious approach…” from the same section. The suggestion I give there to first try a Multi-component Positive Psychology intervention is inspired by this consideration, and for instance the first book and first online program I recommend for this intervention take this approach (i.e. a multi-week and relatively methodical exploration of some of the main happiness-enhancing strategies).

This is fantastic! Thank you for writing it up! One small point: would it be worth pulling the broad advice up into the summary? You could come away from reading this downgrading the importance of positive relationships and a generally healthy lifestyle.

Thanks Gemma! I’ve slightly rephrased and pulled up the bullet point on the broad recommendations to make it clear that I also think they’re central. :)

[anonymous]
Thank you for this!

But, uh, Best Possible Self sounds awful. Surely you're just setting yourself up for disappointment later on? And from a quick look, it looks like the top study hasn't measured or even considered this?

Just going by anecdote/common sense, sure, I'd expect an exercise like that to cheer me up. And I think there's something to be said for generally feeling mildly optimistic about life - which I think is compatible with acting realistically on important decisions - and for really trying to stay optimistic in a moment of crisis. But spending hours visualizing this perfect future life sounds like a terrible idea.

YMMV but I'm not even sure I'd put this on the list, let alone at the top.

(But genuinely, thank you for this 🙏 Sorry to be one of those commenters who only has one interesting thing to say about your post and it's kinda negative.)

It's an interesting point, but they're just reviewing the evidence... 

A better exercise for avoiding self-deception is 'mental contrasting', in which you first think about achieving your goals, and then about the obstacles that stand in your way and how to overcome them. It may also help with goal attainment, especially in combination with a technique called 'implementation intentions'.[1]

  1. ^

    Wang G, Wang Y and Gai X (2021) A Meta-Analysis of the Effects of Mental Contrasting With Implementation Intentions on Goal Attainment. Front. Psychol. 12:565202. doi: 10.3389/fpsyg.2021.565202
