[Help please/Updated] Best EA use of $250,000 AUD / $190,000 USD for metascience?

What a great opportunity! I wonder if people at Spark Wave (e.g., Spencer Greenberg), Effective Thesis, or the Happier Lives Institute would have some ideas. All three organizations are aligned with EA and seem to be in the business of improving/applying/conducting social science research.

Also, I have no idea who your advisor is, but I think a lot of advisors would be open to having this kind of conversation (i.e., "Hey, there's this funding opportunity. We're not eligible for it, but I'm wondering if you have any advice..."). [Context: I'm a PhD student in psychology at UPenn.]

If that's not a good option, you could consider asking your advisor (and other academics you respect) if they know about any metascience/open science organizations that are highly effective [without mentioning anything about your relative and their interest in donating].

Also, it's not clear to me if the donor is only interested in metascience or if they would also be open to funding "basic science" projects. "Basic science" is broad enough that I imagine it could open up a lot of alternative paths (many of which might be more explicitly EA-aligned than metascience). Examples include basic scientific research on effective giving, animal advocacy, mental health, AI safety, etc. Do you have a sense of how open to "basic science" your relative is, or was basic science just meant as a synonym for metascience?

Finally, good luck on this! :)

The effect of cash transfers on subjective well-being and mental health

Super exciting work! Sharing a few quick thoughts:

1. I wonder if you've explored some of the reasons for effect size heterogeneity in ways that go beyond formal moderator analyses. In other words, I'd be curious if you have a "rough sense" of why some programs seem to be so much better than others. Is it just random chance? Study design factors? Or could it be that some CT programs are implemented much better than others, and there is a "real" difference between the best CT programs and the average CT programs?

This seems important because, in practice, donors are rarely deciding between funding the "average" CT program or the "average" [something else] program. Instead, they'd ideally want to choose between the "best" CT program and the "best" [something else] program. In other words, when I go to GiveWell, I don't want to know about the "average" malaria program or the "average" CT program-- I want to know the best program in each category & how they compare to each other.

This might become even more important in analyses of other kinds of interventions, where the implementation factors might matter more. For instance, in the psychotherapy literature, I know a lot of people are cautious about making too many generalizations based on "average" effect sizes (which can be weighed down by studies that had poor training procedures, recruited populations that were unlikely to benefit, etc.). 

With this in mind, what do you think is currently the "best" CT program, and how effective is it?
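To make the "random chance vs. real differences" question a bit more concrete, here's a rough sketch of how heterogeneity statistics (Cochran's Q and I²) separate between-study variation from sampling error. The effect sizes and variances below are made up for illustration-- they are not the paper's data:

```python
# Hypothetical effect sizes (Cohen's d) and sampling variances for five
# imaginary CT studies. If I^2 is high, more of the spread in d reflects
# real between-program differences rather than chance.
ds = [0.05, 0.08, 0.10, 0.12, 0.25]        # made-up per-study effect sizes
vs = [0.004, 0.003, 0.005, 0.004, 0.006]   # made-up sampling variances

ws = [1 / v for v in vs]                   # inverse-variance weights
d_bar = sum(w * d for w, d in zip(ws, ds)) / sum(ws)   # pooled estimate
Q = sum(w * (d - d_bar) ** 2 for w, d in zip(ws, ds))  # Cochran's Q
df = len(ds) - 1
I2 = max(0.0, (Q - df) / Q) * 100          # % of variation beyond chance

print(f"pooled d = {d_bar:.3f}, Q = {Q:.2f}, I^2 = {I2:.0f}%")
```

A low I² would suggest the spread between programs is mostly noise; a high I² would suggest some programs really are better than others (though it wouldn't say why-- that's where the implementation-quality question comes in).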


2. I'd be interested in seeing the measures that the studies used to measure life satisfaction, depression, and subjective well-being. 

I'm especially interested in the measurement of life satisfaction. My impression is that the most commonly used life satisfaction measure (this one) might lead to an overestimation of the relationship between CTs and life satisfaction. I think two of the five items could prime people to think more about their material conditions than their "happiness." Items listed below:

  • The conditions of my life are excellent (when people think about "conditions," I think many people might think about material/economic conditions more so than affective/emotional conditions).
  • So far I have gotten the important things I want in life (when people think about things they want, I think many people will consider material/economic things more so than affective/emotional things).

I have no data to suggest that this is true, so I'm very open to being wrong. Maybe these don't prime people toward thinking in material/economic terms at all. But if they do, I think they could inflate the effect size of CT programs on life satisfaction (relative to the effect size that would be found if we used a measure of life satisfaction that was less likely to prime people to think materialistically).


Also, a few minor things I noticed:

1. "The average effect size (Cohen’s d) of 38 CT studies on our composite outcome of MH and SWB is 0.10 standard deviations (SDs) (95% CI: 0.8, 0.13)."

I believe there might be a typo here-- was it supposed to be "0.08, 0.13"?

2. I believe there are two "Figure 5"s-- the forest plot should probably be Figure 6. 


Best of luck with next steps-- looking forward to seeing analyses of other kinds of interventions!

Ask Rethink Priorities Anything (AMA)

What are the things you look for when hiring? What are some skills/experiences that you wish more EA applicants had? What separates the "top 5-10%" of EA applicants from the median applicant?

80k hrs #88 - Response to criticism

Thank you, Denise! I think this gives me a much better sense of some specific parts of the post that may be problematic.  I still don't think this post, on balance, is particularly "bad" discourse (my judgment might be too affected by what I see on other online discussion platforms-- and maybe as I spend more time on the EA forum, I'll raise my standards!). Nonetheless, your comment helped me see where you're coming from.

I'll add that I appreciated that you explained why you downvoted, and it seems like a good norm to me. I think some of the downvotes might just be people who disagree with you. However, I also think some people may be reacting to the way you articulated your explanation. I'll explain what I mean below:

In the first comment, it seemed to me (and others) like you assumed Mark intentionally violated the norms. You also accused him of being unkind and uncurious without offering additional details. 

In the second comment, you linked to the guidelines, but you didn't engage with Mark's claim ("I think this was kind and curious given the context."). This seemed a bit dismissive to me (akin to when people assume that a genuine disagreement is simply due to a lack of information/education on the part of the person they disagree with).

In the third comment (which I upvoted), you explained some specific parts of the post that you found excessively unkind/uncivil. This was the first comment where I started to understand why you downvoted this post.

To me, this might explain why your most recent post has received a lot of upvotes. In terms of "what to make of this," I hope you don't conclude "users should not explain why they downvote." Rather, I wonder if a conclusion like "users should explain why they downvote comments, and they should do so in ways that are kind & curious, ideally supported by specific examples when possible" would be accurate. Of course, the higher the bar to justify a downvote, the fewer people will do it, and I don't think we should always expect downvote-explainers to write up a thorough essay on why they're downvoting. 

Finally, I'll briefly add that upvotes/downvotes are useful metrics, but I wouldn't place too much value on them. I'm guessing that upvotes/downvotes often correspond to "do I agree with this?" rather than "do I think this is a valuable contribution?" Even if your most recent comment had 99 downvotes, I would still find it helpful and appreciate it!

80k hrs #88 - Response to criticism

Thank you for this post, Mark! I appreciate that you included the graph, though I'm not sure how to interpret it. Do you mind explaining what the "recommendation impression advantage" is? (I'm sure you explain this in great detail in your paper, so feel free to ignore me or say "go read the paper" :D).

The main question that pops out for me is "advantage relative to what?" I imagine a lot of people would say "even if YouTube's algorithm is less likely to recommend [conspiracy videos/propaganda/fake news] than [traditional media/videos about cats], it's still a problem! Any amount of recommending [bad stuff that is harmful/dangerous/inaccurate] should not be tolerated!"

What would you say to those people?

80k hrs #88 - Response to criticism

I read this post before I encountered this comment. I didn't recall seeing anything unkind or uncivil. I then re-read the post to see if I missed anything.

I still haven't been able to find anything problematic. In fact, I notice a few things that I really appreciate from Mark. Some of these include:

  • Acknowledging explicitly that he's sometimes rude to his opponents (and explaining why)
  • Acknowledging certain successes of those he disagrees with (e.g., "I'll give this win to Tristan and Roose.")
  • Citing specific actions/quotes when criticizing others (e.g., the quote from the Joe Rogan podcast)
  • Acknowledging criticisms of his own work 

Overall, I found the piece to be thoughtfully written & in alignment with the community guidelines. I'm also relatively new to the forum, though, so please point out if I'm misinterpreting the guidelines.

I'll also add that I appreciate/support the guideline of "approaching disagreements with curiosity" and "aim to explain, not persuade." But I also think that it would be a mistake to overapply these. In some contexts, it makes sense for a writer to "aim to persuade" and approach a disagreement from the standpoint of expertise rather than curiosity. 

Like any post, I'm sure this post could have been written in a way that was more kind/curious/community-normsy. But I'm struggling to see any areas in which this post falls short. I also think "over-correcting" could have harms (e.g., causing people to worry excessively about how to phrase things, deterring people from posting, reducing the clarity of posts, making writers feel like they have to pretend to be super curious when they're actually trying to persuade).

Denise, do you mind pointing out some parts of the post that violate the writing guidelines? (It's not your responsibility, of course, and I fully understand if you don't have time to articulate it. If you do, though, I think I'd find it helpful & it might help me understand the guidelines better.)

Introduction to the Philosophy of Well-Being

Thank you, Michael! I think this hypothetical is useful & makes the topic easier to discuss.

Short question: What do you mean by "user error?" 

Longer version of the question:

Let's assume that I fill out weights for the various categories of desire (e.g., health, wealth, relationships) & my satisfaction in each of those areas.

Then, let's say you erase that experience from my mind, and then you ask me to rate my global life satisfaction.

Let's now assume there was a modest difference between the two ratings. It's not intuitively clear to me why I should prefer judgment #1 to judgment #2. That is, I think it's an open question whether the "desire-based life satisfaction judgment" or the "desire-free life satisfaction judgment" is the more "valid" response.

To me, "user error" could mean several things:

  • The "desire-free" judgment is flawed because the user is not thinking holistically enough or reflecting enough. They are not thinking carefully about what they care about & how those things have actually gone.
  • The "desire-based" judgment is flawed because the list of desires misses some things that the user actually finds important (i.e., it's impossible to create a comprehensive list)
  • The "desire-based" judgment is flawed because the user is not assigning weights properly (i.e., I might report that wealth matters twice as much to my life satisfaction as friendship, but I might be misperceiving my true preferences, which are better reflected in the "desire-free" case).

In other words, if we could eliminate these forms of user error, I would probably agree with you that this distinction is arbitrary. In practice, though, I think these "desire-based" and "desire-free" versions of life satisfaction ought to be considered distinct (though I'd expect them to be modestly correlated). It's also not clear to me that the "desire-based" judgment should be considered better (i.e., more valid). And even if it should be considered better, I think I'd still want to know about the "desire-free" judgment as well.

Furthermore, when making decisions, I would probably want to see both judgments. For example, let's assume:

  • Intervention A improves "desire-based life satisfaction judgments" by 15% and "desire-free life satisfaction judgments" by 5%
  • Intervention B improves "desire-based life satisfaction judgments" by 10% and "desire-free life satisfaction judgments" by 10%
  • Intervention C improves "desire-based life satisfaction judgments" by 15% and "desire-free life satisfaction judgments" by 15%.

I would prefer Intervention C over Intervention A, even though they both improve "desire-based satisfaction judgments" by the same amount. I also think reasonable people would disagree when comparing Intervention A to Intervention B.
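A toy sketch of this comparison (the donor weightings below are hypothetical): depending on how much weight a donor puts on "desire-based" vs. "desire-free" judgments, the A-vs.-B ranking flips, while C dominates both either way.

```python
# Hypothetical improvements (in %) from the three interventions above.
interventions = {
    "A": {"desire_based": 15, "desire_free": 5},
    "B": {"desire_based": 10, "desire_free": 10},
    "C": {"desire_based": 15, "desire_free": 15},
}

def score(name, w_based):
    """Weighted improvement; w_based is the weight on desire-based judgments."""
    i = interventions[name]
    return w_based * i["desire_based"] + (1 - w_based) * i["desire_free"]

for w in (0.3, 0.5, 0.7):  # three hypothetical donor weightings
    ranking = sorted(interventions, key=lambda n: score(n, w), reverse=True)
    print(f"weight on desire-based = {w}: ranking = {ranking}")
```

At a 0.3 weight on desire-based judgments B beats A; at 0.7 the order reverses; C comes out on top at every weighting-- which is why reporting both judgments seems more decision-relevant than collapsing them into one.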

For these reasons, I wonder if it's practically useful to consider "desire-based" and "desire-free" life satisfactions as separate constructs.

EAs working at non-EA organizations: What do you do?

Note: You don't have to follow this structure or answer these questions. The point is just to share information that might be helpful/informative to other EAs!

With that in mind, here are my answers:

Where do you work, and what do you do?

  • I am a PhD student studying psychology at the University of Pennsylvania.

What are things you've worked on that you consider impactful?

  • I'm trying to focus my research on topics that are impactful and neglected (e.g., digital mental health, global mental health).
  • I co-developed a mental health intervention for Kenyan adolescents and tested it in a randomized controlled trial.
  • I've published papers reviewing smartphone apps for depression and anxiety (here and here) and developed a new method for analyzing digital health interventions (here).
  • I developed an online mental health intervention designed to teach skills from CBT and positive psychology in <1 hour. We're currently evaluating it in Kenya, India, and the US.
  • I recently started performing research on promoting effective giving. I've received funding from the EA Meta Fund and from UPenn to support this work. Through the project, we're aiming to evaluate an intervention that applies psychological theories to improve effective giving. We'll also be spreading information about EA to 1k+ people, and much of the funding from the project will be donated to effective charities.

What are a few ways in which you bring EA ideas/mindsets to your current job?

  • I work with many undergraduate students. I try to introduce them to EA concepts (e.g., thinking about importance, neglectedness, and solvability when considering projects) and refer them to EA sources (e.g., 80,000 Hours).
  • Several of these students have changed their independent study projects as a result of learning about EA (mostly to work on the effective giving project mentioned earlier).
  • I've casually mentioned effective altruism to graduate students and professors I work with, many of whom weren't familiar with EA previously. (Bringing this up "casually" has become easier to do now that I'm doing research relating to effective giving.)
  • I've been connecting with members of the EA community who are doing similar work, like members of Spark Wave and the Happier Lives Institute.

Introduction to the Philosophy of Well-Being

Thank you for sharing this post! It's definitely useful to think about different ways of conceptualizing/measuring well-being. Here's one part of the post I wasn't fully convinced by:

"While life satisfaction theories of well-being are usually understood as distinct from desire theories (Haybron, 2016), life satisfaction might instead be taken as an aggregate of one’s global desires: I am satisfied with my life to the extent that it achieves my overall desires about it."

From a measurement perspective, is there evidence suggesting that people's judgments of life satisfaction are highly correlated with their achievement of overall desires? I would guess that life satisfaction (at least the way it's operationalized on Diener's scale) would only correlate modestly with one's appraisal of specific desires.

Measurement aside, I still think it may be important to distinguish between "life satisfaction" (i.e., an individual's subjective appraisal of how well their life is going-- which could be influenced by positive affect, desire fulfillment, or other factors) and "satisfaction of global desires."

The post seems to suggest that "satisfaction of global desires" should be equated with "life satisfaction." I disagree. It seems like having a construct that refers to "an individual's subjective appraisal of their life" is useful, and it seems like people are currently using the term "life satisfaction" to refer to this. Perhaps a new term could be created to refer to "satisfaction of global desires" (for instance, maybe we would call this "objective life satisfaction" as opposed to "subjective life satisfaction", which is what popular life satisfaction scales currently measure).

A new, cause-general career planning process

Ah, I completely missed that paragraph. Thank you for pointing it out, and best of luck as you create more digestible versions! 

After reading the paragraph, I have a few additional thoughts:

  1. I like the idea of a "just the key messages" version that focuses on spreading the ideas rather than why/how to apply them. But I wonder if it'd be even more important to release a version that focuses on the application. My guess is that most EAs who follow 80k hours would benefit more from tools that help them apply these concepts than readings that explain the content to them. My confidence is low, though-- I'm going off of some interactions with EA friends & some general theories of behavior change. What do you think about this assessment (that it's more important to get EAs to apply these concepts in their lives than to explain the key concepts)?
  2. A book seems like a great idea, though I also expect that it'd appeal to the "high-effort" crowd. The more I think about it, the more I think that I really hope some of these become https://www.clearerthinking.org modules :) (in addition to a tool version like the one in the 2017 guide).
  3. I wonder if creating shorter versions might also help you get more feedback, as well as feedback from a different audience. Dismantling the guide into smaller chunks could be helpful for figuring out which parts are most helpful/clear (and perhaps which parts are most worth developing/refining further). Also, if the shorter tools attract a different crowd (i.e., those who aren't as willing to spend days or more making a career plan), the feedback on the "low-effort" version might differ in meaningful ways from the feedback on the high-effort version.

I'm sure there are plenty of initiatives going on at 80k, and I have no idea where "creating new short modules/interactive tools for career planning" would rank on the list. Nonetheless, I think it'd be a valuable idea (potentially more valuable than long guides or "key points" materials that are more informational than applied), and I'd be excited to see/share them if you decide to pursue them.
