PeterBrietbart

523 karma · Joined Sep 2014

Comments (8)

Hi GWWC folks! Just wanted to extend a hearty thanks to you on behalf of the Happier Lives Institute. We appreciate that you looked into us and we respect your reasons for wanting to come back to us later. 

Naturally, we're doing our best to make this work out-of-date, make your concerns obsolete, and give you reasons to review our output. We've just dropped our updated psychotherapy report and 2023 giving season recommendations (which keeps StrongMinds and adds AMF!), and I hope you'll enjoy both.

Yep, exactly. Nothing set in stone, but an 'official' account for top-level posts and then named-person accounts for individual views/commentary seems decent.

I'd personally like that, as it summarised only 4 of the 7 points we made.

Hey Nathan! 

Appreciate the comment, and totally agreed on your first paragraph. We'll continue to post updates as the work progresses, and we'll welcome feedback and comments as we go.

On the second paragraph, fear not. Lara wasn't brought on to interface with the community in our stead, but to provide comms wisdom and support. We've been discussing creating a formal HLI account through which we'd do future posting, but that would still be us at the keys.

Agreed. There are a lot of harrowing claims in the piece, but this one had me go "What the fuck" out loud.

This is really great, and I'm subscribing. 

One bit of feedback is that a summary of the kind

Animal Welfare Fund: April 2022 grant recommendations

Summary & review of grants made in March / April 2022.

doesn't add value -- I'd have hoped to see a bulleted list of what the grants were.

Otherwise great work, and thank you Zoe! 

Thanks for the detailed post! Public posts on projects as personal as this can be a bit scary to write, and I really appreciate the openness and detail.

As someone who has known him for a fair while, one thing I think this post doesn't quite do justice to is how genuinely lovely and helpful Tee is. Coaching - like therapy - gets a lot of its efficacy from the relationship between coach and coachee, so I'm not surprised by the positive feedback showcased above.

Tee is unlikely to comment on how great he is, so I'm going to do it here instead :)

Hey Gregory,

Thanks for the in-depth response.

As I'm sure you are aware, this post had the goal of making people in the EA community aware of what we are working on, and why we are working on it, rather than attempting to provide rigorous proof of the effectiveness of our interventions.

One important thing to note is that we’re not aiming to treat long-term anxiety, but rather to treat the acute symptoms of anxiety and help people feel better quickly at the moments when they need it. We measure anxiety immediately before the intervention and again immediately after it, using the same three Likert-scale questions both times. At this point we have run studies testing many techniques for quickly reducing acute anxiety, so we know that some work much better than others.

I’ve updated the post with some edits and extra footnotes in response to your feedback. Here are some point-by-point responses:

How are you recruiting the users? Mturk? Positly?

We recruit paid participants for our studies via Positly.com (which pulls from Mechanical Turk while automatically applying extra quality measures and providing us with extra researcher-focused features). Depending on the goals of a study, we sometimes recruit broadly (from anyone who wants to participate) and other times specifically seek to recruit people with high levels of anxiety.

Are the "250 uses" 250 individuals each using Mindease once? If not, what's the distribution of duplicates?

This data is from 49 paid study participants who each used the app about 5 times on average, over a period of about 5 days (at whatever times they chose).

This particular study targeted users who experience at least some anxiety.

Does "250 uses" include everyone who fired up the app, or only those who 'finished' the exercise (and presumably filled out the post-exposure assessment)?

It's based only on the people who completed an intervention (i.e. where we had both a pre and a post measurement).

Is this a pre-post result? Or is this vs. the sham control mentioned later? (If so, what is the effect size on the sham control?)

This is a pre-post result. In one of our earlier studies we found the effectiveness of the interventions to be about 2x - 2.5x that of the control (13-17 "points" of pre-post mood change versus about 7 for the control). We've changed a lot about our methodology and interventions though, and don't have measurements for the control yet with the new changes.

If pre-post, is the post-exposure assessment immediately subsequent to the intervention?

Yes. Our goal is to have the user be much calmer by the time they finish the intervention than they were when they started.

"reduces anxiety by 51%" on what metric? (Playing with the app suggests 5-level Likert scales?)

We use the negative feelings (not any positive feelings) reported on the three Likert-scale questions. People who report no negative feelings at the beginning of the intervention are excluded from the analysis, since there are no reported negative feelings that the intervention could remove.

Ditto 'feels better' (measured how?)

The 80% success rate refers to the proportion of uses in which a user’s negative feelings are reduced by any amount.
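
To make those two numbers concrete, here's a minimal sketch of how they can be computed. It assumes the three Likert negative-feeling scores are summed per use and that "reduction" means (pre − post) / pre; the function name, aggregation choices, and example data are illustrative only, not our actual analysis code.

```python
# Sketch of the two summary metrics, under the assumptions stated above.

def summarise_uses(uses):
    """uses: list of (pre_scores, post_scores) pairs, each a list of three
    Likert responses for negative feelings (e.g. 0-4)."""
    reductions = []   # per-use fractional reduction in negative feelings
    improved = 0      # uses where negative feelings went down at all
    analysed = 0      # uses with some negative feeling at baseline

    for pre_scores, post_scores in uses:
        pre = sum(pre_scores)
        post = sum(post_scores)
        if pre == 0:
            continue  # no negative feelings to remove; excluded from analysis
        analysed += 1
        reductions.append((pre - post) / pre)
        if post < pre:
            improved += 1  # "feels better": any reduction counts as a success

    if analysed == 0:
        return float("nan"), float("nan")  # no analysable uses

    mean_reduction = sum(reductions) / analysed  # e.g. ~0.51 -> "reduces anxiety by 51%"
    success_rate = improved / analysed           # e.g. ~0.80 -> "80% feel better"
    return mean_reduction, success_rate


# Example: one use with a clear drop, one with a small drop,
# and one excluded because there was no baseline negativity.
print(summarise_uses([
    ([3, 2, 2], [1, 1, 0]),
    ([2, 1, 1], [2, 1, 0]),
    ([0, 0, 0], [0, 0, 0]),
]))
```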

And thank you for telling me your honest reaction; your feedback has helped improve the post.