
Rina

148 karma · Joined Oct 2022

Comments (16)

That's awesome! Great work, and thank you for sharing.

At an event recently, someone gave the example of 'we shouldn't walk on grass because we might kill ants,' and initial reactions were to sort of scoff, but after the person made his case it was clear he had thought about it a lot. I remember the example as illustrating that tension between initial, somewhat emotional reactions and then really sitting down and thinking it through.

I'm not sure where I fall on Demodex mites, since sentience and the ability to feel pain are the big criteria for me, but this was still a cool text to use as a reflection!

Super interesting read. Could I ask, out of general curiosity, how many hours of work went into this? I know you've called it a shallow report, and I see you point to areas where more digging can be done, but it already has a lot of depth!

My general sense is that a lot of policy advocacy projects look really good in CEAs because the scope tends to be large, but few properly discount for likelihood of success or, as you suggest, for actual lobbying costs over time and the relevance, frequency, and uptake of the regulations.

Rina · 1y

Thanks for this post! I always appreciate the transparency and lucidity HLI aims to provide in its posts. The advocacy for a wellbeing view is much needed.

Could I add on to Nick's comment and ask for clarification about including "Any form of face-to-face psychotherapy delivered to groups or by non-specialists deployed in LMICs"? It seems from your Appendix B that the studies incorporated in the meta-regression include a lot of individually delivered interventions. Do you still use them, and if so, are they treated any differently? (https://www.happierlivesinstitute.org/report/psychotherapy-cost-effectiveness-analysis/)

I was also curious how relevant you think these populations are, again looking at Appendix B, given one of Simon's critiques about social desirability, which I understand to be essentially this: StrongMinds recruits women from the general population who meet a certain threshold of depressive symptoms, but some women report higher-level symptomatology than they really have in order to participate (e.g. under the mistaken assumption that they might receive cash transfers). This kind of generally recruited and potentially partially biased sample seems a little different from a sample of women survivors of torture/violence/sexual assault in post-conflict settings, of which you have a number of RCTs. Are there baseline mental health scores for all these samples that you could look at? (I'm assuming you haven't yet, based on the paragraph on page 26 starting 'The populations studied in the RCTs we synthesize vary considerably...')

Rina · 1y

Thanks for writing this; I think it's really important and well expressed. Diversity of thought and experience should be valued.

I will say, as a woman who was skeptical about her personal fit for forecasting, that I found playing around on Manifold Markets and trying to make some forecasts helpful, and the community there is friendly. So if anyone wants an easy way to try out for themselves what forecasting might look like, head to: https://manifold.markets/

Rina · 1y

This post has made me realize that it's pretty hard to quickly find information about recommended charities that includes the number of interventions assessed, the sample size, and a note on evidence quality, something like 'this comes from a well-conducted RCT' or 'this was pre-post data with no randomization.' I'd expect this in a summary or overview-style presentation, though I'm not sure how valuable it would be for everyone. For me personally it is valuable, and it's something I would use to be more tentative about giving, or to give less, where the evidence is limited.

Rina · 1y

Just to clarify, Berk has deleted his entire Twitter profile rather than these specific tweets. It will be interesting to see the results from the upcoming RCT.

Thanks for writing this; it's also something that's been on my mind, with some degree of uncertainty.

My confidence in a lot of these reports would increase if I could see the peer review comments and responses, or if the research (in part or in full) were otherwise published in a peer-reviewed academic journal. I know a lot of Open Phil reports commission external peer review, and if I recall correctly, the climate change report was also peer reviewed. At the same time, some of the comments in that thread implied reviewers had a lot of disagreements, and it's hard to say how much of the feedback was responded to. To be clear, you don't have to agree with every peer review comment, but seeing the responses would increase my confidence.

I'm still left with the impression that most work within EA isn't externally reviewed.

I wonder if some of the recent public prize contests, like Open Phil's cause prioritization one and GiveWell's Change Our Mind contest, somewhat fit here. More so for Open Phil, but I never found or got a sense of how entries were assessed or ranked. What were the criteria? This makes me doubt how we assess and value expertise and research rigor.

Thanks for your perspective. I think CE's application process is a great example of how to do tiered filtering, with different tasks varying in length and topic. And I think it's valuable because you get some feedback. Correct me if I'm wrong, but CERI/CHERI/SERI have not provided feedback before?

My low-confidence hunch is that for applicants who are self-reflective, like you, there is value in the work even without feedback, perhaps in the way that you describe, but at the same time, with a pool of 650 people, there's also a productivity loss.

Say there's at least a bottom 20% of applicants who are less self-reflective, or less familiar with x-risk or even with writing applications, and who get no feedback. That's about 130 people; call it an hour each, and that's 130 hours of lost productivity. Had the application been shorter, say 20 minutes, that would mean roughly 43 hours spent, or about 87 hours saved. There's time sunk into reviewing applications as well. I know people who have spent upwards of 5 hours on their applications for these fellowships and were then frustrated not to know how they did or what was missing. I'm not sure how representative an experience that is, of course.
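To spell out the back-of-the-envelope arithmetic, assuming the 650-person pool and the 20% figure above:

$0.20 \times 650 = 130$ applicants; at one hour each, $130 \times 1\,\mathrm{h} = 130\,\mathrm{h}$; at 20 minutes each, $130 \times \tfrac{20}{60}\,\mathrm{h} \approx 43\,\mathrm{h}$; so roughly $130 - 43 \approx 87\,\mathrm{h}$ saved.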

The other side that feels underutilized to me is 'near miss' candidates: you didn't get a spot, but you were near the top. It feels like a real loss for them to get no feedback at all.
