All of gabriel_wagner's Comments + Replies

8-9 more productive hours per week.

This number sounds suspiciously high to me. Do you have any further details? How long did these effects last? Have you done any comparisons to other interventions with similar people, such as mental health apps, etc.?

6
Inga
4mo
Hi Gabriel, I agree that this number seems surprising at first. You can find a more in-depth analysis in our main end-of-year report post.

This is how we arrived at the number (N=42): To assess productivity, we employed the Work Productivity and Activity Impairment Questionnaire: General Health V2.0 (WPAI:GH, 2015). This helped us quantify the actual number of productive hours gained. We essentially measured the hours worked as well as the productivity during those hours.

The results: Five hours, or 18%, more hours are worked overall (pre-mean=23, post-mean=28), and 57% fewer hours are lost to mental health issues when comparing before and right after the program. Also, within the hours worked, productivity was reported to be 13% less impaired by mental health issues, which is equivalent to 3.6 hours of additional work. This adds up to an overall productivity increase of 8.6 hours, or 37%, per week. This finding is aligned with the larger increase in executive function we observe.
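For concreteness, here is one way the reported figures could combine, under my assumption (not stated in the reply) that the 13% impairment reduction applies to the post-program hours:

```python
# Hypothetical reconstruction of the WPAI-based arithmetic in the reply above.
pre_hours, post_hours = 23, 28

extra_hours = post_hours - pre_hours       # 5 more hours worked per week
regained = round(0.13 * post_hours, 1)     # 13% less impairment of 28 hours ~= 3.6 hours

total_gain = extra_hours + regained        # ~= 8.6 hours
pct_increase = round(100 * total_gain / pre_hours)  # ~= 37% relative to the pre-mean
print(total_gain, pct_increase)
```

This reproduces the headline 8.6 hours / 37% figure, but the exact baselines used for the percentages are my guess.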

Can I ask whether there is a specific reason that you do not put the summary of the findings in this post, but only let people request access to a Google Drive folder?

4
zeshen
1y
I just browsed through it; their reasons for not doing so are also described in a section of the report.

The second point you bring up, on sociology in Germany, is also interesting! I agree that collaborations between researchers who come with slightly different types of expertise could be super valuable.

Do you have any ideas on how to promote it in practice, though? As you say, various incentive structures are not really made for that. I also find that, surprisingly often, researchers would really rather prove why "their" approach is better than try to understand how another approach could help them better understand the world.

All this makes me feel slightly pessimistic^^ But I would be super glad to hear ideas on how to overcome these difficulties. 

Hi Anton, glad to hear that you found this post valuable!

On your first question, I think you could check out the Sinica Podcast. I believe it is one of the sources on China that is quite accessible, but still tries really hard to go below the surface of the issues it covers. Of course, this is just my personal recommendation.

"EA outreach funding has likely generated substantially >>$1B in value"

I would be curious how you came up with that number.

2
Linch
1y
It was a very quick lower bound. From the LT survey a few years ago, basically about ~50% of influences on quality-adjusted work in longtermism were from EA sources (as opposed to individual interests, idiosyncratic non-EA influences, etc.), and of that slice, maybe half is due to things that look like EA outreach or infrastructure (as opposed to, e.g., people hammering away at object-level priorities getting noticed). And then I think about whether I'd a) rather all EAs except one disappear and have $4B more, or b) have $4B less but double the quality-adjusted number of people doing EA work. And I think the answer isn't very close.
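One way to read this back-of-the-envelope lower bound (my reconstruction, not necessarily Linch's exact arithmetic): if roughly half of the influence on longtermist work comes from EA sources, and roughly half of that slice comes from outreach/infrastructure, then outreach accounts for about a quarter of the value of EA work; if that total value exceeds $4B, the outreach slice alone exceeds $1B:

```python
# Hypothetical reconstruction of the quick lower bound in the reply above.
ea_influence_share = 0.5   # share of influence on LT work from EA sources (survey figure)
outreach_share = 0.5       # share of that slice due to outreach/infrastructure (guess)
outreach_fraction = ea_influence_share * outreach_share   # 0.25

value_of_ea_work = 4e9     # illustrative floor implied by the "$4B vs. workforce" comparison
outreach_value = outreach_fraction * value_of_ea_work
print(outreach_value)      # 1e9, i.e. >= $1B
```

Since the $4B comparison was described as "not very close", the true figure would be "substantially >>$1B" on this reading.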

Thanks a lot for writing this down with so much clarity and honesty!

I think I share many of those feelings, but would not have been able to write this.

Something seems a little bit off in this cost-benefit analysis to me. You seem to compare the tiny, tiny cost of delaying one breath to the sizable accumulated impact of 1 billion people doing this for a year. But that is not really helpful for building an intuition: the tiny, tiny cost of delaying one breath will also accumulate if 1 billion people do it for a year.

Of course, it is still possible that the accumulated cost is lower than the accumulated benefit. But in a way, this whole accumulation does not matter. All that matters is whether the per-instance cost is higher than the per-instance benefit.

Nice post!

Do you think a person working on this should also have some basic knowledge of ML? Or might it be better to NOT have that, to have a more "pure" outsider view on the behaviour of the models?

3
Buck
2y
I think that knowing a bit about ML is probably somewhat helpful for this but not very important.

I personally think the risks of these videos are relatively low because they do not mention EA. People who are convinced by the ideas in the jokes might start a Google search and eventually find EA. Those who feel disgusted by the jokes might just think "what an idiot" and stop there. I doubt they would go on to search for what this is all about, find EA, and then try to act against it.

3
Ben_West
2y
Several of the videos are tagged #effectivealtruism and the first video is currently the second highest video on the tag.

Just wanted to let you know that this was super amusing to read (including the hyper-linked content)! Some nostalgia for my time in high school when I was translating this stuff in Ancient Greek class :D

(I have completely no expertise in AI, but this is what I always felt personally confused about)
How are we going to know/measure/judge whether our efforts to prevent AI risks are actually helping? Or how much they are helping? 

Hi Michael, great to hear you are interested in the intersection of EA work and China and have expertise to bring in!

You may be interested in our Slack community; the interest form is here: https://airtable.com/shr4E1GeNid3qEjuZ

2
Michael Kehoe
2y
Thank you Gabriel! I will submit the form you shared now.