Thanks for posting this, I have a few questions.
Do you have any other metrics besides visiting the website? Is there a link such as "learn more about veganism" that you can track?
Besides anecdotes, do you have evidence/data that the "dog meat" intervention works better than other interventions?
I do worry that while shock value may work for some people, it could push other people further away from veganism (especially if they felt deceived). But, I am unsure how serious (or important) this concern is.
I'm also impressed by this post. HLI's work has definitely shifted my priors on wellbeing interventions.
...We strive to be maximally philosophically and empirically rigorous. For instance, our meta-analysis of cash transfers has since been published in a top academic journal. We’ve shown how important philosophy is for comparing life-improving against life-extending interventions. We’ve won prizes: our report re-analysing deworming led GiveWell to start their “Change Our Mind” competition. Open Philanthropy awarded us money in their Cause...
The reference classes I look at generate a prior for AGI control over current human resources anywhere between 5% and 60% (mean of ~16-26%).
Thanks for this Zach. I found it quite thought provoking, especially the quoted sentence.
Based on your model, AGI controlling human resources is much more likely to occur than extinction. Given that, which events do you think we should be most worried about when it comes to losing autonomy over resources (and potentially institutions)? And are you more concerned about that after doing this work?
I take 5%-60% as an estimate of how much of human civilization's future value will depend on what AI systems do, but it does not necessarily exclude human autonomy. If humans determine what AI systems do with the resources they acquire and the actions they take, then AI could be extremely important, and humans would still retain autonomy.
I don't think this really left me more or less concerned about losing autonomy over resources. It does feel like this exercise made it starker that there's a large chance of AI reshaping the world beyond human extinction. ...
Participants donated their own money. They received a bonus of £1 and could choose how much of it they wanted to keep or donate.
This is a fantastic idea. Congratulations to all involved.
Out of curiosity, does GD have any data on whether members of other religions donate a portion of their tithe/tzedakah/etc. to GD?
This is fantastic news!
As an experimental economist, I hope this has spillovers to our field (as well as others).
At the feedback level (referee reports, presentations, etc.), I believe there is significantly more value to be gained from discussing the experimental design itself before any data is collected.
Congrats to Hauke, Chris, and all others involved.
Thanks David, that would be great! I'll check to see if there is a way to run it in Stata, but if not I can just run it in R.
In Experiment 1, conditional on donating, participants actually donated significantly less in the Moral Demandingness condition (though this didn't replicate in Experiment 2).
Can you DM me about the model? I am happy to run that analysis. We ran mean equivalence tests to provide evidence on the bounds of the null result, but I believe what you are suggesting is quite different.
Thanks Scott, that's a really good point.
One of the variables we thought about manipulating was "who is the demand coming from?" The language used ("I", "we", and other expressions) could easily make a difference (social norms are usually presented in terms of "X% of people believe").
Unfortunately, we didn't have the budget to test how much of a difference (if any) this made. It would definitely be worth following up on if we were able to get the funding.
Thanks Ariel. That's a great question.
We checked a number of different correlations across both studies, including altruistic type, how utilitarian participants are, guilt, how manipulated they felt, agreeableness, and a number of demographic characteristics including religion.
We didn't find anything in our regression analysis that stood out (a sketch of the general form is below). However, we reported everything in the appendix, which can be accessed in the paper. Alternatively, I can send it to you.
I guess another question is: who is the obligation coming from? In our experiments it wa...
In that case, a better title would probably be something like "Tell people why they should donate, not that they are morally obligated to."*
I had a strong prior that telling people they were morally obligated to donate would not have a positive effect and would, if anything, backfire. So I have actually updated a bit in the other direction regarding the backfire effect.
However, given we have evidence that moral demandingness didn't produce any positive outcomes, I would currently tell people not to use it and instead stick to moral arguments ...
That's very fair! I'm not familiar with the norms for EA Forum post titles. What do you think a better title would be?
Thanks for sharing your proposal Michael. The institute looks great. Finding ways to incentivise replication is something I consider to be really important.
A couple of questions. I am curious what probability you would place on the Institute significantly increasing the acceptance of replications in top journals. More abstractly, I wonder if a dedicated institute could help change social norms in academia around replication. Do you have any thoughts about this?
Lastly, did you receive any feedback from FTX?
Interesting. I wonder if the mechanism is similar to making a donation when there is matching. As in, people think they are giving more money to the cause because their donation is 'doubled'. By providing matching funds, they might believe they are going to bring more money in. Alternatively, if they see GM as a public good itself (and like this idea), they have some preference to fund it for its own sake. (A toy calculation of the 'doubling' intuition is sketched below.)
Would love to know more about this!
Congratulations Lucius, these are pretty amazing results. I am quite surprised that on the extensive margin 38% of people contributed to the matching system. What were your priors about contributions and did this also surprise you?
I had the same intuition as RhysSouthan that most people who acquire the second vote in a Demeny voting structure would use the two votes for the same party/candidate/policy. I think an important facet here is that the salience of the vote being for the 'future generation' may nudge people on the margin to use both votes for the policy/party that best benefits future generations, whereas without the second vote they may not have voted this way. The Kochi University of Technology Research Institute of Future Design has some papers t...
I think this is a great idea. I agree that it is much easier to shift giving within a cause area than between cause areas.
I do wonder if there are ways to build in cross-cause giving using this platform. For example, I am curious whether the giving multiplier mechanism would be an effective way to achieve both 1) increased effectiveness within climate change donations and 2) shifting some donations toward other cause areas. I would be hesitant to include this straight away, but if the EEA EF gains momentum it is something to consider.