How much overlap is there between this book & Singer's forthcoming What We Owe The Past?
I got 13/13.
q11 (endangered species) was basically a guess. I thought that an extreme answer was more likely given how the quiz was set up to be counterintuitive/surprising. Also relevant: my sense is that we've done pretty well at protecting charismatic megafauna; the fact that I've heard about a particular species being at risk doesn't provide much information either way about whether things have gotten worse for it (my hearing about it is related both to things being bad for it and to successful efforts to protect it).
On q6 (age distributi... (read more)
For example: If there are diminishing returns to campaign spending, then taking equal amounts of money away from both campaigns would help the side which has more money.
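As a toy illustration of that point (my own numbers and functional form, nothing from the post), suppose each campaign's effectiveness scales with the log of its spending, so returns diminish:

```python
import math

def effectiveness(spending):
    # Toy model with diminishing returns: each extra dollar buys less.
    return math.log(spending)

rich, poor = 10_000_000, 1_000_000
cut = 500_000  # take the same amount away from both campaigns

gap_before = effectiveness(rich) - effectiveness(poor)
gap_after = effectiveness(rich - cut) - effectiveness(poor - cut)
print(gap_before, gap_after)  # the gap in favor of the richer campaign widens
```

The same cut costs the poorer campaign more effectiveness than it costs the richer one, so the richer side comes out ahead.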
If humanity goes extinct this century, that drastically reduces the likelihood that there are humans in our solar system 1000 years from now. So at least in some cases, looking at the effects 1000+ years in the future is pretty straightforward (conditional on the effects over the coming decades).
In order to act for the benefit of the far future (1000+ years away), you don't need to be able to track the far future effects of every possible action. You just need to find at least one course of action whose far future effects are sufficiently predictable to guide you (and good in expectation).
The initial post by Eliezer on security mindset explicitly cites Bruce Schneier as the source of the term, and quotes extensively from this piece by Schneier.
In most of his piece, by “aiming to be mediocre”, Schwitzgebel means that people’s behavior regresses to the actual moral middle of a reference class, even though they believe the moral middle is even lower.
This skirts close to a tautology. People's average moral behavior equals people's average moral behavior. The output that people's moral processes actually produce is the observed distribution of moral behavior.
The "aiming" part of Schwitzgebel's hypothesis that people aim for moral mediocrity is what gives it empirical content. That empirical content gets harder to pick out when "aim" is interpreted in the objective sense.
Unless a study is done with participants who are selected heavily for numeracy and fluency in probabilities, I would not interpret stated probabilities literally as a numerical representation of their beliefs, especially near the extremes of the scale. People are giving an answer that vaguely feels like it matches the degree of unlikeliness that they feel, but they don't have that clear a sense of what (e.g.) a probability of 1/100 means. That's why studies can get such drastically different answers depending on the response format, and why (I predict) eff... (read more)
That can be tested on these data, just by looking at the first of the 3 questions that each participant got, since the post says that "Participants were asked about the likelihood of humans going extinct in 50, 100, and 500 years (presented in a random order)."
I expect that there was a fair amount of scope insensitivity: e.g., that people who got the "probability of extinction within 50 years" question first gave larger answers to the other questions than people who got the "probability of extinction within 500 years" question first.
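If the published data include which question each participant saw first, checking this would be straightforward. A minimal sketch (the file and column names here are hypothetical, not the survey's actual field names):

```python
import pandas as pd

# Hypothetical columns: 'first_question' is the horizon asked first (50, 100,
# or 500 years); 'p_500' is the stated probability of extinction within 500 years.
df = pd.read_csv("extinction_survey.csv")

# Compare 500-year answers across participants grouped by which question came first.
print(df.groupby("first_question")["p_500"].describe())
```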
I agree that asking about 2016 donations in early 2017 is an improvement for this. If future surveys are just going to ask about one year of donations then that's pretty much all you can do with the timing of the survey.
In the meantime, it is pretty easy to filter the data accordingly -- if you look only at donations made by EAs who stated that they joined in 2014 or earlier, the median donation is $1280.20 for 2015 and $1500 for 2016.
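For what it's worth, that filter is only a few lines once you have the raw file. A sketch (the column names are placeholders, not the survey's actual field names):

```python
import pandas as pd

df = pd.read_csv("ea_survey_2017.csv")

# Keep only respondents who said they first got involved in EA in 2014 or earlier.
veterans = df[df["year_joined_ea"] <= 2014]
print(veterans["donations_2015"].median(), veterans["donations_2016"].median())
```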
This seems like a better way to do the analyses. I think that the post would be more informative & easier to inter... (read more)
It is also worth noting that the survey was asking people who identify as EA in 2017 how much they donated in 2015 and 2016. These people weren't necessarily EAs in 2015 or 2016.
Looking at the raw data of when respondents said that they first became involved in EA, I'm getting that:
7% became EAs in 2017
28% became EAs in 2016
24% became EAs in 2015
41% became EAs in 2014 or earlier
(assuming that everyone who took the "Donations Only" survey became an EA before 2015, and leaving out everyone else who didn't answer the question about when they bec... (read more)
This year, a “Donations Only” version of the survey was created for respondents who had filled out the survey in prior years. This version was shorter and could be linked to responses from prior years if the respondent provided the same email address each year.
Are these data from prior surveys included in the raw data file, for people who did the Donations Only version this year? At the bottom of the raw data file I see a bunch of entries which appear not to have any data besides income & donations - my guess is that those are either all the people who took the Donations Only version, or maybe just the ones who didn't provide an email address that could link their responses.
https://delib.zendesk.com/hc/en-us/articles/205061169-Creating-footnotes-HTML-anchors
It might be possible to fix in a not-too-tedious way, by using find-replace in the source code to edit all of the broken links (and anchors?) at once.
It appears that this analysis did not account for when people became EAs. It looked at donations in 2014, among people who in November 2015 were nonstudent EAs on an earning-to-give path. But less than half of those people were nonstudent EAs on an earning-to-give path at the start of 2014.
In fact, less than half of the people who took the Nov 2015 survey were EAs at the start of 2014. I've taken a look at the dataset, and among the 1171 EAs who answered the question about 2014 donations:
40% first got involved in EA in 2013 or earlier
21% first got involved... (read more)
If the prospective employee is an EA, then they are presumably already paying lots of attention to the question "How much good would I do in this job, compared with the amount of good I would do if I did something else instead?" And the prospective employee has better information than the employer about what that alternative would be and how much good it would do. So it's not clear how much is added by having the employer also consider this.
Thanks for looking this up quickly, and good point about the selection effect due to attrition.
I do think that it would be informative to see the numbers when also limited to nonstudents (or to people above a certain income, or to people above a certain age). I wouldn't expect to see much donated from young low- (or no-) income students.
For the analysis of donations, which asked about donations in 2014, I'd like to see the numbers for people who became EAs in 2013 or earlier (including the breakdowns for non-students and for donations as % of income for those with income of $10,000 or more).
37% of respondents first got involved with EA in 2015, so their 2014 donations do not tell us much about the donation behavior of EAs. Another 24% first got involved with EA in 2014, and it's unclear how much their 2014 donations tell us given that they only began to be involved in EA midyear.
My guess (which, like Michael's, is based on speculation and not on actual information from relevant decision-makers) is that the founders of Open Phil thought about institutional philosophy before they looked in-depth at particular cause areas. They asked themselves questions like:
How can we create a Cause Agnostic Foundation, dedicated to directing money wherever it will do the most good, without having it collapse into a Foundation For Cause X as soon as its investigations conclude that the highest-EV projects are currently in cause area X?
Do we want to... (read more)
I can't tell what's being done in that calculation.
I'm getting a p-value of 0.108 from a Pearson chi-square test (with cell values 55, 809; 78, 856). A chi-square test and a two-tailed t-test should give very similar results with these data, so I agree with Michael that it looks like your p=0.053 comes from a one-tailed test.
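For reference, that number can be reproduced like this (a sketch using scipy; note that the Yates continuity correction needs to be turned off to get the plain Pearson statistic):

```python
from scipy.stats import chi2_contingency

# 2x2 table from the comment: 55, 809; 78, 856
table = [[55, 809], [78, 856]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(chi2, p)  # roughly chi2 = 2.58, p = 0.108
```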
A quick search into the academic research on this topic roughly matches the claims in this post.
Meta-analyses by Allen (1991) (pdf, blog post summary) and O'Keefe (1999) (pdf, blog post summary) defined "refutational two-sided arguments" as arguments that include 1) arguments in favor of the preferred conclusion, 2) arguments against the preferred conclusion, and 3) arguments which attempt to refute the arguments against the preferred conclusion. Both meta-analyses found that refutational two-sided arguments were more persuasive than one-sided ar... (read more)
Have you looked at the history of your 4 metrics (Visitors, Subscribers, Donors, Pledgers) to see how much noise there is in the baseline rates? The noisier they are, the more uncertainty you'll have in the effect size of your intervention.
Could you have the pamphlets give a URL that appears nowhere else, and then directly track how many new subscribers/donors/pledgers have been to that URL?
Pardon my negativity, but I get the impression that you haven't thought through your impact model very carefully.
In particular, the structure where
is selecting for mediocrity.
Given fat tails, I expect more impact to come from the single highest impact week than from 36 weeks of not-last-place impact.
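A toy simulation of what I mean (the distribution and parameters are made up purely for illustration; the real distribution of per-week impact is unknown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each week's impact is an independent draw from a heavy-tailed
# (Pareto) distribution, over many simulated 36-week seasons.
n_seasons = 10_000
weeks = rng.pareto(1.1, size=(n_seasons, 36)) + 1

top_week = weeks.max(axis=1)
rest = weeks.sum(axis=1) - top_week

# Fraction of simulated seasons in which the single best week outweighs
# the other 35 weeks combined.
print((top_week > rest).mean())
```

The heavier the tail, the more the single best week dominates the season's total.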
Perhaps for the season finale you could bring back the contestant who had the highest imp... (read more)