The State of the World — and Why Monkeys are Smarter than You

I got 13/13.

q11 (endangered species) was basically a guess. I thought that an extreme answer was more likely given how the quiz was set up to be counterintuitive/surprising. Also relevant: my sense is that we've done pretty well at protecting charismatic megafauna; the fact that I've heard about a particular species being at risk doesn't provide much information either way about whether things have gotten worse for it (me hearing about it is related to things being bad for it, and it's also related to successful efforts to protect it).

On q6 (age distribution of population increase) I figured that most people are age 15-74 and that group would increase roughly proportionally with the overall increase, which gives them the majority of the increase. The increase among the elderly will be disproportionately large, but that's not enough for it to be the biggest in absolute terms since they're only like 10% of the population.
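That back-of-envelope reasoning can be made concrete with toy numbers (the figures below are my own illustration, not from the quiz):

```python
# Toy numbers: out of 100 people, say 75 are aged 15-74 and 10 are elderly,
# and the population grows by 40 people overall.
total, working_age, elderly = 100, 75, 10
overall_increase = 40

# If the 15-74 group grows roughly in proportion to the whole population,
# it captures about 75% of the increase.
working_age_increase = overall_increase * working_age / total   # 30.0

# Even if the elderly group doubles (a hugely disproportionate increase),
# its absolute gain is smaller, because the group starts out small.
elderly_increase = elderly * 1.0                                # 10.0

print(working_age_increase > elderly_increase)  # True
```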

On q7 (deaths from natural disaster) I wouldn't have been surprised if the drop in death rate was balanced out by the increase in population, but I had an inkling that it was faster. And the tenor of the quiz was that the surprisingly good answer was correct, so if population growth had balanced it out then probably it would've asked about deaths per capita rather than total deaths.

Getting money out of politics and into charity

For example: If there are diminishing returns to campaign spending, then taking equal amounts of money away from both campaigns would help the side which has more money.
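To see why, model each campaign's influence with a concave function; the square root here is my own illustrative choice, not anything from the post. Removing the same sum from both campaigns widens the influence gap in favor of the better-funded side, because the poorer campaign loses spending where marginal returns are higher:

```python
import math

def influence(spending):
    # Toy model of diminishing returns: influence grows like sqrt(spending).
    return math.sqrt(spending)

rich, poor = 100.0, 25.0
gap_before = influence(rich) - influence(poor)   # 10.0 - 5.0 = 5.0

removed = 16.0  # take the same amount from both campaigns
gap_after = influence(rich - removed) - influence(poor - removed)
# sqrt(84) - sqrt(9) is roughly 9.17 - 3.0, so the gap grows to about 6.17

print(gap_after > gap_before)  # True: equal removals favor the richer side
```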

Michael_Wiebe's Shortform

If humanity goes extinct this century, that drastically reduces the likelihood that there are humans in our solar system 1000 years from now. So at least in some cases, looking at the effects 1000+ years in the future is pretty straightforward (conditional on the effects over the coming decades).

In order to act for the benefit of the far future (1000+ years away), you don't need to be able to track the far future effects of every possible action. You just need to find at least one course of action whose far future effects are sufficiently predictable to guide you (and good in expectation).

The Web of Prevention

The initial post by Eliezer on security mindset explicitly cites Bruce Schneier as the source of the term, and quotes extensively from this piece by Schneier.

[Link] Aiming for Moral Mediocrity | Eric Schwitzgebel
In most of his piece, by “aiming to be mediocre”, Schwitzgebel means that people’s behavior regresses to the actual moral middle of a reference class, even though they believe the moral middle is even lower.

This skirts close to a tautology. People's average moral behavior equals people's average moral behavior. The output that people's moral processes actually produce is the observed distribution of moral behavior.

The empirical content of Schwitzgebel's hypothesis that people aim for moral mediocrity comes from the "aiming" part. That content gets harder to pick out when "aim" is interpreted in the objective sense (in terms of where behavior actually ends up, rather than what people intend).

Public Opinion about Existential Risk

Unless a study is done with participants who are selected heavily for numeracy and fluency in probabilities, I would not interpret stated probabilities literally as numerical representations of participants' beliefs, especially near the extremes of the scale. People are giving an answer that vaguely feels like it matches the degree of unlikeliness that they feel, but they don't have a clear sense of what (e.g.) a probability of 1/100 means. That's why studies can get such drastically different answers depending on the response format, and why (I predict) effects like scope insensitivity are likely to show up.

I wouldn't expect the confidence question to pick up on this. Suppose, for example, that experts think something has a one-in-a-million chance, and a person basically agrees with the experts' viewpoint but hasn't heard or remembered that number. So they indicate "that's very unlikely" by entering "1%", which feels like it's basically the bottom of the scale. Then on the confidence question they say that they're very confident of that answer, because they feel sure that it's very unlikely.

Public Opinion about Existential Risk

That can be tested on these data, just by looking at the first of the 3 questions that each participant got, since the post says that "Participants were asked about the likelihood of humans going extinct in 50, 100, and 500 years (presented in a random order)."

I expect that there was a fair amount of scope insensitivity: for example, that people who got the "probability of extinction within 50 years" question first gave larger answers to the other questions than people who got the "probability of extinction within 500 years" question first.
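With the raw data, that between-subjects comparison could be sketched roughly as follows. The field names and values here are hypothetical placeholders, not the survey's actual format:

```python
from statistics import mean

# Hypothetical rows: which horizon each respondent was asked about first,
# and their stated probability of extinction within 500 years.
responses = [
    {"first_question": "50yr", "p_500yr": 0.05},
    {"first_question": "500yr", "p_500yr": 0.02},
    {"first_question": "50yr", "p_500yr": 0.10},
    {"first_question": "500yr", "p_500yr": 0.01},
]

# Since question order was randomized, the 500-year estimates shouldn't depend
# on which question came first; a large gap between these group means would
# suggest anchoring on the first answer (i.e., scope insensitivity).
for order in ("50yr", "500yr"):
    group = [r["p_500yr"] for r in responses if r["first_question"] == order]
    print(order, mean(group))
```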

EA Survey 2017 Series: Donation Data

I agree that asking about 2016 donations in early 2017 is an improvement for this. If future surveys are just going to ask about one year of donations then that's pretty much all you can do with the timing of the survey.

In the meantime, it is pretty easy to filter the data accordingly -- if you look only at donations made by EAs who stated that they joined in 2014 or earlier, the median donation is $1280.20 for 2015 and $1500 for 2016.

This seems like a better way to do the analyses. I think that the post would be more informative & easier to interpret if all of the analyses used this kind of filter. (For 2016 donations you could also include people who became involved in EA in 2015.)

For example, someone who hears a number for the median non-student donation in 2016 will by default assume that it refers to people who were non-student EAs throughout 2016. If possible, it's better to give the number that matches the scenario they're imagining than to give a caveat about how 35% of the people weren't EAs yet at the start of 2016. When people hear a non-intuitive analysis with a caveat, they're fairly likely either to a) forget the caveat and mistakenly think the number refers to what they initially assumed it meant, or b) not know what to make of the caveated analysis and therefore not learn anything.
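The filter described above amounts to something like the following sketch (the record structure and field names are my own invention, not the survey's actual format):

```python
from statistics import median

# Hypothetical respondent records: the year they said they became involved
# in EA, and their reported 2016 donation.
respondents = [
    {"year_joined": 2013, "donation_2016": 1500.0},
    {"year_joined": 2014, "donation_2016": 2000.0},
    {"year_joined": 2014, "donation_2016": 1000.0},
    {"year_joined": 2015, "donation_2016": 300.0},
    {"year_joined": 2016, "donation_2016": 50.0},
]

# Keep only people who were already EAs before the donation year; for 2016
# donations, that means everyone who joined in 2015 or earlier.
already_ea = [r for r in respondents if r["year_joined"] <= 2015]
print(median(r["donation_2016"] for r in already_ea))  # 1250.0
```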

EA Survey 2017 Series: Donation Data

It is also worth noting that the survey was asking people who identify as EA in 2017 how much they donated in 2015 and 2016. These people weren't necessarily EAs in 2015 or 2016.

Looking at the raw data of when respondents said that they first became involved in EA, I'm getting that:

7% became EAs in 2017
28% became EAs in 2016
24% became EAs in 2015
41% became EAs in 2014 or earlier

(assuming that everyone who took the "Donations Only" survey became an EA before 2015, and leaving out everyone else who didn't answer the question about when they became an EA.)

So if we're looking at donations made in 2015, 35% of the people weren't EAs then and another 24% had only just become EAs that year. For 2016, 35% of the people weren't EAs yet at the start of the year and 7% weren't EAs at the end of the year.
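The arithmetic behind those figures, as a quick check (using the rounded percentages from the breakdown above):

```python
# Shares of 2017 survey respondents, by the year they said they became EAs.
joined_pct = {"2017": 7, "2016": 28, "2015": 24, "2014_or_earlier": 41}

# Weren't EAs at any point in 2015 (joined in 2016 or 2017):
not_yet_in_2015 = joined_pct["2016"] + joined_pct["2017"]   # 35
# Had only just become EAs during 2015:
new_in_2015 = joined_pct["2015"]                            # 24
# Weren't EAs even at the end of 2016 (joined in 2017):
not_yet_end_2016 = joined_pct["2017"]                       # 7

print(not_yet_in_2015, new_in_2015, not_yet_end_2016)  # 35 24 7
```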

(There were similar issues with the 2015 survey.)

These not-yet-EAs can have a large influence on the median, and to a lesser extent on the percentiles and the mean. They would also tend to create an upward trend in the longitudinal analysis (e.g., if many of the 184 individuals became EAs in 2015).

EA Survey 2017 Series: Distribution and Analysis Methodology

This year, a “Donations Only” version of the survey was created for respondents who had filled out the survey in prior years. This version was shorter and could be linked to responses from prior years if the respondent provided the same email address each year.

Are these data from prior surveys included in the raw data file, for people who did the Donations Only version this year? At the bottom of the raw data file I see a bunch of entries which appear not to have any data besides income & donations - my guess is that those are either all the people who took the Donations Only version, or maybe just the ones who didn't provide an email address that could link their responses.
