I am an academic economist and a 'Distinguished Researcher' at Rethink Priorities (https://www.rethinkpriorities.org/our-team)

My previous and ongoing research focuses on the determinants and motivators of charitable giving (propensity, amounts, and 'to which cause?'), drivers of and barriers to effective giving, and the impact of pro-social behavior and social preferences in market contexts.

I'm working to impact EA fundraising and marketing; see https://daaronr.github.io/ea_giving_barriers/index.html, innovationsinfundraising.org, and giveifyouwin.org.

Twitter: @givingtools


Possible misconceptions about (strong) longtermism

In response, you stated:

However, it is worth noting that it is possible that longtermists may end up reducing suffering today as a by-product of trying to improve the far future.

It might be worth re-stating this. Thinking about objective functions and constraints, one of the following must hold:

R1. SLT implies that resources should be devoted in a way that does less to reduce current suffering (i.e., implies more current suffering than absent SLT) or

R2. SLT does not change our objective function, or it coincidentally implies an allocation that has no differential effect on current suffering (a 'measure zero', i.e., coincidental result), or

R3. SLT implies that resources should be devoted in a way that leads to less current suffering

R3 seems unlikely to be the case, particularly if we imagine bounds on altruistic capacity. And if there were an approach that could use the same resources to reduce current suffering even more, it should already have been chosen in the absence of SLT.

If R2 is the case, then SLT is not important for our resource decision, so we can ignore it.

If R1 holds (which seems most likely to me), then following SLT does imply an increase in current suffering, and we are back to the main objection.

Possible misconceptions about (strong) longtermism

Possible misconception: “Greaves and MacAskill say we can ignore short-term effects. That means longtermists will never reduce current suffering. This seems repugnant.”

'This seems repugnant' doesn't seem like a justifiable objection to me, so not something an advocate of SLT should be obliged to take on directly.

If I said "this doctor's theory of liver deterioration suggests that I should reduce my alcohol intake, which seems repugnant to me", you would not feel compelled to respond that "actually, some of the things the doctor is advocating could allow you to drink more alcohol".

(I suspect that beyond the "this seems repugnant" there is a more coherent critique -- and that is the critique we should focus on.)

Should EA Buy Distribution Rights for Foundational Books?

Good arguments. I'd personally love it if we found a way to move to a different economic model for all information goods. But here in particular, free distribution seems important.

Possibly worth considering: motivating future authors with prize-based incentives, with prizes based on the number of downloads/reads/upvotes of their books. Of course, the authors may be credit-constrained, but perhaps others could finance them by buying shares in the potential future prizes?

EA Survey 2018 Series: Donation Data

Thanks Greg, I appreciate the feedback.

Some of this depends on what our goal is here. Is it to maximize 'prediction' and if so, why? Or is it something else? ... Maybe to identify particularly relevant associations in the population of interest.

For prediction, I agree it’s good to start with the largest set of features (variables) you can find (as long as they are truly ex-ante) and then do a fancy dance of cross-validation and regularisation, before you do your final ‘validation’ of the model on set-aside data.

But that doesn’t easily give you the ability to make strong inferential statements (causal or not) about things like ‘age is likely to be strongly associated with satisfaction measures in the true population’. Why not? If I understand correctly:

The model you end up with, which does a great job at predicting your outcome

  1. … may have dropped age entirely or “regularized it” in a way that does not yield an unbiased or consistent estimator of the actual impact of age on your outcome. Remember, the goal here was prediction, not making inferences about the effect of any particular variable or set of variables …

  2. … may include too many variables that are highly correlated with the age variable, thus making the age coefficient very imprecise

  3. … may include variables that are actually ‘part of the age effect’ you cared about, because they are things that go naturally with age, such as mental agility

  4. Finally, the standard ‘statistical inference’ (how you can quantify your uncertainty) does not work for these learning models (although there are new techniques being developed)
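A minimal sketch of points 1 and 2, with simulated (made-up) data: here the outcome depends only on age, but a highly correlated proxy (standing in for something like mental agility) is also in the feature set. A cross-validated lasso, tuned purely for prediction, will generally shrink the age coefficient relative to OLS and may shift part of the ‘age effect’ onto the proxy:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n = 200
age = rng.normal(size=n)
proxy = age + rng.normal(scale=0.3, size=n)   # highly correlated with age
y = 0.5 * age + rng.normal(size=n)            # true effect of age is 0.5; proxy has none

X = np.column_stack([age, proxy])
ols = LinearRegression().fit(X, y)            # unbiased for the age effect, but imprecise
lasso = LassoCV(cv=5).fit(X, y)               # penalty chosen for out-of-sample prediction

print("OLS coefficients (age, proxy):  ", np.round(ols.coef_, 3))
print("Lasso coefficients (age, proxy):", np.round(lasso.coef_, 3))
```

The L1 penalty guarantees the lasso coefficients have total magnitude no larger than the OLS ones, and which of two highly correlated variables gets shrunk toward zero is essentially arbitrary — which is why reading the age coefficient off a predictive model as an ‘effect’ is hazardous.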

A list of EA-related podcasts

Thanks for this. I've integrated this list as well as @pablo's and a couple I added ('Not Overthinking' and 'Great.com Talks With') into an Airtable

View only

or you can collaborate on this base HERE

LSE EA’s Fellowship Application Scores Moderately Predicted Engagement and Discussion Quality

I think we tend to confuse 'lack of strong statistical significance' with 'no predictive power'.

A small amount of evidence can substantially improve our decision-making...

... even if we cannot conclude that 'data with a correlation this large or larger would be very unlikely to be generated (p<0.05) if there were no correlation in the true population'.

  1. We, very reasonably, substantially update our beliefs and guide our decisions based on small amounts of data. See, e.g., the 'Bayes rule' chapter of Algorithms to Live By

  2. I believe that for optimization and decision-making problems we should take a different approach, both to design and to assessing results... relative to when we are trying to measure and test for scientific purposes.

This relates to 'reinforcement learning' and to 'exploration sampling'.

We need to make a decision in one direction or another, and we need to consider the costs and benefits of collecting and using these measures. I believe we should take a Bayesian approach, updating our belief distribution,

... and considering the value of the information generated (in industry, the 'lift', 'profit curve' etc) in terms of how it improves our decision-making.
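A toy sketch of that updating step (all counts hypothetical): put a weakly informative Beta(1, 1) prior on each group's engagement rate, update on a small sample, and read off the posterior probability that high scorers really do engage more — a quantity that feeds directly into a cost-benefit decision, with no significance threshold involved.

```python
import numpy as np

# Hypothetical data: engagement counts for high- vs low-scoring applicants
high_engaged, high_n = 7, 10
low_engaged, low_n = 3, 10

# Beta(1, 1) prior + binomial likelihood -> Beta posterior for each rate
post_high = (1 + high_engaged, 1 + high_n - high_engaged)
post_low = (1 + low_engaged, 1 + low_n - low_engaged)

# Posterior probability that high scorers engage more, by Monte Carlo
rng = np.random.default_rng(0)
draws_high = rng.beta(*post_high, size=100_000)
draws_low = rng.beta(*post_low, size=100_000)
p_high_better = (draws_high > draws_low).mean()
print(f"P(high scorers engage more | data) ~ {p_high_better:.2f}")
```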

Note: I am exploring these ideas and hoping to learn, share and communicate more. Maybe others in this forum have more expertise in 'reinforcement learning' etc.

Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement

Thanks for sharing the confidence intervals. I guess it might be reasonable to conclude from your experience that the interview scores have not been informative enough to justify their cost.

What I am saying is that it doesn't seem (to me) that the data and evidence presented allows you to say that. (But maybe other analysis or inference from your experience might in fact drive that conclusion, the 'other people in San Francisco' in your example.)

But glancing at just the evidence/confidence intervals, it suggests to me that there may be a substantial probability that there is in fact a strongly positive relationship and the results are a fluke.

On the other hand I might be wrong. I hope to get a chance to follow up on this:

  • We could simulate a case where the measure has 'the minimum correlation to the outcome to make it worth using for selecting on', and see how likely it would be, in such a case, to observe correlations as low as those you observed

  • Or we could start with a minimally informative 'prior' over our beliefs about the measure, and do a Bayesian updating exercise in light of your observations; we could then consider the posterior probability distribution and consider whether it might justify discontinuing the use of these scores
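A sketch of the first exercise, with made-up numbers: suppose the minimum correlation that would make the scores worth using is 0.3, the sample size is 25, and the observed sample correlation is 0.05. Simulating from a bivariate normal with true correlation 0.3 shows how often a result that low (or lower) would still occur:

```python
import numpy as np

rng = np.random.default_rng(0)
true_r, n, observed_r, sims = 0.3, 25, 0.05, 10_000  # all numbers hypothetical

# Draw `sims` datasets of size n from a bivariate normal with correlation true_r
samples = rng.multivariate_normal([0, 0], [[1, true_r], [true_r, 1]], size=(sims, n))
x, y = samples[..., 0], samples[..., 1]

# Sample correlation within each simulated dataset
xc = x - x.mean(axis=1, keepdims=True)
yc = y - y.mean(axis=1, keepdims=True)
r = (xc * yc).sum(axis=1) / np.sqrt((xc**2).sum(axis=1) * (yc**2).sum(axis=1))

p_low = (r <= observed_r).mean()
print(f"P(sample r <= {observed_r} | true r = {true_r}, n = {n}) ~ {p_low:.2f}")
```

If that probability is non-trivial, then observing a low sample correlation is only weak evidence against the scores being worth using.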

Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement

Interesting, but based on the small sample and limited range of scores (and I also agree with the points made by Moss and Rhys-Bernard) ...

I'm not sure whether you have enough data/statistical power to say anything substantially informative/conclusive. Even saying 'we have evidence that there is not a strong relation' may be too strong.

To help us understand this, can you report (frequentist) confidence intervals around your estimates? (Or, even better, a Bayesian approach involving a flat, minimally informative prior and a posterior distribution in light of the data?)

I'll try to say more on this later. A good reference is: Harms and Lakens (2018), “Making ‘null effects’ informative: statistical techniques and inferential frameworks”

Also, even 'insignificant' results may actually be rather informative for practical decision-making... if they reasonably cause us to substantially update our beliefs. We rationally make inferences and adjust our choices based on small amounts of data all the time, even if we can't say something like 'it is less than 1% likely that what I just saw would have been observed by chance'. Maybe 12% (p>0.05!) of the time the dark cloud I see in the sky will fade away, but seeing this cloud still makes me decide to carry an umbrella... as now the expected benefits outweigh the costs.
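The umbrella arithmetic, spelled out with made-up costs; the point is just that an 88% rain probability — 'non-significant' by the usual convention — can still dominate the decision:

```python
p_rain = 0.88        # the cloud fades away only 12% of the time
cost_carry = 1.0     # inconvenience of carrying the umbrella (paid either way)
cost_soaked = 10.0   # cost of being caught in the rain without it

expected_cost_carry = cost_carry
expected_cost_skip = p_rain * cost_soaked

print(f"carry: {expected_cost_carry}, skip: {expected_cost_skip:.1f}")  # carrying is cheaper
```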

EAG survey data analysis

Can you share the data and/or code?

Load More