Wiki Contributions


Lessons from Running Stanford EA and SERI

Let me note that on top of all your concrete accomplishments, you're just a very sweet and caring person, which has got to help a lot in building this vibrant community. I'm happy to know you!

Towards a Weaker Longtermism

There is nothing special about longtermism compared to any other big desideratum in this regard.


I'm not sure this is the case. E.g. Steven Pinker in Better Angels makes the case that utopian movements systematically tend to commit atrocities because the all-important end goal justifies anything in the medium term. I haven't rigorously examined this argument and think it would be valuable for someone to do so, but much of longtermism in the EA community, especially of the strong variety, is premised on something like utopia.

One reason why you might intuitively think there would be a relationship is that shorter-term impacts are typically somewhat more bounded; e.g. if thousands of American schoolchildren are getting suboptimal lunches, this obviously doesn't justify torturing hundreds of thousands of people. With the strong longtermist claims it's much less clear that there's any sort of upper bound, so to draw a firm line against atrocities you end up looking to somewhat more convoluted reasoning (e.g. some notion of deontological restraint that isn't completely absolute but yet can withstand astronomical consequences, or a sketchy and loose notion that atrocities have an instrumental downside). 

What are the long term consequences of poverty alleviation?

I think the persistence studies stuff is the best bet. One thing to note there is that the literature is sort of a set of existence proofs. It shows that there are various things that have long-term impacts, but it might not give you a strong sense of the average long-term impact of poverty alleviation.

Welfare Footprint Project - a blueprint for quantifying animal pain

This is really impressive work. I've been looking for something like this to cite for economics work on animal welfare, and this seems well-suited for that.

"Existential risk from AI" survey results

I just wanted to give major kudos for evaluating a prediction you made and very publicly sharing the results even though they were not fully in line with it.

Is there evidence that recommender systems are changing users' preferences?

Thanks. I'm aware of this sort of argument, though I think most of what's out there relies on anecdotes, and it's unclear exactly what the effect is (since there is likely some level of confounding here).

I guess there are still two things holding me up here. (1) It's not clear that the media is changing preferences or just offering [mis/dis]information. (2) I'm not sure it's a small leap. News channels' effects on preferences likely involve prolonged exposure, not a one-time sitting. For an algorithm to expose someone in a prolonged way, it has to either repeatedly recommend videos or recommend one video that leads to their watching many, many videos. The latter strikes me as unlikely; again, behavior is malleable but not that malleable. In the former case, I would think the direct effect on the reward function of all of those individual videos recommended and clicked on has to be way larger than the effect on the person's behavior after seeing the videos. If my reasoning were wrong, I would find that quite scary, because it would be evidence of substantially greater vulnerability to current algorithms than I previously thought.

Is there evidence that recommender systems are changing users' preferences?

Right. I mean, I privilege this simpler explanation you mention. He seems to have reason to think it's not the right explanation, but I can't figure out why.

Is there evidence that recommender systems are changing users' preferences?

BTW, I am interested in studying this question if anyone is interested in partnering up. I'm not entirely sure how to study it, as (given the post) I suspect the result may be a null, which is only interesting if we have access to one of the algorithms he is talking about and data on the scale such an algorithm would typically have.

My general approach would be an online experiment where I expose one group of people to a recommender system and don't expose another. Then place both groups in the same environment and observe whether the first group is now more predictable. (This does not account for the issue of information, though.)
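For concreteness, here is a toy sketch of the analysis step (simulated data only; the group setup and the "majority-choice accuracy" metric are assumptions I'm making to operationalize "more predictable", not a settled design):

```python
# Toy sketch: compare how "predictable" two groups' choices are after
# the exposure phase, proxying predictability with the accuracy of a
# simple majority-choice predictor fit per group. All data below are
# simulated; real designs would fit a per-user model on held-out data.
import random
from collections import Counter

def majority_accuracy(choices):
    """Accuracy of always guessing the group's most common choice."""
    most_common_count = Counter(choices).most_common(1)[0][1]
    return most_common_count / len(choices)

random.seed(0)
ITEMS = ["A", "B", "C", "D"]

# Control group: choices spread roughly uniformly (less predictable).
control = [random.choice(ITEMS) for _ in range(1000)]

# Treatment group: choices concentrated on one item, standing in for
# the hypothesis that recommender exposure narrows behavior.
treatment = [random.choices(ITEMS, weights=[7, 1, 1, 1])[0]
             for _ in range(1000)]

print(f"control predictability:   {majority_accuracy(control):.2f}")
print(f"treatment predictability: {majority_accuracy(treatment):.2f}")
```

A real version would need per-user predictions (not pooled group frequencies) and a pre-registered metric, but the comparison logic would look like this.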

I scraped all public "Effective Altruists" Goodreads reading lists

It seems that dystopian novels are overrepresented relative to their share of the classics. I'm curious about others' thoughts on why that is. I could imagine a case that they're more action-relevant than, e.g., Pride and Prejudice, but I also wonder if they might shape our expectations of the future in bad ways. (I say this as someone currently rereading 1984, which I adore...)

Two Nice Experiments on Democracy and Altruism

Links should be fixed! Thanks for pointing this out.
