
Neel Nanda

4417 karma · neelnanda.io

Bio

I lead the DeepMind mechanistic interpretability team

Comments (339)

Personally, I find the idea somewhat odd/uncomfortable, but I also vaguely buy the impact case, so I've only added it on LinkedIn, as that's the social network where the norm is shameless signalling and where I tie it least to my identity - I may as well virtue signal rather than just brag!

This seems like a question of what the policy is, not of judgement about how to apply it, in my opinion.

The three examples you gave are obviously in the category of "controversial community drama that will draw a lot of attention and strong feelings", and I trust the mods' ability to notice this. The question is whether the default policy is to make such things personal blog posts. I personally think this would be a good policy, as anything in this category is difficult to discuss rationally. I do also consider the community section a weaker form of low visibility, so there's something here already, but I would advocate for a stronger policy.

Another category is "anything about partisan US politics", which I don't think is that hard to identify, is clearly hard to discuss rationally, and which, in my opinion, it is reasonable to make less visible by policy.

I don't trust karma as a mechanism, because if a post is something that people have strong feelings about, and many of those feelings are positive (or at least righteous-anger-style feelings), then it often gets high karma. E.g. I think the Nonlinear posts got a ton of attention, were in my opinion quite unproductive and distracting, and got very high karma; if they had been less visible, I think that would have been good.

I agree that this is inconsistent (it looks like Ben's Nonlinear post is front page). But my conclusion is that community drama should also be made less visible except to those who opt in, not vice versa. The separate section for community posts was a decent start.

I personally set them to the same visibility as normal posts, so this doesn't matter to me. But I don't know the stats for how many forum users do so. If basically all forum users have them invisible, then I would consider this stronger censorship.

It sounds like you agree it's difficult, you just think EA Forum participants will successfully rise to the challenge?

Which, idk, maybe, maybe not - it seems high variance, and I'm less optimistic than you. And making things personal blog posts makes them less visible to new forum users (hidden by default, I think?) but not to more familiar users who opt in to seeing personal blog posts, which seems great for higher quality conversations. So yeah, idk, ultimately the level of filtering here is very mild and I would guess net good.

I think "difficult to discuss rationally" and "unable to discuss rationally" are two completely different things that it's important not to conflate. It just seems very obviously true that posts on US politics are more likely to lead to drama, fighting, etc. There are definitely EAs who are capable of having productive and civil conversations about politics - I've enjoyed several such conversations, and find EAs much better for this than most groups - but public online forums are a hard medium for such discussions. And I think the moderating team have correctly labelled any such posts as difficult to discuss rationally. Whether you agree with making them less visible is up to you; I personally think it's fairly reasonable.

In my opinion that post was bizarrely low quality and off base, and not worth engaging with: EA beliefs do not necessarily imply that the market will drop (I personally think a lot of the risk comes from worlds where AI is wildly profitable and drives a scary competitive race, but where companies are making a LOT of money); lots of EAs have finance backgrounds or are successful hedge fund workers earning to give, so his claim that no EAs understand finance is spurious; this definitely IS something some EAs have spent a while thinking about; even if we did have a thesis, converting it to a profitable trade is hard and has many footguns; and some of us have better things to do with our time than optimising our portfolio.

For what it's worth, my guess is that the best trade is going long volatility, since my beliefs about AI do imply that the world is going to get weird fairly quickly even if I don't know exactly how.

Sure, I agree that under the (in my opinion ridiculous and unserious) accounting method of looking at the last actor, zero is a valid conclusion.

I disagree that "small" is accurate - even if I'm being incredibly charitable and say that the donor deserves only 1% of the credit for the overall ecosystem saving each life, we still get to 2,000 lives saved (1% of the ~200,000 estimate below), which seems highly unreasonable to call small. To me, small is at best <100.

What does Reclaim give you? I've never heard of it, and the website is fairly uninformative.

"It looks like the total number of lives saved by all Singer- and EA-inspired donors over the past 50 years may be small, or even zero"

This conclusion from the first half of the letter seems unjustified by the prior text?

You seem to be arguing that there's a credit allocation problem: many actors contribute to a bednet saving a life, but GiveWell-style calculations ignore this and give all the credit to the donor, which leads to overcounting. I would describe this as GiveWell computing the marginal impact, which I think is somewhat reasonable (how is the world different if I donate vs don't donate?), but I agree this has issues and there are arguments for better credit allocation methods. I think this is a fair critique.

But, I feel like at best this dilutes the impact by a factor of 10, maybe a factor of 100 at an absolute stretch. If we take rough estimates like 200,000 lives saved via GiveWell (a rough estimate justified in footnote 1 of this post), that's still 20,000 or 2,000 lives saved. I don't see how you could get from this argument to "small or even zero".
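
To make the dilution arithmetic explicit, here's a minimal sketch; the ~200,000 figure is the rough GiveWell estimate above, and the credit shares are illustrative assumptions, not measured values:

```python
# Minimal sketch of the credit-dilution arithmetic above.
# Assumption: ~200,000 lives saved is the rough GiveWell-attributed total;
# the credit shares (100%, 10%, 1%) are illustrative dilution factors.

TOTAL_LIVES_SAVED = 200_000

for credit_share in (1.0, 0.1, 0.01):
    attributed = TOTAL_LIVES_SAVED * credit_share
    print(f"donor credit {credit_share:.0%}: ~{attributed:,.0f} lives")
```

Even the most aggressive dilution here (1% credit) still attributes ~2,000 lives to donors, which is the core of the disagreement with "small or even zero".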
