Neel Nanda

I'm a recent graduate, interested in finance and AI. I blog about rationality, motivation, social skills and life optimisation at neelnanda.io


Comments

(Video) How to be a less crappy person

Lastly, I may be alone here, but I am concerned about the EA community becoming a little too quickly bound to norms and rules. I am afraid we could quickly become a dogmatic and siloed group. I would argue the approach in the video above is unique/diverse within the community, and that there is strong value in that.

I agree with the principle of being pro-diversity and anti-dogma in general, but I disagree when it comes to public communications. If someone communicates badly about EA, that harms the movement, can negatively shift perceptions, and makes things harder for everyone else doing communications work. Eg, 80K over-emphasising earning to give early on.

I think that divisive and argumentative approaches like this one can, as Harrison says, put a lot of people off and give them a more negative image of EA, and I think this can be harmful to the movement. This doesn't mean that public communication needs to be homogeneous, but I do think it's valuable to push back on public communication that we think may be harmful.

Is effective altruism growing? An update on the stock of funding vs. people

Thanks a lot for the thorough post! I found it really helpful that you put rough numbers on everything and made things concrete, and I feel like I now have clearer intuitions about these questions.

My understanding is that these considerations only apply to longtermists, and that for people who prioritise global health and well-being or animal welfare this is all much less clear. Would you agree with that? My read is that those cause areas have much more high-quality work by non-EAs, and more high-quality, shovel-ready interventions.

I think that nuance can often get lost in discussions like this, and I imagine a good chunk of 80K's readers are not longtermists, so if this only applies to longtermists I think it would be good to make that clear in a prominent place.

And do you have any idea how the numbers for total funding break down across the different cause areas? That seems important for reasoning about this.

A Twitter bot that tweets high impact jobs

The best way for this is to create an issue on GitHub.

FYI, this link is broken.

Apply to the new Open Philanthropy Technology Policy Fellowship!

This seems like a great initiative; I'm excited to see where it goes!

Do people need to be US citizens (or green card holders, etc.) to apply for this?

What would you ask a policymaker about existential risks?

Have you spoken at all with the Centre for Long-term Resilience? They work with UK policymakers on issues related to catastrophic and existential risk, and I imagine they would be pretty interested in this project.

Inspiring others to do good

Interesting idea! I'm curious to see where this goes. I'm unsure whether I expect most people to perceive this as pretentious, or as admirable/norm-setting.

One thing that would significantly put me off using this as it stands is that I can only choose from 3 cause areas (none of which are the ones I most highly prioritise), and can't choose specific charities within each cause area. But if this website isn't aimed at longtermists or highly engaged EAs, maybe this is fine! I believe One for the World does something similar.

What should CEEALAR be called?

The other primary advantage is that the name is quite self-explanatory.

When I hear the name, I picture a hotel chain trying to provide excellent and efficient service. It doesn't feel like it gets to the heart of the EA Hotel for me.

What effectively altruistic inducement prize contest would you like to be funded?

Why is "iterated embryo selection" desirable on EA grounds?

I can see the argument that this lets us improve human intelligence, which eg leads to more technological progress. But it seems unclear whether this is good from an x-risk perspective. And I can see many ways that better control over human genetics could lead to super bad outcomes, eg stable dictatorships.

What are the 'PlayPumps' of cause prioritisation?

This seems like an awesome project!

I'm curious why you're emphasising 'it needs to be obvious, after some thought, that this cause is not worth pursuing at all' as a criterion here. To me, it doesn't really feel like cause prioritisation to first check whether your cause is even helpful. I feel that the harder but more important insight is that 'even if your cause is GOOD, some other causes can be better. Resources are scarce, and so you should focus on the causes that are MORE good'.

To me, one of the core ideas of EA is trying to maximise the good you do, not just settling for good enough. And that's something I'd want to come across in an introductory work. Though it's much harder to make this intuitive, obviously!
