Neel Nanda

I'm a recent graduate, interested in finance and AI. I blog about rationality, motivation, social skills and life optimisation at neelnanda.io

Comments

Cash Transfers as a Simple First Argument

I really like this example! I used it in an interview I gave about EA and thought it went down pretty well. My main concern with using it is that I don't personally fund direct cash transfers (or think they're anywhere near the highest-impact thing), so I both think it can misrepresent the movement, and think it's disingenuous to imply that EA is about robustly good things like this, when I actually care most about things like AI Safety.

As a result, I frame the example like this (if I can have a high-context conversation):

  • Effectiveness, and identifying the highest-impact interventions, is a cornerstone of EA. I think this is super important, because there's a really big spread in how much good different interventions do, much more than feels intuitive
  • Direct cash transfers are a proof of concept: There's good evidence that doubling your income increases your wellbeing by the same amount, no matter how wealthy you were to start with. We can roughly think of helping someone as just giving them money, and so increasing their income. The average person in the US has an income about 100x that of the world's poorest people, so with the resources you'd need to double the income of one average American, you could double the incomes of 100 of the world's poorest people! (See the sketch after this list.)
    • Contextualise, and emphasise just how weird 100x differences are - these don't come up in normal life. It'd be like you were considering buying a laptop for $1000, shopped around for a bit, and found one just as good for $10! (Pick an example involving a big expense the person faces, eg a laptop, car, rent, etc)
    • Emphasise that this is just a robust example as a proof of concept, and that in practice I think we can do way better - this just makes us confident that spread is out there, and worth looking for. Depending on the audience, maybe explain the idea of hits-based giving, and risk neutrality.
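The arithmetic here is simple enough to sketch. Below is a minimal illustration, assuming the standard logarithmic income-wellbeing model the bullet describes; the $60,000 and $600 figures are my own illustrative stand-ins, not numbers from any particular study:

```python
import math

def wellbeing_gain(income_before: float, income_after: float) -> float:
    """Wellbeing gain under the (assumed) logarithmic model:
    each doubling of income adds the same amount of wellbeing."""
    return math.log2(income_after / income_before)

budget = 60_000      # illustrative: roughly one average US annual income
us_income = 60_000   # illustrative average US income
poor_income = 600    # illustrative: ~100x lower, the world's poorest

# Option A: double one average American's income.
gain_us = wellbeing_gain(us_income, us_income + budget)

# Option B: double the income of 100 of the world's poorest people.
n_people = budget // poor_income  # = 100 people
gain_poor = n_people * wellbeing_gain(poor_income, poor_income * 2)

print(gain_us)    # 1.0   -> one doubling's worth of wellbeing
print(gain_poor)  # 100.0 -> a hundred doublings' worth: ~100x the impact
```

The point of the sketch is just that, under this model, the 100x income gap translates directly into a 100x impact gap for the same budget.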
Concerns with ACE's Recent Behavior

Thanks for sharing, that part updated me a lot away from Ben's view and towards Hypatia's view. 

An aspect I found particularly interesting was that Anima International seems to do a lot of work in Eastern European countries, which tend to be much more racially homogeneous, and which I presume have fairly different internal politics around race from the US. Notably, ACE's review emphasises concerns, not about their ability to do good work in those countries, but about their ability to participate in international spaces with other organisations.

They work in: 

Denmark, Poland, Lithuania, Belarus, Estonia, Norway, Ukraine, the United Kingdom, Russia, and France

It seems even less justifiable to me to judge an organisation according to US views around racial justice, when they operate in such a different context.

EDIT: This point applies less than I thought. Looks like Connor Jackson, the person in question, is a director of their UK branch, which I'd consider much closer to the US on this topic. 

Launching a new resource: 'Effective Altruism: An Introduction'

Thanks for the clarification. I'm glad that's in there, and I'll feel better about this once the 'Top 10 problem areas' feed exists, but I still feel somewhat dissatisfied. I think that 'some EAs prioritise longtermism, some prioritise neartermism or are neutral. 80K itself prioritises longtermism, and does so in this podcast feed, but doesn't claim to speak on behalf of the movement and will point you elsewhere if you're specifically interested in global health or animal welfare' is a complex and nuanced point. I generally think it's bad to try to make complex and nuanced points in introductory material like this, and expect that most listeners who are actually new to EA wouldn't pick up on that nuance.

I would feel better about this if the outro episode covered the same point; I think it's easier to convey at the end of all this, when listeners have some EA context, rather than at the start.

A concrete scenario to sketch out my concern:

Alice is interested in EA, and somewhat involved. Her friend Bob is interested in learning more, and Alice looks for intro materials. Because 80K is so prominent, Alice comes across 'Effective Altruism: An Introduction' first, and recommends it to Bob. Bob listens to the feed and learns a lot, but because there's so much content and he isn't always paying close attention, he doesn't remember all of it. Bob only has a vague memory of Episode 0 by the end, and leaves with a vague sense that EA is an interesting movement, but one that only cares about weird, abstract things rather than suffering happening today, and concludes that the movement has got a bit too caught up in clever arguments. As a result, Bob decides not to engage further.

Launching a new resource: 'Effective Altruism: An Introduction'

Ah, thanks for the clarification! That makes me feel less strongly about the lack of diversity. I had interpreted it as prioritising ALLFED over global health work as representative of the EA movement, which felt glaringly wrong.

Launching a new resource: 'Effective Altruism: An Introduction'

I strongly second all of this. I think 80K represents quite a lot of EA's public-facing outreach, and it's important either to be explicit that this is longtermism-focused, or to try to be representative of what happens in the movement as a whole. I think this especially holds for something explicitly framed as an introductory resource, since I expect many people get grabbed by global health/animal welfare angles who don't get grabbed by longtermist angles.

Though I do see the countervailing concern that 80K is strongly longtermism-focused, and that it'd be disingenuous for an introduction to 80K to give disproportionate time to neartermist causes, if those are explicitly de-prioritised.

Concerns with ACE's Recent Behavior

Thanks a lot for writing this up and sharing this. I have little context beyond following the story around CARE and reading this post, but based on the information I have, these seem like highly concerning allegations, and ones I would like to see more discussion around. And I think writing up plausible concerns like this clearly is a valuable public service.

Out of all these, I feel most concerned about the aspects that reflect on ACE as an organisation, rather than those that reflect the views of individual ACE employees. If ACE employees didn't feel comfortable going to CARE, I think it was correct for ACE to let them withdraw. But I feel concerned about ACE as an organisation making a public statement against the conference. And I feel incredibly concerned if ACE really did downgrade Anima International's rating as a result.

That said, I feel like I have fairly limited information about all this, and have an existing bias towards your position. I'm sad that a draft of this wasn't run by ACE beforehand, and I'd be keen to hear their perspective. Though, given the content and your desire to remain anonymous, I can imagine it being unusually difficult to hear ACE's thoughts before publishing.

Personally, I consider the epistemic culture of EA to be one of its most valuable aspects, and think it's incredibly important to preserve the focus on truth-seeking, people being free to express weird and controversial ideas, etc. I think this is an important part of EA finding neglected ways to improve the world, identifying and fixing its mistakes, and keeping a focus on effectiveness. To the degree that the allegations in this post are true, and that this represents an overall trend in the movement, I find this extremely concerning, and expect this to majorly harm the movement's ability to improve the world.

Concerns with ACE's Recent Behavior

I interpret it as 'the subgroup of the Effective Altruist movement predominantly focused on animal welfare'.

"Insider giving" - An unfortunate donation strategy used by corporate insiders to avoid losses

Interesting! It's not that obvious to me that this is bad. Eg, if this gets people donating stock rather than donating nothing at all, this feels like a cash transfer from the government to charities?

Of course, WHICH charities receive the stock matters a lot here.

"inflates donation figures."

From the article linked:

And what they find is that "large shareholders’ gifts are suspiciously well timed. Stock prices rise abnormally about 6% during the one-year period before the gift date and they fall abnormally by about 4% during the one year after the gift date, meaning that large shareholders tend to find the perfect day on which to give."

A 4% inflation really doesn't seem that bad? Especially since, as Larks says, charities can sell the stock themselves much sooner than a year after the gift.
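A rough back-of-the-envelope version of that point, with hypothetical figures, plugging in the paper's ~4% one-year post-gift decline and crudely assuming the decline accrues linearly over the year:

```python
# Hypothetical numbers; only the ~4% one-year decline comes from the paper.
gift_value_at_peak = 100_000   # deduction the donor claims at the (timed) peak
decline_over_year = 0.04       # abnormal fall in the year after the gift

# If the charity holds the stock for a full year before selling:
value_if_held = gift_value_at_peak * (1 - decline_over_year)  # 96,000

# If the charity sells within a week (crude linear proration of the decline):
value_if_sold_quickly = gift_value_at_peak * (1 - decline_over_year * 7 / 365)

print(value_if_held)          # 96000.0 -> the "inflation" is only ~4%
print(value_if_sold_quickly)  # ~99923  -> selling quickly recovers almost all of it
```

So even in the worst case the donation figure overstates what the charity receives by only a few percent, and prompt selling shrinks that gap further.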

Some quick notes on "effective altruism"

I also find that a bit cringy. To me, the issue is saying "I have SUCCEEDED at being effective at altruism", which feels like a high bar and somewhat arrogant to explicitly claim.

Long-Term Future Fund: Ask Us Anything!

Do you mean this as distinct from Jonas's suggestion of:

Nah, I think Jonas's suggestion would be a good implementation of what I'm suggesting. Though as part of this, I'd want the LTFF to be less public-facing and obvious - if someone googled 'effective altruism longtermism donate', I'd want them to be pointed to this new fund.

Hmm, I agree that a version of this fund could be implemented pretty easily - eg just make a list of the top 10 longtermist orgs and give 10% to each. My main concern is that it seems easy to do in a fairly disingenuous and manipulative way, if we expect all of its money to just funge against OpenPhil. And I'm not sure how to do it well and ethically.
