Roughly a decade ago, I spent a year in a developing country working on a project to promote human rights. We had a rotating team of about a dozen (mostly) brilliant local employees, all college-educated, working alongside us. We invested a lot of time and money into training these employees, with the expectation that they, as members of the college-educated elite, would help lead human rights reform in the country long after our project wound down. I recently got nostalgic and looked up my old colleagues. Every single one is living in the West now. A few are still somewhat involved in human rights, but most are notably under-employed (a lawyer washing dishes in a restaurant in Virginia, for example).
I'm torn on this. I'm sure my former colleagues are happier on an individual level. Their human rights are certainly better respected in the West, and the salaries are better. But the potential good that they could have done in their home country is (probably) substantially higher. On my way out, I signed letters of recommendation for each employee, which I later found out were used to pad visa applications. (I am perhaps feeling a bit of guilt over contributing to a developing country's "brain drain" as a result.) After I left, there was a blowup between two of the Western employees over whether to continue supporting emigration. The TL;DR of the disagreement was "It's the nice thing to do, and refusing to support emigration could reduce morale and our ability to hire go-getters" versus "We can't have lasting impact if our ringers keep leaving."
I'm curious what other EAs have seen in their orgs. Does any kind of organizational policy exist for matters like this?
When you start talking about Silicon Valley in particular, you get confounders like AI, which has a high chance of killing everyone. But if we condition on that going well, or assume the relevant people won't be working on it, then yes, that does seem like a useful activity. Note, though, that Silicon Valley activities are not very neglected, and you can certainly do better than them by pushing EA money (not necessarily people[1]) into research areas that are more prone to market failures or are otherwise too "weird" for others to believe in.
On the former, vaccine development & distribution and gene drives are obvious examples that come to mind; both have a commons problem. For the latter, intelligence enhancement.
Why not people? I think EA has a very bad track record of extreme groupthink, caused by a severe lack of intellectual diversity & humility. This is obviously not very good when you're trying to increase the productivity of a field or research endeavor. ↩︎