Brush up on tax law to offer targeted tax advice - It’s plausible that many EAs have financial situations more complicated than “I make $X/year in salary and I want to donate some % of it,” while having much less money than Good Ventures (so keeping lawyers on retainer is not a live possibility).
- Evidence for: Even somebody whose only job is at BigTech usually has relevant compensation in at least three different buckets (salary/bonuses, stock, 401K matching). I can imagine situations where a one-hour consultation (or reading a 20-minute blog post) is at least as helpful as a 2-3 hour consultation with a tax attorney who lacks the relevant EA context; an EA-savvy advisor might also catch things that some conventional tax attorneys would simply miss.
- Other evidence: complications from startup exits, cryptocurrency, and consulting work
I can imagine that, as the movement grows, the community could support ~2-5 people specializing in US tax law, and maybe part-time specialists in the tax law of other countries (probably not a bad option to pair with earning-to-give, if it turns out your time is only needed for part of the year).
Cognitive Enhancement through genetic engineering - Plausibly very important in general, but I think 1-5 people is a good start. When South Bay EA held a meetup about this, I think we broadly concluded (note: no formal poll was taken; this is just my read of the room) that both of the following statements have a >50% chance of being true:
- In an ideal world, it’s better to have human cognitive enhancement before AGI
- If cognitive enhancement has to happen at all, it’s quite important that it’s done well.
I think it’s plausible (~30% credence; I have not thought about this too deeply) that human cognitive enhancement is comparable in importance to bio x-risk, yet I basically never hear about people going into it for EA reasons, possibly because of the social/political biases of being a mostly Western, center-left movement.
Farmed Animal Genetic Engineering - For a movement that prides itself on jokes about hedonium and rats on heroin, I don’t think I know anybody who works on genetically engineering animals to suffer less. This only matters under the conjunction of: a) near-term AGI doesn’t happen, b) farmed animal suffering matters a lot (in both a moral and an epistemic sense), c) clean/plant-based meat will not see high adoption within a generation, d) it’s technically possible to engineer animals to suffer less in a cost-effective manner, and e) there is enough leeway in the current system to let you do so. Even with that in mind, I still think a nonzero number of AR/AW people should investigate this. For d) in particular, I will personally be very surprised if you can’t engineer chickens to suffer 1% less given approximately the same objective external conditions, and will not be too surprised if you can reduce chicken suffering by 50%.
I think there are obvious biases explaining why animal rights activists go into clean meat rather than engineering animals to feel less pain, so the fact that this career path probably does not currently exist should not be surprising.
Micro-optimizations for user happiness within large tech companies. A large portion of your screen time is spent in interactions crafted by a very small number of companies (FB, Google, Apple, Netflix, etc.). Related to the idea above of targeting animal happiness directly: why aren’t people trying harder to target human happiness directly? A fair number of EAs seem interested in mental health, but all of them are trying to partially cure *major problems*, rather than considering that a .002 sd change in the happiness of a billion people is a ridiculously large prize.
I know exactly one person (working very part-time) on this. I think there’s a decent chance that a single person who knows how to Get Things Done within a large company could convince execs to let them lead a team to investigate this, and also a decent chance that this is doable without substantial technological or cultural changes. These large tech companies already spend hundreds of millions of dollars (if not more) on other ethics initiatives: diversity, fairness, transparency, user privacy, suicide prevention, etc. So it’s not at all crazy to me that somebody could manage upwards with a convincing enough pitch* and launch something like this in at least one tech company.
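The "ridiculously large prize" claim above is just arithmetic, and a quick back-of-envelope sketch makes it concrete. All the numbers here are illustrative assumptions (including the hypothetical 0.5 sd targeted intervention), not measured values, and the model naively assumes happiness effects add linearly across people:

```python
# Back-of-envelope: aggregate effect of a tiny per-user happiness shift,
# measured in person-standard-deviations. Purely illustrative numbers.

def total_effect_person_sd(delta_sd: float, people: float) -> float:
    """Aggregate effect, assuming effects add linearly across people."""
    return delta_sd * people

# Micro-optimization: a .002 sd shift for a billion users.
micro = total_effect_person_sd(0.002, 1e9)

# Hypothetical comparison: a targeted mental-health intervention giving
# a 0.5 sd improvement to 100,000 people (assumed numbers).
targeted = total_effect_person_sd(0.5, 1e5)

print(micro, targeted, micro / targeted)  # 2000000.0 50000.0 40.0
```

Under these toy assumptions, the micro-optimization delivers 40x the aggregate effect of the targeted intervention, which is why even tiny per-user shifts at platform scale look like a large prize.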
Involvement in various militaries - Pretty speculative. I’ve talked to former (American) military members who think it’s not very impactful, but I still think that, prima facie, it would be nice to have very EA-sympathetic people within earshot of those high in the chain of command in, say, the militaries of permanent UN Security Council members, or technologically advanced ones like the IDF.
Content creation/social media marketing. I have some volunteering experience in this, enough to know that it is a non-trivially difficult skill with large quality differences between people who are really good at it and those who are merely average. EA does not currently want to be a mass movement (and probably never will), but assuming that this changes in the next 5-10 years (~15-20%?), I think having 1-5 people who are good at this skill would be nice, and I’d rather we not have to buy our branding on the open market.
*Hypothetical example pitch: "We always say that we respect our users and want them to be happy. But as a data-driven firm, we can't just say this and not follow up with measurable results. Here are some suggested relevant metrics of user happiness (citations 1, 2, 3), and here's the pilot project that increased user happiness in this demographic by .0x standard deviations."
Related news for the suffering-engineering idea (though sadly also relevant to the cognition-engineering one).