I am a PhD student in theoretical computer science.
Animal welfare is the best cause area. AI Safety is, depending on your exact definition, either not neglected or pseudoscience babble.
I left the EA community around 2019. Not because my values changed, but because I lost faith in the community's ability to identify and recognize (effective) ways of doing good.
I'm not planning to write anything on this forum anymore unless asked. If you so desire, you can contact me at u/beth-zerowidthspace on Reddit.
James C. Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Recommended reading for anyone who wants to use rational thought to do good, with a number of case studies where it failed miserably, as well as a theory of what went wrong.
Aph Ko and Syl Ko, Aphro-ism: Essays on Pop Culture, Feminism, and Black Veganism from Two Sisters. If directly useful thought is like coding up new features, then building theory is like clearing technical debt. The Ko sisters build some quality theory on veganism and intersectional feminism. Not being used to such texts, I found it a hard book to understand. I've probably listened through it 5 times by now.
Cathy O'Neil, Weapons of Math Destruction. Or really any other book on the ethics of algorithms; there are a number of them out there. If you have a STEM degree, you likely never learned about the very real ethical problems you'll happen upon in your career, which means you may not recognize them as such when you do. My formal education, for example, never got further than "don't lie, don't p-hack", which is really very shallow. The value of improving your knowledge on this topic should be self-evident.