I'm a machine learning engineer on a team at PayPal that develops algorithms for personalized donation recommendations (among other things). Before this, I studied computer science at Cornell University. I'm especially interested in s-risks, AI safety, and using ML to solve the world's most pressing problems.
Obligatory disclaimer: My content on the Forum represents my opinions alone and not those of PayPal.
I'm also interested in effective altruism and longtermism broadly. The topics I'm interested in change over time; they include existential risks, climate change, wild animal welfare, alternative proteins, and longtermist global development.
A comment I've written about my EA origin story
Pronouns: she/her
"It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale. Understanding others, the other elements, and the other nations will help you become whole." —Uncle Iroh
I've gotten more involved in EA since last summer. Some EA-related things I've done over the last year:
Although I first heard of EA toward the end of high school (slightly over 4 years ago) and liked it, I had some negative interactions with the EA community early on that pushed me away from it. I spent the next 3 years exploring various social issues outside the EA community, but I had internalized EA's core principles, so I was constantly thinking about how much good I could be doing and which causes were the most important. I eventually became overwhelmed because "doing good" had become a big part of my identity but I cared about too many different issues. A friend recommended that I check out EA again, and despite some trepidation owing to my past experiences, I did. As I got involved in the EA community again, I had an overwhelmingly positive experience. The EAs I was interacting with were kind and open-minded, and they encouraged me to get involved, whereas before, I had encountered people who seemed more abrasive.
Now I'm worried about getting burned out. I check the EA Forum way too often for my own good, and I've been thinking obsessively about cause prioritization and longtermism. I talk about my current uncertainties in this post.
Yes! In chapter 6 of The Precipice, Toby Ord talks about prioritizing risks that are more urgent, a.k.a. "soon, sudden, and sharp":
Outside of x-risks, I've operationalized the "urgency" of problems as something I call stickiness, or the rate at which they are expected to grow or shrink over time:
When it comes to comparing non-longtermist problems from a longtermist perspective, I find it useful to evaluate them based on their "stickiness": the rate at which they will grow or shrink over time.
A problem's stickiness is its annual growth rate. So a problem has positive stickiness if it is growing, and negative stickiness if it is shrinking. For long-term planning, we care about a problem's expected stickiness: the annual rate at which we think it will grow or shrink. Over the long term - i.e. time frames of 50 years or more - we want to focus on problems that we expect to grow over time without our intervention, instead of problems that will go away on their own.
For example, global poverty has negative stickiness because the poverty rate has declined over the last 200 years. I believe its stickiness will continue to be negative, barring a global catastrophe like climate change or World War III.
On the other hand, farm animal suffering has not gone away over time; in fact, it has gotten worse, as a growing number of people around the world are eating meat and dairy. This trend will continue at least until alternative proteins become competitive with animal products. Therefore, farm animal suffering has positive stickiness. (I would expect wild animal suffering to also have positive stickiness due to increased habitat destruction, but I don't know.)
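To make the comparison concrete, here is a minimal sketch in Python (my own illustration, using made-up growth rates and an arbitrary starting scale, not real estimates) of how a constant annual stickiness compounds over a 50-year horizon:

```python
# Toy illustration of "stickiness": a problem's assumed constant annual growth rate.
# The rates and starting scale below are made-up numbers for illustration only.

def project_problem_scale(current_scale: float, stickiness: float, years: int) -> float:
    """Project a problem's scale assuming it grows or shrinks at a constant annual rate."""
    return current_scale * (1 + stickiness) ** years

HORIZON_YEARS = 50

# Hypothetical problems, each starting at the same scale of 100 (arbitrary units).
problems = {
    "shrinking problem (stickiness -2%/yr)": -0.02,
    "growing problem (stickiness +1%/yr)": 0.01,
}

for name, stickiness in problems.items():
    future_scale = project_problem_scale(100, stickiness, HORIZON_YEARS)
    print(f"{name}: scale after {HORIZON_YEARS} years ≈ {future_scale:.0f}")

# Roughly: the shrinking problem falls to ~36 while the growing one rises to ~164,
# which is why, over long time frames, I'd prioritize problems with positive stickiness.
```

Of course, real problems don't grow or shrink at constant rates; the point is just that even a small positive stickiness compounds into a much bigger problem over 50+ years.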
Wow, I love the new theme! 🤩
What's the new font called?
Lobby/consult with private foundations on making effective grants. GiveWell does the hard job of evaluating charities, but a more boutique solution could be useful to private foundations.
This is how Open Philanthropy got started!
"While it is fine to criticize organizations in the EA community for actions that may cause harm, EAs should avoid scrutinizing other community members' personal career choices unless those individuals ask them for feedback" isn't specific to "people who work on AI safety at large AI labs"?
That's true. It applies to a wide range of career decisions that could be considered "harmful" or suboptimal from the point of view of EA, such as choosing to develop ML systems for a mental health startup instead of doing alignment work. (I've actually been told "you should work on AI safety" several times, even after I started my current job working on giving tech.)
Rereading your post, it does make sense now that you were thinking of safety teams at the big labs, but both the title about "selling out" and point #3 about "capabilities people" versus "safety people" made me think you had capabilities work in mind.
Yes! I realize that "capabilities people" was not a good choice of words. It's a shorthand based on phrases I've heard people use at events.
I think it depends a lot on the number of options the person has. Many people in the tech community, especially those from marginalized groups, have told me that they don't have the luxury to avoid jobs they perceive as harmful, such as many jobs in Big Tech and the military. But I think that doesn't apply to the case of someone applying to a capabilities position at OpenAI when they could apply literally anywhere else in the tech industry.
Thank you for explaining your position. Like you, I am concerned that organizations like OpenAI and the capabilities race they've created have robbed us of the precious time we need to figure out how to make AGI safe. However, I think we're talking past each other to an extent: importantly, I said that we mostly shouldn't criticize people for the organizations they work at, not for the roles they play in those organizations.
Most ML engineers have plenty of options for where to work, so choosing to do AI capabilities research when there are so many alternatives outside of AI labs seems morally wrong. (On the other hand, given that AI capabilities teams exist, I'd rather they be staffed by engineers who are concerned about AI safety than engineers who aren't.) However, I think there are many roles that plausibly advance AI safety that you could only do at an AI lab, such as promoting self-regulation in the AI industry. I've also heard arguments that advancing AI safety work sometimes requires advancing AI capabilities first. I think this was more true earlier: GPT-2 taught the AI safety community that they need to focus on aligning large language models. But I am really doubtful that it's true now.
In general, if someone is doing AI safety technical or governance work at an AI lab that is also doing capabilities research, it is fair game to tell them that you think their approach will be ineffective or that they should consider switching to a role at another organization to avoid causing accidental harm. It is not acceptable to tell them that their choice of where to work means they are "AI capabilities people" who aren't serious about AI safety. Given that they are working on AI safety, it is likely that they have already weighed the obvious objections to their career choices.
There is also a risk of miscommunication: in another interaction I had at another EA-adjacent party, I got lambasted after I told someone that I "work on AI". I quickly clarified that I don't work on cutting-edge stuff, but I feel that I shouldn't have had to do this, especially at a casual event.
Thank you for posting this! I've been frustrated with the EA movement's cautiousness around media outreach for a while. I think that the overwhelmingly negative press coverage in recent weeks can be attributed in part to us not doing enough media outreach prior to the FTX collapse. And it was pointed out back in July that the top Google Search result for "longtermism" was a Torres hit piece.
I understand and agree with the view that media outreach should be done by specialists - ideally, people who deeply understand EA and know how to talk to the media. But Will MacAskill and Toby Ord aren't the only people with those qualifications! There's no reason they need to be the public face of all of EA - they represent one faction out of at least three. EA is a general concept that's compatible with a range of moral and empirical worldviews - we should be showcasing that epistemic diversity, and one way to do that is by empowering an ideologically diverse group of public figures and media specialists to speak on the movement's behalf. It would be harder for people to criticize EA as a concept if they knew how broad it was.
Perhaps more EA orgs - like GiveWell, ACE, and FHI - should have their own publicity arms that operate independently of CEA and promote their views to the public, instead of expecting CEA or a handful of public figures like MacAskill to do the heavy lifting.