Machine Learning Engineer @ PayPal
Working (0-5 years experience)


I'm a machine learning engineer on a team at PayPal that develops algorithms for personalized donation recommendations (among other things). Before this, I studied computer science at Cornell University. I also manage the Effective Public Interest Computing Slack (join here).

Obligatory disclaimer: My content on the Forum represents my opinions alone and not those of PayPal.

I also offer copyediting and formatting services to members of the EA community for $15-35 per page, depending on the client's ability to pay. DM me for details.

I'm also broadly interested in effective altruism and longtermism. The topics I focus on change over time; they include existential risks, climate change, wild animal welfare, alternative proteins, and longtermist global development.

A comment I've written about my EA origin story

Pronouns: she/her, ella, 她, 彼女


"It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale. Understanding others, the other elements, and the other nations will help you become whole." —Uncle Iroh


EA Public Interest Tech - Career Reviews
Longtermist Theory
Democracy & EA
How we promoted EA at a large tech company
EA Survey 2018 Series
EA Survey 2019 Series


Topic Contributions

I've gotten more involved in EA since last summer. Some EA-related things I've done over the last year:

  • Attended the virtual EA Global (I didn't register, just watched it live on YouTube)
  • Read The Precipice
  • Participated in two EA mentorship programs
  • Joined Covid Watch, an organization developing an app to slow the spread of COVID-19. I'm especially involved in setting up a subteam trying to reduce global catastrophic biological risks.
  • Started posting on the EA Forum
  • Ran a birthday fundraiser for the Against Malaria Foundation. This year, I'm running another one for the Nuclear Threat Initiative.

Although I first heard of EA toward the end of high school (slightly over 4 years ago) and liked it, I had some negative interactions with the EA community early on that pushed me away. I spent the next 3 years exploring various social issues outside the EA community, but I had internalized EA's core principles, so I was constantly thinking about how much good I could be doing and which causes were the most important. I eventually became overwhelmed: "doing good" had become a big part of my identity, but I cared about too many different issues. A friend recommended that I check out EA again, and despite some trepidation owing to my past experiences, I did. As I got involved in the EA community again, my experience was overwhelmingly positive. The EAs I interacted with were kind and open-minded, and they encouraged me to get involved, whereas before I had encountered people who seemed more abrasive.

Now I'm worried about getting burned out. I check the EA Forum way too often for my own good, and I've been thinking obsessively about cause prioritization and longtermism. I talk about my current uncertainties in this post.

Will the IP rights to submitted stories and any output derived from them be owned by CEA or the people who submit them? Under US and UK law, a transfer of copyright to CEA needs to be in writing, so this would have to be explicitly stated on the form.

Hey Aron, thanks for your post!

we should weight suffering today much more highly than suffering in the future, because only we can do something about it.

This can be framed in terms of both the importance, tractability, and neglectedness (ITN) framework and the significance, persistence, and contingency (SPC) framework.

Using the ITN framework, you might argue that suffering that occurs in the present is more neglected than suffering in the distant future because fewer people will ever be in a position to address it (only the present generation). By contrast, everyone from now until a given point in the future will be in a position to address suffering that occurs at that point in time. It is also more tractable because we can address it directly, whereas we can only address suffering in the future indirectly, e.g. by empowering future generations to address it when it occurs. (These considerations weigh against each other, though.)

Using the SPC framework, you might argue that suffering in the distant future is not very contingent on our actions in the present because people in the future will be able to address it regardless of what we do now.

These points are not fatal to longtermism, though. The idea that future people will be better positioned to address future problems is the basis of patient longtermism, "the view that individuals can have a greater positive impact by investing current altruistic resources and spending them later than by spending them now."

I like this idea. But why not set up welfare science, an interdisciplinary field including welfare economics, welfare biology, and positive psychology?

I was worried sick; I legit thought his plane had crashed on the way to EAG DC.

We should have at least one dedicated "megathread" for EAG-related questions each year, so it's easier to ask such questions in public without creating dedicated posts for each of them.

Point taken, thanks! My original comment was mainly addressed at the fact that OP used "Phil", and I was not aware that Émile still uses "he" pronouns because their Twitter bio only says "they". I think the correction – using "Émile" while noting that Émile was "formerly known as Phil" to help others identify him – is satisfactory.

Please do not misgender Émile Torres. They may be a persona non grata in this community, but they still deserve to be called by their preferred name and pronouns like anyone else.

I think it depends on the countries that gain power. If South Africa or the EU becomes a great power I'd be less worried because they have liberal values. But the most likely candidates for great powers are China and India, and China is an outright dictatorship while India is slipping into one.
