BrownHairedEevee

Machine Learning Engineer @ PayPal
Working (0-5 years experience)

Bio

I'm a machine learning engineer on a team at PayPal that develops algorithms for personalized donation recommendations (among other things). Before this, I studied computer science at Cornell University. I also manage the Effective Public Interest Computing Slack (join here).

Obligatory disclaimer: My content on the Forum represents my opinions alone and not those of PayPal.

I also offer copyediting and formatting services to members of the EA community for $15-35 per page, depending on the client's ability to pay. DM me for details.

I'm also interested in effective altruism and longtermism broadly. The topics I'm interested in change over time; they include existential risks, climate change, wild animal welfare, alternative proteins, and longtermist global development.

A comment I've written about my EA origin story

Pronouns: she/her, ella, 她, 彼女

"It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale. Understanding others, the other elements, and the other nations will help you become whole." —Uncle Iroh

Sequences (6)

EA Public Interest Tech - Career Reviews
Longtermist Theory
Democracy & EA
How we promoted EA at a large tech company
EA Survey 2018 Series
EA Survey 2019 Series

Comments (609)

Topic Contributions (104)

Thank you for posting this! I've been frustrated with the EA movement's cautiousness around media outreach for a while. I think that the overwhelmingly negative press coverage in recent weeks can be attributed in part to us not doing enough media outreach prior to the FTX collapse. And it was pointed out back in July that the top Google Search result for "longtermism" was a Torres hit piece.

I understand and agree with the view that media outreach should be done by specialists - ideally, people who deeply understand EA and know how to talk to the media. But Will MacAskill and Toby Ord aren't the only people with those qualifications! There's no reason they need to be the public face of all of EA - they represent one faction out of at least three. EA is a general concept that's compatible with a range of moral and empirical worldviews - we should be showcasing that epistemic diversity, and one way to do that is by empowering an ideologically diverse group of public figures and media specialists to speak on the movement's behalf. It would be harder for people to criticize EA as a concept if they knew how broad it was.

Perhaps more EA orgs - like GiveWell, ACE, and FHI - should have their own publicity arms that operate independently of CEA and promote their views to the public, instead of expecting CEA or a handful of public figures like MacAskill to do the heavy lifting.

I've gotten more involved in EA since last summer. Some EA-related things I've done over the last year:

  • Attended the virtual EA Global (I didn't register, just watched it live on YouTube)
  • Read The Precipice
  • Participated in two EA mentorship programs
  • Joined Covid Watch, an organization developing an app to slow the spread of COVID-19. I'm especially involved in setting up a subteam trying to reduce global catastrophic biological risks.
  • Started posting on the EA Forum
  • Ran a birthday fundraiser for the Against Malaria Foundation. This year, I'm running another one for the Nuclear Threat Initiative.

Although I first heard of EA toward the end of high school (slightly over 4 years ago) and liked it, I had some negative interactions with the EA community early on that pushed me away from it. I spent the next 3 years exploring various social issues outside the EA community, but I had internalized EA's core principles, so I was constantly thinking about how much good I could be doing and which causes were the most important. I eventually became overwhelmed because "doing good" had become a big part of my identity but I cared about too many different issues. A friend recommended that I check out EA again, and despite some trepidation owing to my past experiences, I did. As I got involved in the EA community again, I had an overwhelmingly positive experience. The EAs I was interacting with were kind and open-minded, and they encouraged me to get involved, whereas before, I had encountered people who seemed more abrasive.

Now I'm worried about getting burned out. I check the EA Forum way too often for my own good, and I've been thinking obsessively about cause prioritization and longtermism. I talk about my current uncertainties in this post.

If it's true that longtermism is much more controversial than focusing on x-risks as a cause area (which can be justified according to mainstream cost-benefit analysis, as you said), then maybe we should have stuck to promoting mass market books like The Precipice instead of WWOTF! The Precipice has a chapter explicitly arguing that multiple ethical perspectives support reducing x-risk.

Can we put this page in the sidebar?

I noticed that adding a tag to a post in draft mode now automatically adds the parent tag. But it's not clear to the user why two tags are being added at once. This also contributes to the overtagging of posts.

On Wikipedia, the guideline is to tag pages with the most specific categories they belong to. So if category B is a child of category A, then pages that belong to both A and B should only be tagged with B, whereas pages in A \ B should only be tagged with A.

In general, I think the EA Forum should be more thoughtful about tags. If we want to replicate what Wikipedia does, one possible approach is to automatically remove a parent tag from a post when a user adds a child tag to that post. However, this messes with the voting mechanism of tags. A less disruptive approach is to hide parent tags by default when both the child and parent are added to a post (or hide the child tags if the parent is a white tag), and then allow the user to expand the full list of tags.
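To make the rule concrete, here's a minimal sketch in Python of how "most specific tag wins" pruning could work when a child tag is added. The PARENT hierarchy and tag names are made-up examples, not the Forum's actual data model.

```python
# Minimal sketch (not the Forum's actual data model) of the
# "most specific tag wins" rule: when a child tag is added,
# any of its ancestors already on the post are pruned.
PARENT = {
    "Climate engineering": "Climate change",
    "Climate change": "Existential risk",
}

def ancestors(tag):
    """Walk up the hierarchy, yielding every ancestor of `tag`."""
    while tag in PARENT:
        tag = PARENT[tag]
        yield tag

def add_tag(post_tags, new_tag):
    """Return the post's tags with `new_tag` added and its ancestors removed."""
    pruned = set(ancestors(new_tag))
    return [t for t in post_tags if t not in pruned] + [new_tag]

# A post already tagged "Existential risk" gets the more specific tag;
# the parent is dropped so only "Climate engineering" remains.
print(add_tag(["Existential risk"], "Climate engineering"))
```

The hide-by-default alternative would instead keep both tags in the data and only change what's displayed, which avoids disturbing tag votes.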

Exactly. For example, by looking at vulnerabilities in addition to hazards like AGI and engineered pandemics, we might find a vulnerability that is more pressing to work on than AI risk.

That said, the EA x-risk community has discussed vulnerabilities before: Bostrom's paper "The Vulnerable World Hypothesis" proposes the semi-anarchic default condition as a societal vulnerability to a broad class of hazards.

There are two orgs that recommend effective charities for climate change in general:

Founders Pledge focuses on the "triple challenge" of climate change, air pollution, and energy poverty. If you're interested in donating to address both climate change and energy poverty, I recommend giving to the FP Climate Change Fund or FP's recommended climate charities. These include CATF, which Karthik recommended, as well as other organizations like TerraPraxis and Future Cleantech Architects.

I have thought of it but it wasn't a priority for me at the time.

Gitcoin has retired their original grants platform, but they're replacing it with a new decentralized grants protocol that anyone can use, which will launch in early Q2 2023. I'd like to wait until then to use it.

Thanks for clarifying. Yes, I think EA should (and already does, to some extent) give practical advice to people who prioritize the interests of their own community. Since many normies do prioritize their own communities, doing this could help them get a foot in the door of the EA movement. But I would hope that they eventually come to appreciate cosmopolitanism.

As for traditionalism, it depends on the traditional norm or institution. For example, I wouldn't be comfortable with someone claiming to represent the EA movement advising donors on how to "do homophobia better" or reinforce traditional sexual norms more effectively, as I think these norms are bad for freedom, equality, and well-being. At a minimum, the views we accommodate probably shouldn't run counter to the core values that animate utilitarianism.
