I'm a machine learning engineer on a team at PayPal that develops algorithms for personalized donation recommendations (among other things). Before this, I studied computer science at Cornell University. I'm especially interested in s-risks, AI safety, and using ML to solve the world's most pressing problems.
Obligatory disclaimer: My content on the Forum represents my opinions alone and not those of PayPal.
I'm also interested in effective altruism and longtermism broadly. The topics I'm interested in change over time; they include existential risks, climate change, wild animal welfare, alternative proteins, and longtermist global development.
A comment I've written about my EA origin story
"It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale. Understanding others, the other elements, and the other nations will help you become whole." —Uncle Iroh
Thank you for posting this! I've been frustrated with the EA movement's cautiousness around media outreach for a while. I think that the overwhelmingly negative press coverage in recent weeks can be attributed in part to us not doing enough media outreach prior to the FTX collapse. And it was pointed out back in July that the top Google Search result for "longtermism" was a Torres hit piece.
I understand and agree with the view that media outreach should be done by specialists - ideally, people who deeply understand EA and know how to talk to the media. But Will MacAskill and Toby Ord aren't the only people with those qualifications! There's no reason they need to be the public face of all of EA - they represent one faction out of at least three. EA is a general concept that's compatible with a range of moral and empirical worldviews - we should be showcasing that epistemic diversity, and one way to do that is by empowering an ideologically diverse group of public figures and media specialists to speak on the movement's behalf. It would be harder for people to criticize EA as a concept if they knew how broad it was.
Perhaps more EA orgs - like GiveWell, ACE, and FHI - should have their own publicity arms that operate independently of CEA and promote their views to the public, instead of expecting CEA or a handful of public figures like MacAskill to do the heavy lifting.
I've gotten more involved in EA since last summer. Some EA-related things I've done over the last year:
Although I first heard of EA toward the end of high school (slightly over 4 years ago) and liked it, I had some negative interactions with the EA community early on that pushed me away from it. I spent the next 3 years exploring various social issues outside the EA community, but I had internalized EA's core principles, so I was constantly thinking about how much good I could be doing and which causes were the most important. I eventually became overwhelmed because "doing good" had become a big part of my identity but I cared about too many different issues. A friend recommended that I check out EA again, and despite some trepidation owing to my past experiences, I did. As I got involved in the EA community again, I had an overwhelmingly positive experience. The EAs I was interacting with were kind and open-minded, and they encouraged me to get involved, whereas before, I had encountered people who seemed more abrasive.
Now I'm worried about getting burned out. I check the EA Forum way too often for my own good, and I've been thinking obsessively about cause prioritization and longtermism. I talk about my current uncertainties in this post.
This has been discussed regarding intro fellowships:
- EAs should see fellowships as educational activities first and foremost, not just recruitment tools
Compare: in principle, it would be a good thing to farm short-lived happy humans (perhaps for their organs) who would otherwise not get to exist at all. But we find the idea repugnant, and that’s probably also a good thing. It causes us to lose out on some life-saving organs, and the value of the farmed lives themselves; but it may also prevent us from committing worse atrocities against each other.
This part reminded me of The Promised Neverland.
My thoughts: The conclusion that societies should save very large portions of their economic output is extreme, and I think we should be suspicious of it. The model assumes that economic output only depends on capital; more recent models have illuminated the important role of technological progress in driving economic growth. The paper "Optimum Growth When Technology is Changing" (Mirrlees, 1967) proposes a theory of optimal economic growth and savings rates using a model that incorporates technology and human capital; I can't access it, but I would be curious as to what it says. I suspect that the optimal savings rate would be much lower, because investing in innovation and human capital seems like a far more efficient way to promote economic growth than the brute-force approach of pouring large sums of money into capital accumulation.
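To make my suspicion concrete, here's a toy calculation (my own illustration, not the Mirrlees model, which I can't access): in a textbook Solow-style model with capital share alpha, long-run consumption per effective worker is c(s) = (1 - s) * (s / (n + g + d))^(alpha / (1 - alpha)), and the "golden rule" savings rate that maximizes it is simply s = alpha, conventionally estimated around 0.3. All parameter values below are illustrative.

```python
def steady_state_consumption(s, alpha=0.3, n=0.01, g=0.02, d=0.05):
    """Long-run consumption per effective worker at savings rate s.

    alpha: capital share; n: population growth; g: technology growth;
    d: depreciation. Values are illustrative textbook defaults.
    """
    k = (s / (n + g + d)) ** (1 / (1 - alpha))  # steady-state capital per effective worker
    return (1 - s) * k ** alpha                 # consumption = (1 - s) * output

# Search savings rates from 1% to 99% for the consumption-maximizing one.
rates = [i / 100 for i in range(1, 100)]
best = max(rates, key=steady_state_consumption)
print(best)  # 0.3 -- the capital share, far below "very large portions" of output
```

Even this crude model, which ignores discounting and treats technology growth as free, suggests that savings rates well above the capital share reduce long-run consumption rather than raise it.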
I think we separate causes and interventions into "neartermist" and "longtermist" causes too much.
Just as some members of the EA community have complained that AI safety is pigeonholed as a "long-term" risk when it's actually imminent within our lifetimes, I think we've been too quick to dismiss conventionally "neartermist" EA causes and interventions as not valuable from a longtermist perspective. This is the opposite failure mode of surprising and suspicious convergence - instead of assuming (or rationalizing) that the sets of interventions that are promising from neartermist and longtermist perspectives overlap a lot, we tend to assume they don't overlap at all, even though it would also be surprising if the top longtermist causes were all different from the top neartermist ones. If a cause's cost-effectiveness according to neartermism and its cost-effectiveness according to longtermism are independent (or at least somewhat positively correlated), I'd expect at least some causes to be valuable according to both ethical frameworks.
I've noticed this in my own thinking, and I suspect that this is a common pattern among EA decision makers; for example, Open Phil's "Longtermism" and "Global Health and Wellbeing" grantmaking portfolios don't seem to overlap.
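The independence claim above can be sketched with a toy Monte Carlo (the numbers are made up for illustration, not real cause estimates): if each cause draws independent neartermist and longtermist cost-effectiveness scores, a nontrivial fraction still lands near the top on both axes.

```python
import random

random.seed(0)  # for reproducibility of this illustration

# Draw independent "neartermist" and "longtermist" scores for 1,000
# hypothetical causes, uniform on [0, 1].
N = 1000
causes = [(random.random(), random.random()) for _ in range(N)]

# Causes in the top 10% on each axis.
near_top = {i for i, (near, _) in enumerate(causes) if near > 0.9}
long_top = {i for i, (_, long_) in enumerate(causes) if long_ > 0.9}
both = near_top & long_top

# Under independence we expect ~1% of causes (about 10 here) in both sets.
print(len(both))
```

So even with zero correlation, "no overlap at all" is the surprising outcome, not the default one.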
Consider global health and poverty. These are usually considered "neartermist" causes, but we can also tell a just-so story about how global development interventions such as cash transfers might also be valuable from the perspective of longtermism:
Note that I'm not claiming that cash transfers are the most valuable interventions for longtermists. They're probably not, since any trend of sustained economic growth would likely run up against physical limits of the universe within the next few thousand years, and AGI is likely to render all other interventions to promote economic growth moot in the next few decades anyway. Interventions to reduce existential risk would probably have more impact over the long term (although I'm sympathetic to the argument in "Existential risk pessimism and the time of perils").
At least those of us who are 40 and under.
Notwithstanding growth rates. Rich countries like the United States probably sustain higher rates of economic growth and capital accumulation than poor countries because of stronger institutions that encourage investment. I'd like to see an economic model that could tell us which type of growth is more valuable in the long term, but I don't have the training in economics that one would need to create one.
See "This Can't Go On" by Holden Karnofsky for an argument against indefinite sustained growth, and "This Can Go On" by Dwarkesh Patel for a counterargument.
Great post, thanks for sharing these positions! I'm excited to apply.
What information should go on your resume for these roles, particularly the moderator role? Since my day job is software engineering, most of my experience related to content moderation is from stuff I've done on the side or in school.
I can speak for myself: I want AGI, if it is developed, to reflect the best possible values we have currently (i.e. liberal values), and I believe it's likely that an AGI system developed by an organization based in the free world (the US, EU, Taiwan, etc.) would embody better values than one developed by one based in the People's Republic of China. There is a widely held belief in science and technology studies that all technologies have embedded values; the most obvious way values could be embedded in an AI system is through its objective function. It's unclear to me how much these values would differ if the AGI were developed in a free country versus an unfree one, because a lot of the AI systems that the US government uses could also be used for oppressive purposes (and arguably already are used in oppressive ways by the US).
Holden Karnofsky calls this the "competition frame" - in which it matters most who develops AGI. He contrasts this with the "caution frame", which focuses more on whether AGI is developed in a rushed way than whether it is misused. Both frames seem valuable to me, but Holden warns that most people will gravitate toward the competition frame by default and neglect the caution one.
Hope this helps!
Fwiw I do believe that liberal values can be improved on, especially in that they seldom include animals. But the foundation seems correct to me: centering every individual's right to life, liberty, and the pursuit of happiness.