TLDR: Effective Altruism & 80,000 Hours inspired me to want to help others and rethink how to do that effectively. But my mistakes, largely driven by my insecurities, have limited my impact.
I’ve been through a lot of phases in my life. Some of them are relatively common. I had the “terrible twos.” I was a video games nerd as a teen. I drank a lot in college. I wanted to make it big in tech in my mid-20s. Some of my phases were a little less common. I was obsessed with reality TV when I was in college. I was really into trains as a kid. Maybe that one’s fairly unusual, but I’d imagine many kids love Thomas the Tank Engine.[1]
Now I’m going through a “phase” where a community I found on the internet has changed my life. That sort of already happened during my video games phase when I posted over 7,000 messages on 1up.com.[2] But, that site mainly changed my life by leading me to procrastinate.
This time is different.
Making It In Silicon Valley
On January 31, 2016, I decided to learn web development. I needed to build the MVP of Holla, a marketplace for informational interviews.[3]
By the end of February 2016, Holla had pivoted into a social network for making friends. By the end of March 2016, it was dead. I didn’t know what to do, so I continued learning web development. I figured I would found my startup later.
In June 2016, I found my new idol, Haseeb Qureshi. To me, he was the Michael Jordan of coding bootcamp grads. Out of bootcamp, he got a $250k annual salary from Airbnb. I listened to his appearance on Software Engineering Daily and heard him mention effective altruism (EA). My notes on that podcast define EA as the idea that you make as much money as possible and give it away effectively.
I realized I had no idea what I’d do with my future startup fortune. Giving it away effectively sounded like the right thing to do.
Shortly afterward, I took notes on another podcast with Haseeb. He mentioned EA again.
But I didn’t actually look into effective altruism until October 1, 2017. I’d bookmarked Haseeb’s recommendation for the website of an EA-affiliated organization called 80,000 Hours.
80,000 Hours
My memory was that the 80,000 Hours career guide changed my life immediately. But my old diary summary indicates that I was a little more measured about it.
I was right to be optimistic.
Part 1 of the guide helped me think about what I wanted in a job and how much money affected my happiness.
Part 2 convinced me I should define my success by my counterfactual impact (what would have happened if I hadn’t existed) rather than by what I did.
It also corrected my misperception that EA was only about being effective through donating money.
Part 3 led me to appreciate my good fortune more effectively than any progressive could.
It also helped me appreciate how powerful advocacy could be. It pointed out that if you convince someone to donate money to charity, that’s just as effective as donating the same amount yourself.
Part 4 and Part 5 explained why 80,000 Hours thought the biggest problems in the world were causes I’d never thought about, such as preventing pandemics and AI alignment (aka AI safety).[4] And they introduced me to “meta-causes” such as global priorities research.[5]
My other significant takeaways were more philosophical.
80,000 Hours made me aware of factory farming. I realized I cared about animal suffering. Why should only humans matter? Wouldn’t it be speciesism to feel that way?
It also led me to think about the long-term future. Should I care as much about someone alive in 1000 years as someone alive now? I don’t see why not.
And the career guide’s exercises inspired me to think about what I really wanted to do.
In the end, I stuck with software engineering. While there was some Mark Zuckerberg 2020 presidency hype at the time, I was too scared to switch directions. I’d spent a long time learning software engineering, and I’d started my first job in the field two months earlier. I felt like I had to focus on remaining employed.[6]
EA Global
I reread the 80,000 Hours career guide in December 2017. I decided I was going to be the ultimate effective altruist. But I still didn’t feel like I should switch careers.
I can’t exactly remember what my emotional state was at the time. It was something like, “I need to learn to be an adult before I can change the world.” I convinced myself the best things to do were activities such as learning to cook and iron pants. I found myself googling things such as "What’s the best shampoo?"[7] and "How often should you wash your towels?"[8] One day I helped clean my coliving space for free.[9]
From 80,000 Hours, I learned that an Effective Altruism conference, EA Global, would be held a few blocks away from my house in San Francisco. I applied to volunteer at the conference and I was accepted.
I went into the conference hoping it would set me on the path to changing the world!
That’s not really true. Per my diary summary, I was hoping to find a girlfriend there.
I didn’t realize that many people were coming to this conference from around the world and that the EA community was 70% male.[10]
The first night of the conference was okay. I remember one of the first people I met suggested I go home and listen to every 80,000 Hours podcast episode about AI alignment before returning to the conference.[11] He said at that point, I’d know more about the subject than half the people there. I’d guess he was well-intentioned, but I interpreted it as “You’re useless.” I ignored his advice.
The rest of the conference went better. I met plenty of interesting, like-minded people. I learned about utilitarianism, the repugnant conclusion (which I don’t think is repugnant), and the Fermi paradox for the first time. I didn’t find a girlfriend, but I met my future roommate, James Ingallinera.
Being Effective?
After the conference, I decided I was ready to do something more effective than find the best shampoo. I decided to go vegan.
When I told people I was becoming vegan, many of them followed up with the question, “Why are you going vegan?” I responded with things along the lines of “Factory farming is bad, I watched a VR movie where pigs get slaughtered, and it helps the environment too.” I typically didn’t mention that people I found online who call themselves effective altruists think it’s a good idea.
I received some ridicule for becoming vegan. I was told people are meant to eat animals. A few people acted as if there would be serious consequences to my health.
I was skeptical that veganism was the perfect diet, but I believed I could still live a long, healthy life as a vegan. Yet the comments still affected me. And my stomach didn’t feel good either.
I found this forum post by Gregory Lewis, whom I’d met at EA Global. It claimed that being vegan was equivalent to donating forty-six cents per year to The Humane League. He also claimed he could’ve picked two animal charities even more effective than The Humane League.
I was probably already looking for reasons to quit veganism. That article put me over the top.
My roommate accidentally threw out my food too. So I quit a little faster than I’d planned.
I continued to loosely follow EA, but it didn’t affect my life much. I felt it was too late to change my career at age 27. It would take forever to learn about something like AI alignment on the side.
Plus, maybe I’d end up deciding that it wasn’t worth doing after all, like going vegan.
I applied to go to EA Global SF again in 2019, but I was rejected. I accidentally snuck into the conference during its last day and went to the afterparty.[12] I had fun, but I wasn’t inspired to do anything “effective” or “altruistic.” I got rejected from the 2020 conference too.[13]
Still Making It In Silicon Valley
In 2020, I got the courage to start another startup, Fantasy Sports Poker. It was a mobile game that added fantasy sports onto Texas Hold ’Em. I knew the idea was worth trying. Plenty of people like both poker and fantasy sports. And mobile social (play money) poker is at least a $250 million market. Other people seemed to like the idea too.
Working on the startup was hard. As a solo founder, I had to do everything. It was a slog, but it wasn’t soul-sucking. I was excited to take on every challenge and face all the risks and rewards. I slowly made progress. I pushed myself to solve harder problems than I ever had before. I solved obscure problems related to combining poker and fantasy sports that nobody had solved before! I was determined to get my app done.
It seemed like I had one major obstacle. Constipation. I was scared to go outside because I was afraid I’d have to use the bathroom. I narrowly avoided an unfortunate incident in Dolores Park. I’d guess I was spending at least an hour on the toilet per day.
During those miserable times, I read the EA Forum and LessWrong, the forum of the EA-adjacent rationality community. Now that I’d followed EA for 3 years, I could understand more of the posts on these forums. They gradually became my favorite sites on the web.
I was wrong about my main obstacle. Thankfully, the low FODMAP diet resolved my constipation. Unfortunately, the startup kept on getting harder. In the process of solving one problem, I’d notice there were five new tasks I hadn’t even realized I needed to do. My planned launch dates kept on getting pushed back. Doubts began to pile up. But I persisted.
EA Picnic
I went to the 2021 EA SF picnic with low expectations. I assumed I’d have fun and get nothing else out of it.
I was right that I'd have fun. I heard weird theories, like the idea that humor will destroy the world. I may have accidentally convinced someone to start a company to literally survey animals.
I was wrong that I'd get nothing else out of it.
The picnic was abstractly helpful because it led me to believe that I could do “effective” work. I still didn’t know much about AI Safety or any EA cause area, but I generally understood the gist of what people were saying. I felt like I belonged.
The picnic was also helpful because someone at the afterparty encouraged me to reach out to a game designer I admired, Zvi Mowshowitz. As I was preparing my questions, I started to notice that I’d made a huge incorrect assumption about my game design. Zvi recognized it before I’d even asked him about it. He was direct with me. He didn’t think my game sounded fun.
Eventually, I had to face reality. I wasn’t as naive as I was when I started Holla, but I was still making a lot of mistakes. I think I could’ve recovered from them, but it would’ve been a struggle.
More importantly, I’d lost the motivation to work on Fantasy Sports Poker. I hadn’t played any fantasy sports or mobile games for years. I’d barely played poker. I no longer believed Fantasy Sports Poker was the right project for me.[14]
I started Fantasy Sports Poker because it was the boldest thing I could get myself to do. But “Silicon Valley engineer tries to start a startup” is a story everyone has heard. In a way, it’s a conservative risk.
Letting My Identities Go
When I first read 80,000 Hours, I was too quick to adopt an identity as an effective altruist.
I primarily went vegan because I thought it was the effective altruist thing to do. And when I quit being vegan, I recognized that things are complicated, but I used that as an excuse to stick with the status quo.
I expect EA to continue to inform and inspire me. But I don’t want to be afraid to stand out from the EA community or anyone else. I want to be myself.[15]
Letting My Fear Go
I’m 30 years old now. But I’m only getting older. It’s time to face my fears and fulfill my potential to achieve whatever I want to do.
If my ambitions ever seem silly to people, that’s fine. If I learn that I’ve overestimated my potential, that’s fine too. I’d rather be humbled than have a lingering feeling in the back of my mind that I’m playing things too safe.
That doesn’t mean I want to be too risky either. I doubt I’ll ever feel 100% rational, but I think I can get to the point where I’m comfortable enough with my doubts.
It’s time to find out!
(cross-posted from my blog: https://utilitymonster.substack.com/p/my-effective-altruism-story)
- ^
And the NYC subway map is mesmerizing!
- ^
I would share them now, but they seem to have been deleted.
- ^
- ^
Granted, I probably would’ve decided pandemics were important by the end of March 2020.
- ^
Meta-cause refers to a cause that helps solve a more direct cause or causes. There’s no precise line defining what makes something a cause or a meta-cause.
- ^
80,000 Hours now refers to what I read as their 2017 (i.e., old) career guide. They now recommend newcomers read this introduction to their content and then read their key ideas page. However, I’d guess I wouldn't have been as interested in the new material in 2017. I think I liked that the original guide has the tone of a self-improvement book. The new material quickly gets into comparing global problems.
- ^
I never figured it out.
- ^
I remember I decided I should do it after every 3 uses. But I never actually did that.
- ^
Don’t count on that happening again.
- ^
This still seems to be true.
- ^
That was possible (~10 hours of content) at the time.
- ^
I found out how I got into the conference the next day.
- ^
Due to covid, it became a virtual conference. Anyone was allowed to attend parts of the event.
- ^
I do still think the idea is worth testing. Feel free to reach out to me if you’re interested in working on it.
- ^
When I say “I want” and “be myself,” I’m referring to how the majority of me wants to act as I’m writing right now. There’s always a figurative part of me that feels scared and wants to conform. And I have a lot of inner conflicts.