Hello Effective Altruists,
I am familiar with the ideas of Effective Altruism, having read the 80,000 Hours career guide. I think it is a great guide, and it definitely gave me a new perspective on how I view my career.
A bit of my background:
I have a master's degree in computer science. I am currently working remotely as a machine learning engineer.
Here is a list of the things that I am looking for in my career, ordered from most important to least important:
- Remote work
- High salary
- Impact
Maybe I'm not the paragon of Effective Altruism values, but if I'm being honest, I value remote work and a high salary more than impact. Impact comes in third, but it is still a factor.
Now onto my question:
A few years ago I read Superintelligence and got scared that AGI might make humanity go extinct. I then started focusing on machine learning, and after graduating I ended up as a machine learning engineer, which is the role I'm in currently.
Recently, however, I began questioning whether what I'm doing is the right thing impact-wise. I believe blockchain is a great technology as well (even though we are in a bubble right now). Fundamentally, I think blockchain is going to bring "power to the people", and I think that's great. It has its weaknesses now, sure, but over time I think they'll get ironed out.
Here are my top three reasons why I think I should switch to blockchain:
- Given my strong remote work preference, I don't think I will make much of an impact on anything AI safety-related. I think the main discoveries are being made at companies such as OpenAI and DeepMind, and those roles all require going to the office. Since I don't want to go to the office (my remote work preference ranks above my impact preference), I don't think I will be part of a team that reaches a fundamental breakthrough. With blockchain, on the other hand, most jobs are remote, so I could contribute more.
- I am not 100% convinced that AI safety is an existential risk. There are some indications that it might be (such as this one), but I think it may very well be that worrying about AGI safety (in the sense of an existential risk to all humans) is like worrying that aliens will come and destroy Earth, or something similar. I am not denying the problems with current AI systems; what I am saying is that I don't see a clear path to AGI, and I think there is a lot of hand-waving going on when people talk about AGI safety at this point in time.
- One could argue that I should take machine learning engineering jobs and wait for AI safety-related jobs to become remote; I would then be working on making some AI system safe. Here's the problem with this perspective: I'm not sure when we will reach a point where remote AI safety jobs are available. What if there's no fundamental breakthrough in AI for another 30-40 years, and I keep working remote machine learning jobs unrelated to AI safety to "keep my skills sharp in case they're needed", only to find myself never using them on actual AI safety problems?
Fundamentally, the only reason I'm interested in AI is AGI safety. And right now I'm not sure that AGI safety is a real existential threat, and even if it is, given my remote work preference I will probably have little to no impact on it. Blockchain, on the other hand, is already changing (and will most likely continue to change) the way we use the internet, and it is much more remote-friendly.
What are your 2 cents? I'd like to bounce my perspective off others to see if I'm missing anything in my train of thought.
P.S. I cross-posted this on LessWrong to get more perspectives.
Hey, I'd recommend reading a bit more of the 80k materials: https://80000hours.org/
Or starting here: https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/
Of course you're free to do whatever you want with your career, but the standard EA advice is going to be to follow the 80k recommendations for high-impact careers: https://80000hours.org/career-reviews/