I was doing the 80,000 Hours career guide, but I suppose it's too ambitious for me. I just want to work for an org with a good altruistic mission, not completely maximize my impact. What's the career advice for that? I've been doing full-stack web dev for 11 years now, so I've learned a thing or two about running big, complicated projects, at least in the software world, and I think that transfers, since complexity is complexity.
I looked at the orgs listed in the software dev career path, but they didn't seem very inspiring. I'm open to going back to college, but I wouldn't be sure for what. I hear EA still has an operations bottleneck, but that doesn't seem like something you can study for, and I'm not sure whether transitioning my career toward management (a tricky move, since I haven't really gunned for leadership, though I have been a lead at times) would enable a jump to operations later on.
You know, you don't have to oscillate between the extremes of fundamentalist Christianity and atheism. I find the materialist account of reality doesn't actually make that much sense when you start poking at it, leaving open the possibility of spirituality. Perhaps you would get something out of reading things like the Tao Te Ching, the Bhagavad Gita, and the Dhammapada, to balance out rationalistic atheism.
There is no one posture that has all the answers.
Thank you for this post!
I've been kinda following the 80,000 hours career guide, but I don't think it's what I'm looking for. Ultimately, it clarified that what I really want is to work for an org with a good mission. I'm a software developer, and I'm very motivated, signed the Giving What We Can pledge and everything.
I checked out their profile on software engineering, but surely there are more orgs out there that need software devs, no?
I've also been thinking I could be good at an operations role, but it's very unclear how to get onto that ladder.
The privacy concerns seem more realistic. A rogue superintelligence will have no shortage of ideas, so concern 2 does not seem very important. As for biasing the motivations of the AI: ideally, mechanistic interpretability will get to the point where we can know for a fact what the motivations of any given AI are, so maybe this is not a concern. As for 2a, why are you worried about a pre-superintelligence going rogue? That would be a hell of a fire alarm, since a pre-superintelligence is beatable.
Something you didn't mention, though: how will you be sure the LLM actually did the task you gave it? These things are not that reliable: you'll have to double-check everything for all your use cases, which makes using it kind of moot.
You might want to read this as a counter to AI doomerism: https://www.lesswrong.com/posts/LDRQ5Zfqwi8GjzPYG/counterarguments-to-the-basic-ai-x-risk-case
And this, for a way to contribute to solving this problem without getting into alignment:
And this, for the case that we should stop using neural networks:
I'm looking for statistics on how doable it is to solve all the problems we care about. For example, I came across this from the UN: https://www.un.org/sustainabledevelopment/wp-content/uploads/2018/09/Goal-1.pdf, which says extreme poverty could be sorted out in 20 years for $175 billion a year. That is actually very doable, in light of how much money can go into war (in 1945, the US spent 40% of its GDP on the war). I'm looking for more numbers like that, e.g. how much money it takes to solve X problem.
I intend to use them for a post on how there is no particular reason we can't declare total war on suffering. We can totally organize massively to do great things, and we have done it many times before. We should have a wartime mobilization for the goal of ending suffering.
The core insight of Buddhism, that everything arises and passes away, is helpful here. Low hope is also something that arises and passes away if you let it. There's no need to cling to it. Just let it go and keep plugging away.
If there's anything useful in meditation, it is realizing at a deep level how everything arises and passes away, though I think you need to sit for an hour a day for some time before it really sinks in.
Hi Felix, thanks for the recs! What I mean by giving to charity not being exactly rational is that giving to charity doesn't help the giver in any way. I think it makes more sense to be selfish than charitable, though there is a case where charity that improves one's own community can be reasonable, since an improved community will affect your life.
And sure, one could argue the world is one big community, but I just don't see how the money I give to Africa will help me in any way.
Which is perfectly fine, since I don't think reason has a monopoly on truth. There are such things as moral facts, and morality is in many ways orthogonal to reason. For example, Josef Mengele's problem was not a lack of reason; his was a sickness of the heart, which is a separate faculty that also discerns truth.
Nice to meet you! I'm also a new guy. Good to see you're a witch; I'm a mystic! Is a burn event a copy of Burning Man? I'd definitely like to go to one of those.