Co-president of Stanford EA and Stanford AI Alignment; previously an organizer at the Stanford Alt Protein Project. B.S. Computer Science, 2018-2023. Most interested in AI safety and animal welfare.
AI alignment/safety community building: I'm starting a Stanford AI Alignment club out of Stanford Effective Altruism. How should we operate this club and its activities over the next year (to start) in order to do the most good?
General community building: How can I help improve Stanford Effective Altruism? What does community building look like outside of university?
Empirical AI alignment work: What does it look like from a variety of perspectives (maybe not just Redwood and Anthropic)? Does my career plan for skilling-up look solid? Should I try to go to grad school?
Animal welfare: What are the latest promising strategies for farmed animal welfare? What do we do about wild animal welfare?
Can talk about organizing and participating in university groups, particularly Stanford Effective Altruism, Stanford AI Alignment, and the Stanford Alt Protein Project. Generally tied into the Bay Area EA and AI alignment communities. Have been upskilling in machine learning for empirical AI safety work, and helping a few peers do the same.
Ha thanks Vael! Yeah, that seems hard to standardize but potentially quite useful to use levels like these for hiring, promotions, and such. Let me know how it goes if you try it!
Thanks! Forgot about cloud computing, added a couple of courses to the Additional Resources of Level 4: Deep Learning.
Oh lol, I didn't realize that was a famous philosopher until now; someone commented from a Google account with that name! Removed Ludwig.
Sure!
Good find, added!
Thanks for sharing your experiences, too! As for transformers, yeah it seems pretty plausible that you could specialize in a bunch of traditional Deep RL methods and qualify as a good research engineer (e.g. very employable). That's what several professionals seem to have done, e.g. Daniel Ziegler.
But maybe that's changing, and it's worth starting to learn transformers now. It seems like most new RL papers incorporate some kind of transformer encoder in the loop, if not outright being a Decision Transformer.
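For intuition on what that looks like, here's a minimal sketch of the Decision Transformer idea: RL reframed as sequence modeling over (return-to-go, state, action) tokens. All the names, dimensions, and the tiny PyTorch encoder below are my own illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn as nn

class TinyDecisionTransformer(nn.Module):
    """Toy Decision Transformer: predict actions autoregressively from
    interleaved (return-to-go, state, action) token sequences."""

    def __init__(self, state_dim=4, act_dim=2, embed_dim=64,
                 n_layers=2, n_heads=4, max_len=20):
        super().__init__()
        self.embed_rtg = nn.Linear(1, embed_dim)          # return-to-go token
        self.embed_state = nn.Linear(state_dim, embed_dim)
        self.embed_action = nn.Linear(act_dim, embed_dim)
        self.pos = nn.Embedding(3 * max_len, embed_dim)   # one slot per token
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(embed_dim, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        # Interleave tokens as (r_1, s_1, a_1, r_2, s_2, a_2, ...)
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states),
             self.embed_action(actions)],
            dim=2,
        ).reshape(B, 3 * T, -1)
        tokens = tokens + self.pos(torch.arange(3 * T, device=tokens.device))
        # Causal mask so each token only attends to earlier tokens
        mask = nn.Transformer.generate_square_subsequent_mask(3 * T).to(tokens.device)
        h = self.encoder(tokens, mask=mask)
        return self.predict_action(h[:, 1::3])  # predict a_t from each s_t token

# Usage: predict actions for a batch of 8 five-step trajectories
model = TinyDecisionTransformer()
rtg = torch.randn(8, 5, 1)
states, actions = torch.randn(8, 5, 4), torch.randn(8, 5, 2)
print(model(rtg, states, actions).shape)  # torch.Size([8, 5, 2])
```

The key design move is that there's no value function or policy gradient anywhere: you condition on the return you want and let a causal transformer imitate trajectories that achieved it.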
Thanks, that's a good point! I was very uncertain about that; it was mostly a made-up number. I do think the time to implement an ML paper depends wildly on how complex the paper is (e.g. a new training-algorithm paper takes much longer to test than a post-hoc interpretability paper that uses pre-trained models) and on how much you implement (e.g. rewriting the code without doing any training, vs. evaluating the key result to reproduce the most important graph, vs. trying to replicate almost all of the results).
I now think my original 10-20 hours per paper number was probably an underestimate, but it feels really hard to come up with a robust estimate here and I'm not sure how valuable it would be, so I've removed that parenthetical from the text.
I'll also plug Microsoft Edge as a great tool for this: There's both a desktop browser and a mobile app, and it has a fantastic built-in Read Aloud feature that works in both. You just click the Read Aloud icon or press Ctrl/Cmd+Shift+U on a keyboard and it will start reading your current web page or document out loud!
It has hundreds of neural voices (Microsoft calls them "Natural" voices) in dozens of languages and dialects, and you can change the reading speed too. I find the voices to be among the best I've heard, and the super low activation energy of not having to copy-paste anything or switch to another window means I use it much more often than when I tried apps like Neural Reader.
Sidenote, but as a browser, since it's Chromium-based it's basically the same as Google Chrome (you can even install extensions from the Chrome Web Store) but with slightly less bloat and better performance.
They just added to the title, so it's now "Is Civilization on the Brink of Collapse? And Could We Recover?", but it still doesn't seem to answer the first question.
Thanks for this post! I appreciate the transparency, and I'm sorry for all this suckiness.
Could one additional easy-ish structural change be making EAGx applications due even earlier? I feel like the EA community has a bad tendency of keeping applications open until very soon before the event itself, and an earlier due date would give people more time to figure out if they're going and create more buffer before catering-number deadlines. Of course, this costs some extra organizer effort since you have to plan further ahead, but I expect that's more of a shift in timing than a whole lot of extra work.