Gabriel Mukobi

Organizer @ Stanford Effective Altruism
Pursuing an undergraduate degree
281 · Stanford, CA, USA · Joined Feb 2022

Bio

Organizer at Stanford EA, Stanford AI Alignment, and the Stanford Alt Protein Project. B.S. Computer Science 2018-2023. Most interested in AI safety and animal welfare. gabrielmukobi.com

How others can help me

AI alignment/safety community building: I'm starting a Stanford AI Alignment club out of Stanford Effective Altruism. How should we operate this club and its activities over the next year (to start) in order to do the most good?

General community building: How can I help improve Stanford Effective Altruism? What does community building look like outside of university?

Empirical AI alignment work: What does it look like from a variety of perspectives (maybe not just Redwood and Anthropic)? Does my career plan for skilling-up look solid? Should I try to go to grad school?

Animal welfare: What are the latest promising strategies for farmed animal welfare? What do we do about wild animal welfare?

How I can help others

Can talk about organizing and participating in university groups, particularly Stanford Effective Altruism, Stanford AI Alignment, and the Stanford Alt Protein Project. Generally tied into the Bay Area EA and AI alignment communities. I have been upskilling in machine learning for empirical AI safety work and helping a few peers do the same.

Comments (20)

Ha, thanks Vael! Yeah, that seems hard to standardize, but levels like these could be quite useful for hiring, promotions, and such. Let me know how it goes if you try it!

Thanks! I forgot about cloud computing and have added a couple of courses to the Additional Resources of Level 4: Deep Learning.

Oh lol, I didn't realize that was a famous philosopher until now; someone commented from a Google account with that name! Removed Ludwig.

Thanks for sharing your experiences, too! As for transformers, yeah, it seems pretty plausible that you could specialize in a bunch of traditional deep RL methods and qualify as a good research engineer (i.e. be very employable). That's what several professionals seem to have done, e.g. Daniel Ziegler.

But maybe that's changing, and it's worth starting to learn them now. It seems like most new RL papers incorporate some kind of transformer encoder in the loop, if not being basically a straight-up Decision Transformer.

Thanks, that's a good point! I was very uncertain about that; it was mostly a made-up number. I do think the time to implement an ML paper depends wildly on how complex the paper is (e.g. a new training-algorithm paper takes much more time to test than a post-hoc interpretability paper that uses pre-trained models) and on how much of it you implement (e.g. rewrite the code but don't do any training, vs. evaluate the key result to get the most important graph, vs. try to replicate almost all of the results).

I now think my original estimate of 10-20 hours per paper was probably an underestimate, but it feels really hard to come up with a robust number here and I'm not sure how valuable it would be, so I've removed that parenthetical from the text.

I'll also plug Microsoft Edge as a great tool for this: there's both a desktop browser and a mobile app, and both have a fantastic built-in Read Aloud feature. You just click the Read Aloud icon or press Ctrl/Cmd+Shift+U on a keyboard, and it will start reading your current web page or document out loud!

It has hundreds of neural voices (Microsoft calls them "Natural" voices) in dozens of languages and dialects, and you can change the reading speed too. I find the voices to be among the best I've heard, and the super low activation energy of not having to copy-paste anything or switch to another window means I use it much more often than when I tried apps like Neural Reader.

Sidenote: as a browser, since it's Chromium-based, it's basically the same as Google Chrome (you can even install extensions from the Chrome Web Store) but with slightly less bloat and better performance.

They just added to the title, so it's now "Is Civilization on the Brink of Collapse? And Could We Recover?", but it still seems not to answer the first question.

Thanks for building this; it definitely seems like a way to save a lot of organizer time (and I appreciate how it differentiates things from a Bible group or a cult)!

To me, the main downside seems to be the lack of direct engagement between new people and established EAs. In a normal reading group, a participant meets and talks with a facilitator on day 1 and then every week after, with 1-3 hours of EA-related reading in between. In this system, it seems like they don't really get to meet and talk with someone until they've gone through a significant amount of independent exploration and written a reflection, and I wonder if that high required activation energy, combined with little human-to-human guidance, might cause you to lose some potentially good students as you go from the predicted 40 down to 20.

You could try offering "cheaper" 1:1s to these people early, but that seems less efficient than having several of them in a weekly reading group discussion, which would defeat the point. That's not to say I don't think this is the right move for your situation, just that I'm extra curious about how this factor might play out, and I'm excited for you to test this system and share the results with other groups!
