Software engineer, blogging and editing at /nickai/. PM me your fluid-g-increasing ideas. (Formerly President/VP of the EA Club at RIT (NY, USA)).
Looking for opportunities to do technical and/or governance work in AI alignment/safety.
Can help with some level of technical analysis (and I'm improving!), strategic planning, operations management, social media marketing, and graphic design.
Much cheaper, though still hokey, ideas that you should have already thought of at some point:
Maybe! I'm most interested in math because of its utility for AI alignment, and because math (especially advanced math) is notoriously considered "hard" or "impenetrable" by many people (even people who otherwise consider themselves smart/competent). Part of that is probably a lack of good math intuitions (grokking-by-playing-with-the-concept, math-is-about-abstract-objects, law-thinking, etc.).
Yeah, we'd hope there's a good bit of existing pedagogy that applies to this. Not much stood out to me, but maybe I haven't looked hard enough at the field.
We ought to have a new word, besides "steelmanning", for "I think this idea is bad, but it made me think of another, much stronger idea that sounds similar, and I want to focus on that second idea now and ignore the first one (and probably whoever was advocating it)".
This post/cause seems sorely underrated; e.g., is there an existing org someone can donate to for mass case detection? It has such a high potential lives-saved-per-$1,000!
OK, thanks! Also, after more consideration and object-level thinking about the questions, I will probably write a good bit of prose anyway.
I've long hoped someone would do this thoroughly; thank you.