NicholasKross

Software Engineer in Test @ Solu/Paychex

Bio

Software engineer, blogging and editing at /nickai/. PM me your fluid-g-increasing ideas. (Formerly President/VP of the EA Club at RIT (NY, USA)).

How others can help me

Looking for opportunities to do technical and/or governance work in AI alignment/safety.

How I can help others

Can help with some level of technical analysis (and I'm improving!), strategic planning, operations management, social media marketing, graphic design.

Comments
90

I'm a Definooooor! I'm gonna Defiiiiiiine! AAAAAAAAAAAAAAAA

I like circles, though my favorites are (of course) boxes and arrows.

TIL that a field called "argumentation theory" exists, thanks!

Reading this quickly on my lunch break; it seems accurate to most of my core points. Not how I'd phrase them, but maybe that's to be expected(?)

Agreed. IMHO the only legitimate reason to make a list like this is to prep for researching and writing one or more response pieces.

(There's a question of who would actually read those responses, and correspondingly where they'd be published, but that's a key question that all persuasive-media-creators should be answering anyway.)

Yeah, I get that; I mean specifically the weird, risky, hardcore projects. (Hence specifying "adult", since that's both harder and potentially more necessary under e.g. short/medium AI timelines.)

Is any EA group funding adult human intelligence augmentation? It seems broadly useful for lots of cause areas, especially research-bottlenecked ones like AI alignment.

Why hasn't e.g. OpenPhil funded this project? https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing

There's a new chart template that is better than "P(doom)" for most people.

I've long hoped someone would do this thoroughly, thank you.
