JJ Balisan

Machine Learning
43 karma · Joined Jun 2020 · Working (0-5 years) · London, UK



Hi! I’m JJ. I helped found EA Swarthmore. Originally from London, England, I recently graduated with Honors in Cognitive Science and Math from Swarthmore College, PA. My research interests are focused on NLP, machine learning theory, AI safety, and neural net compression. I am currently working on GPT models at Cohere.ai. I hope to spend the next few years building the EA community and doing AI safety research.


I would probably just tell people to work in another field rather than explicitly encourage them to Goodhart their way toward having a positive impact in an area with extreme variance.

Allowing for selectively shared lists for posts that may be drafts and/or info hazards, similar to how I can share Facebook posts with close friends, etc.

Programs like the OpenAI Residency may be a good idea. You may also want to consider interning somewhere like DeepMind, CHAI, or Cohere. There is also a lot of mentorship in the EleutherAI Discord. We are in a time when highly skilled EA-aligned engineers are very expensive in both time and money, and under shorter timelines it may not make sense for any individual engineer to give up time on a program like this. If something like this didn't exist in 2-3 years' time, I would be very interested in running such a program.

The first safety-related idea that jumps out at me is studying the correlation of gradients between different uses of scaling laws.

Correct: Effective Animal Advocacy. It's probably a bit too much intra-group jargon to use without defining it or at least linking to an explanation. ACE was originally called Effective Animal Activism.

One of the first places you might want to look is the work of David Moss at Rethink Priorities and the EA Survey: https://www.rethinkpriorities.org/publications#easurvey